
Java: Avro custom decoding of a UUID on the Kafka consumer side


I have written a class that custom-encodes objects of type UUID to bytes for transmission over Kafka with Avro.

To use this class, I placed an @AvroEncode(using = UUIDAsBytesEncoding.class) annotation above the uuid field of the target object. (This is implemented by the Apache Avro reflect library.)
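
For reference, here is a minimal sketch of what that annotated target object might look like; the Request class and its field names are assumed from the registry schema shown further down, they are not taken verbatim from the original code:

import java.util.UUID;

import org.apache.avro.reflect.AvroEncode;

// Hypothetical target object; field names follow the registry schema below.
public class Request {
    private String password;
    private String email;

    // Tells the Avro reflect library to use the custom encoder for this field.
    @AvroEncode(using = UUIDAsBytesEncoding.class)
    private UUID id;
}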

I'm having a hard time figuring out how to get the consumer to use my custom decoder automatically. (Or do I have to decode manually?)

Here is my UUIDAsBytesEncoding class, which extends CustomEncoding:

public class UUIDAsBytesEncoding extends CustomEncoding<UUID> {

    public UUIDAsBytesEncoding() {
        List<Schema> union = Arrays.asList(Schema.create(Schema.Type.NULL), Schema.create(Schema.Type.BYTES));
        union.get(1).addProp("CustomEncoding", "UUIDAsBytesEncoding");

        schema = Schema.createUnion(union);
    }

    @Override
    protected void write(Object datum, Encoder out) throws IOException {
        if(datum != null) {
            // encode the position of the data in the union
            out.writeLong(1);

            // convert uuid to bytes
            byte[] bytes = new byte[16];
            Conversion.uuidToByteArray(((UUID) datum),bytes,0,16);

            // encode length of data
            out.writeLong(16);

            // write the data
            out.writeBytes(bytes);
        } else {
            // position of null in union
            out.writeLong(0);
        }
    }

    @Override
    protected UUID read(Object reuse, Decoder in) throws IOException {
        System.out.println("READING");
        Long size = in.readLong();
        Long leastSig = in.readLong();
        Long mostSig = in.readLong();
        return new UUID(mostSig, leastSig);
    }
}

The write method and the encoding work fine, but the read method is never called during deserialization. How would I implement this in the consumer?

The schema in the registry looks like this:

{
  "type": "record",
  "name": "Request",
  "namespace": "xxxxxxx.xxx.xxx",
  "fields": [
    {"name": "password", "type": "string"},
    {"name": "email", "type": "string"},
    {"name": "id", "type": ["null", {"type": "bytes", "CustomEncoding": "UUIDAsBytesEncoding"}], "default": null}
  ]
}

If the consumer can't automatically use that information to apply the UUIDAsBytesEncoding read method, how would I locate the data marked with that label in the consumer?

I'm also using the Confluent Schema Registry.

Any help would be greatly appreciated!

Solution:

I eventually found a solution. The encoding was incorrect: the built-in writeBytes() method already writes the length for you automatically.

Then, in the consumer, we have to run the record through a GenericDatumWriter, write it to a binary stream, and read that binary stream back with a ReflectDatumReader. This automatically calls the UUIDAsBytesEncoding read() method and deserializes the UUID.

My consumer looks like this (as part of the consumer group executor service walkthrough here):

/**
 * Start a single consumer instance
 * This will use the schema built into the IndexedRecord to decode and create key/value for the message
 */
public void run() {
    ConsumerIterator it = this.stream.iterator();
    while (it.hasNext()) {
        MessageAndMetadata messageAndMetadata = it.next();
        try {
            String key = (String) messageAndMetadata.key();
            IndexedRecord value = (IndexedRecord) messageAndMetadata.message();

            ByteArrayOutputStream bytes = new ByteArrayOutputStream();

            GenericDatumWriter<Object> genericRecordWriter = new GenericDatumWriter<>(value.getSchema());
            genericRecordWriter.write(value, EncoderFactory.get().directBinaryEncoder(bytes, null));

            ReflectDatumReader<T> reflectDatumReader = new ReflectDatumReader<>(value.getSchema());
            T newObject = reflectDatumReader.read(null, DecoderFactory.get().binaryDecoder(bytes.toByteArray(), null));
            IOUtils.closeQuietly(bytes);

            System.out.println("************CONSUMED:  " + key + ": "+ newObject);

        } catch(SerializationException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    System.out.println("Shutting down Thread: " + this.threadNumber);
}
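
Not shown in the original answer: one possible way to wire up the stream that run() iterates over, sketched here under the assumption that the old high-level consumer API is used together with Confluent's KafkaAvroDecoder; the topic name, group id, and addresses are placeholders.

import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import io.confluent.kafka.serializers.KafkaAvroDecoder;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.serializer.StringDecoder;
import kafka.utils.VerifiableProperties;

public class RequestStreamSetup {

    // Opens a single KafkaStream whose values are decoded into Avro records
    // (IndexedRecord) by Confluent's KafkaAvroDecoder; all names are placeholders.
    public static KafkaStream<String, Object> openStream(String topic) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181");          // placeholder
        props.put("group.id", "request-consumer-group");           // placeholder
        props.put("schema.registry.url", "http://localhost:8081"); // placeholder

        VerifiableProperties vProps = new VerifiableProperties(props);
        StringDecoder keyDecoder = new StringDecoder(vProps);
        KafkaAvroDecoder valueDecoder = new KafkaAvroDecoder(vProps);

        ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        Map<String, List<KafkaStream<String, Object>>> streams =
                connector.createMessageStreams(Collections.singletonMap(topic, 1), keyDecoder, valueDecoder);
        return streams.get(topic).get(0);
    }
}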

The new UUIDAsBytesEncoding then looks like this:

public class UUIDAsBytesEncoding extends CustomEncoding<UUID> {

    public UUIDAsBytesEncoding() {
        List<Schema> union = Arrays.asList(Schema.create(Schema.Type.NULL), Schema.create(Schema.Type.BYTES));
        union.get(1).addProp("CustomEncoding", "UUIDAsBytesEncoding");

        schema = Schema.createUnion(union);
    }

    @Override
    protected void write(Object datum, Encoder out) throws IOException {
        if(datum != null) {
            // encode the position of the data in the union
            out.writeLong(1);

            // convert uuid to bytes
            byte[] bytes = new byte[16];
            Conversion.uuidToByteArray(((UUID) datum), bytes, 0, 16);

            // write the data
            out.writeBytes(bytes);
        } else {
            // position of null in union
            out.writeLong(0);
        }
    }

    @Override
    protected UUID read(Object reuse, Decoder in) throws IOException {
        // get index in union
        int index = in.readIndex();
        if (index == 1) {
            // read in 16 bytes of data
            ByteBuffer b = ByteBuffer.allocate(16);
            in.readBytes(b);

            // convert
            UUID uuid = Conversion.byteArrayToUuid(b.array(), 0);

            return uuid;
        } else {
            // no uuid present
            return null;
        }
    }
}

This also serves as an example of how to implement an Avro CustomEncoding class. The current version of Avro has no built-in UUID serializer, so this is a way to work around that.
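
As a quick sanity check that both custom methods get invoked, here is a minimal round-trip sketch (not from the original post); it assumes the hypothetical Request class shown near the top of the question:

import java.io.ByteArrayOutputStream;
import java.io.IOException;

import org.apache.avro.Schema;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.io.EncoderFactory;
import org.apache.avro.reflect.ReflectData;
import org.apache.avro.reflect.ReflectDatumReader;
import org.apache.avro.reflect.ReflectDatumWriter;

public class UuidEncodingRoundTrip {

    // Serializes and immediately deserializes a Request, exercising both
    // UUIDAsBytesEncoding.write() and UUIDAsBytesEncoding.read().
    static Request roundTrip(Request original) throws IOException {
        // ReflectData picks up the @AvroEncode annotation when building the schema.
        Schema schema = ReflectData.get().getSchema(Request.class);

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ReflectDatumWriter<Request> writer = new ReflectDatumWriter<>(schema);
        BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
        writer.write(original, encoder);   // the custom write() runs here
        encoder.flush();

        ReflectDatumReader<Request> reader = new ReflectDatumReader<>(schema);
        return reader.read(null,
                DecoderFactory.get().binaryDecoder(out.toByteArray(), null)); // the custom read() runs here
    }
}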

Tags: consumer, apache-kafka, uuid, java, avro
Source: https://codeday.me/bug/20191028/1949029.html