Configuring the Kafka Sink (Inbound) Connector

Overview

This page describes how to configure streaming from Kafka to an Aerospike database.

The Aerospike Kafka sink (inbound) connector reads records from Apache Kafka and writes them to an Aerospike database.

Configure streaming

To configure streaming from Kafka to Aerospike, configure the Kafka sink connector to transform Kafka records into Aerospike records. Store the configuration as aerospike-kafka-inbound.yml or aerospike-kafka-inbound.json in the /etc/ directory of your Kafka installation on each Kafka Connect node. You can also pass the configuration as a JSON-formatted object; see Standalone mode for more information.
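As an illustration, a JSON-formatted configuration follows the same schema as the YAML example shown later on this page. The sketch below is a minimal aerospike-kafka-inbound.json; the seed address, cluster name, topic name, set name, and field names are placeholders for your own values:

{
  "aerospike": {
    "seeds": [
      { "192.168.50.1": { "port": 3000 } }
    ],
    "cluster-name": "east"
  },
  "topics": {
    "users": {
      "mapping": {
        "namespace": { "mode": "static", "value": "users" },
        "set": { "mode": "static", "value": "profiles" },
        "key-field": { "source": "key" },
        "bins": {
          "type": "multi-bins",
          "map": {
            "name": { "source": "value-field", "field-name": "firstName" }
          }
        }
      }
    }
  }
}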

The configuration has the following options:
| Option | Required | Default | Description |
|--------|----------|---------|-------------|
| max-queued-records | no | 32768 | Maximum number of records queued in the connector. The queue can grow beyond this size before topics are paused. All topics resume after the queue size drops below half of the maximum. |
| processing-threads | no | Available processors | Number of threads used to process Kafka records and convert them to Aerospike records. |
| aerospike | yes | | Connection properties the connector uses to connect to your Aerospike database. |
| topics | yes | | The Kafka topics the connector listens to and the transformations applied to produce Aerospike records. |
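For instance, the two optional tuning settings sit at the top level of the configuration file. A minimal sketch (the values here are arbitrary, not recommendations):

max-queued-records: 16384
processing-threads: 4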

Here is a complete example:

max-queued-records: 10000

aerospike:
  seeds:
    - 192.168.50.1:
        port: 3000
        tls-name: red
    - 192.168.50.2
  cluster-name: east

topics:
  users:
    invalid-record: ignore
    mapping:
      namespace:
        mode: static
        value: users
      set:
        mode: dynamic
        source: value-field
        field-name: city
      key-field:
        source: key
      ttl:
        mode: dynamic
        source: value-field
        field-name: ttl
      bins:
        type: multi-bins
        map:
          name:
            source: value-field
            field-name: firstName
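To see what this mapping produces, consider a hypothetical record arriving on the users topic with Kafka key user123 and the following JSON value (the names and values are illustrative only):

{
  "firstName": "Ada",
  "city": "London",
  "ttl": 3600
}

With the configuration above, the connector would write an Aerospike record to namespace users (static), set London (read dynamically from the city field of the value), under user key user123 (taken from the Kafka record key), with a TTL of 3600 seconds read from the ttl field and a single bin name holding the value Ada.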