kafka-to-postgresql

The technical documentation of the kafka-to-postgresql microservice, which consumes messages from a Kafka topic and writes them to a PostgreSQL database.

This microservice is responsible for taking Kafka messages and inserting their payloads into a PostgreSQL database, based on the topic of the Kafka message and on the United Manufacturing Hub data model. By default, it sets up two Kafka consumers: one for high throughput and one for high integrity.
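The split between the two consumers is driven by the message's topic. A minimal sketch of that routing, assuming a hypothetical `consumerClass` helper and a suffix check (the exact matching rule used by the service may differ):

```go
package main

import (
	"fmt"
	"strings"
)

// consumerClass routes a topic to one of the two consumers described above:
// processValue/processValueString topics go to the high-throughput consumer,
// all other topics to the high-integrity one. The suffix check is an
// illustrative assumption, not the service's exact rule.
func consumerClass(topic string) string {
	if strings.HasSuffix(topic, "processValue") || strings.HasSuffix(topic, "processValueString") {
		return "high-throughput"
	}
	return "high-integrity"
}

func main() {
	// Hypothetical topic names for illustration only.
	fmt.Println(consumerClass("ia.testcustomer.testlocation.testasset.processValue"))
	fmt.Println(consumerClass("ia.testcustomer.testlocation.testasset.count"))
}
```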

💡 This microservice requires that the Kafka topic umh.v1.kafka.newTopic exists. In versions above 0.9.12, this topic is created automatically.

High throughput

This Kafka listener is usually configured to listen on the processValue topics.

High integrity

This Kafka listener is usually configured to listen on all other topics.

Environment variables

The following environment variables can be used to configure the behavior of this microservice:

| Variable name | Description | Type | Possible values | Example value |
| --- | --- | --- | --- | --- |
| DRY_RUN | If set to true, the microservice will not write to the database | bool | true, false | true |
| KAFKA_BOOTSTRAP_SERVER | URL of the Kafka broker used; the port is required | string | all | localhost:9092 |
| KAFKA_SSL_KEY_PASSWORD | Key password to decode the SSL private key | string | any | changeme |
| LOGGING_LEVEL | Defines which logging level is used, mostly relevant for developers. If the logging level is not DEVELOPMENT, default logging is used | string | any | DEVELOPMENT |
| MEMORY_REQUEST | Memory request for the message cache | string | any | 128Mi |
| POSTGRES_DATABASE | The name of the PostgreSQL database | string | any | umh |
| POSTGRES_HOST | Hostname of the PostgreSQL database | string | any | localhost |
| POSTGRES_PASSWORD | The password to use for PostgreSQL connections | string | any | changeme |
| POSTGRES_SSLMODE | Controls whether the PostgreSQL connection uses SSL | string | any | disable |
| POSTGRES_USER | The username to use for PostgreSQL connections | string | any | postgres |
| PVS_CHANNEL_SIZE | The size of the channel used to buffer messages for the high throughput Kafka processValueString listener. When the buffer is full, it is written to the database | int | any | 1000 |
| PVS_WRITE_TO_DB_INTERVAL | The interval in which the high throughput Kafka processValueString listener writes to the database. This prevents high latency | int | any | 1000 |
| PV_CHANNEL_SIZE | The size of the channel used to buffer messages for the high throughput Kafka processValue listener. When the buffer is full, it is written to the database | int | any | 1000 |
| PV_WRITE_TO_DB_INTERVAL | The interval in which the high throughput Kafka processValue listener writes to the database. This prevents high latency | int | any | 1000 |
| SERIAL_NUMBER | Serial number of the cluster (used for tracing) | string | all | development |
| MICROSERVICE_NAME | Name of the microservice (used for tracing) | string | all | barcodereader |
| DEBUG_ENABLE_FGTRACE | Enables the use of the fgtrace library; do not enable in production | string | true, 1, any | 1 |

Program flow

The graphic below shows the program flow of the microservice.

*(Figure: kafka-to-postgresql program flow)*

Data flow

High Integrity

The graphic below shows the flow of an example high-integrity message.

*(Figure: high-integrity data flow)*

Last modified February 17, 2023: update (#208) (ea731fc)