This article details how the United Manufacturing Hub is structured and how its microservices interact with each other. For an overview of how you can deploy the United Manufacturing Hub, make sure to check out our Deployment options.
The goal of the United Manufacturing Hub is to act as a link between several IT and OT tools, allowing for easy extraction, processing, storage and visualization of machine data, while also allowing for simple deployment, scaling and management with Kubernetes. Given enough information, the stack can then calculate Key Performance Indicators (KPIs), such as the Overall Equipment Effectiveness (OEE), for each asset.
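To make the KPI idea concrete: OEE is conventionally the product of availability, performance and quality. A minimal sketch (the shift numbers are made up for illustration):

```python
def oee(availability: float, performance: float, quality: float) -> float:
    """OEE is conventionally the product of availability, performance and
    quality, each expressed as a fraction between 0 and 1."""
    return availability * performance * quality

# Example shift: the machine ran 400 of 480 planned minutes (availability),
# produced at 90% of its ideal rate (performance), with 95% good parts (quality).
print(oee(400 / 480, 0.90, 0.95))  # roughly 0.71, i.e. 71% OEE
```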
The United Manufacturing Hub is a modular system, with various microservices as building blocks that interact with each other and with open-source third-party applications such as Node-RED and Grafana. This modularity enables flexible use and development: additional functionality can easily be added as further microservices, allowing us to create comprehensive solutions for various challenges of the industry.
At the core of the United Manufacturing Hub, we use two message brokers, MQTT and Apache Kafka. Both operate on a publish/subscribe (pub/sub) pattern, which ensures that each service receives exactly the data it needs: data processing services receive fresh shop floor data, and the TimescaleDB database receives the processed data to store it.
We chose MQTT to receive data from the shop floor because it excels at handling large numbers of unreliable connections; it was originally designed to collect telemetry data from sensors along oil pipelines.
Kafka, on the other hand, is used for communication between the microservices because of its large-scale data processing capabilities; it was designed at LinkedIn to handle high-throughput incoming data streams.
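The pub/sub pattern both brokers share can be sketched with a toy in-memory broker. This is purely illustrative — it is not how MQTT or Kafka work internally — but it shows why multiple services can consume the same data independently:

```python
from collections import defaultdict

class Broker:
    """Toy publish/subscribe broker: topics map to lists of subscriber callbacks."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Every service subscribed to the topic receives the message.
        for callback in self.subscribers[topic]:
            callback(message)

broker = Broker()
received = []
broker.subscribe("processValue", received.append)  # e.g. a data processing service
broker.subscribe("processValue", received.append)  # e.g. a database writer
broker.publish("processValue", {"value": 42})
print(received)  # both subscribers got the same message
```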
We use a TimescaleDB database because it excels at handling both relational and time-series data. To learn more about our decision to use TimescaleDB, please read this article.
Our default out-of-the-box data processing option is Node-RED, although other tools such as Benthos can be used instead. We chose Node-RED because its large community ensures that packages exist to solve almost every shop floor edge case.
In the following, we will go through all layers of the United Manufacturing Hub and provide examples.
Broadly speaking, the United Manufacturing Hub consists of three layers: data acquisition, data processing and data visualization. In the following chapters, we will discuss the purpose of each layer before going into detail about which microservices are involved and how they interact.
Data acquisition entails managing the connections of the edge device to the data sources, such as external sensors (e.g. light barriers or vibration sensors), input devices (e.g. button bars), Auto-ID technologies (e.g. barcode scanners), industrial cameras and other data sources, such as machine PLCs. The data acquisition layer also reads out the data from these sources to provide it to the data processing services.
Current examples of data acquisition:
sensorconnect, which automatically reads out IO-Link masters and their connected IO-Link sensors, like the light bridge sensor in the basic installation guide
barcodereader, which connects to USB barcode reader devices and pushes the data to the message broker
Node-RED, which can handle proprietary or machine specific protocols
MQTT Simulator, which generates constant MQTT messages based on its configuration for data input for development and testing
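To make the simulator's role concrete, here is a sketch of generating such a message in Python. The topic layout and payload fields are assumptions for illustration only, not the simulator's exact output:

```python
import json
import random
import time

def simulated_message(asset, value_name):
    """Build a (topic, payload) pair resembling what an MQTT simulator might
    publish. The topic layout ia/<customer>/<location>/<asset>/processValue
    is an assumption for illustration, not the simulator's configuration."""
    topic = "ia/factoryinsight/aachen/{}/processValue".format(asset)
    payload = json.dumps({
        "timestamp_ms": int(time.time() * 1000),
        value_name: round(random.uniform(20.0, 25.0), 2),  # e.g. a temperature
    })
    return topic, payload

topic, payload = simulated_message("warping", "temperature")
print(topic)
print(payload)
```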
This layer is the central part of the United Manufacturing Hub. It provides an infrastructure, including data models, to fulfill all manufacturing needs for data processing and storage. All acquired data is made accessible in real time for processing, using either established solutions like Node-RED or our microservices. This makes adding new data, processing it and integrating it with other systems on the edge very easy. We recommend starting to transform data into the central data model at this step.
The backbone of this layer is the message brokers we utilize. Both an MQTT and an Apache Kafka broker are used so that their respective strengths are exploited and their weaknesses avoided. Kafka serves as the message broker between the respective microservices and feeds the data into the TimescaleDB database. However, its reliance on a stable connection makes it unsuitable for connecting to the sensors on the shop floor outside the stack. MQTT, on the other hand, is very proficient at managing unreliable network connections, which is why MQTT is employed to collect the data and insert it into the data processing pipeline, while Kafka is used for all other communication.
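Since Kafka topic names cannot contain slashes, a bridge between the two brokers has to translate topic names. A minimal sketch of such a mapping (the exact scheme a bridge uses may differ):

```python
def mqtt_to_kafka_topic(mqtt_topic):
    """Map an MQTT topic hierarchy onto a Kafka-legal topic name.
    Kafka topic names cannot contain '/', so the levels are joined with
    dots here; this scheme is illustrative."""
    return mqtt_topic.replace("/", ".")

print(mqtt_to_kafka_topic("ia/factoryinsight/aachen/warping/count"))
# ia.factoryinsight.aachen.warping.count
```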
The raw or processed data can be sent to a central place (either cloud or on-premise) with our self-written microservices, such as kafka-bridge, which can connect the local Kafka broker to a central one. Since internet connections and networks in general are often unstable in manufacturing environments, messages must be buffered so that they safely get across even through internet and electricity outages. Existing services were unsatisfactory, so we developed our own solution for bridging Kafka brokers to other Kafka brokers and to MQTT ones.
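The buffering idea can be sketched as a simple store-and-forward queue. This is a toy model, assuming a send function that reports success or failure, not how kafka-bridge is actually implemented:

```python
from collections import deque

class BufferedSender:
    """Store-and-forward: keep messages queued until the remote broker accepts them."""
    def __init__(self, send):
        self.send = send      # callable returning True on successful delivery
        self.buffer = deque()

    def publish(self, message):
        self.buffer.append(message)
        self.flush()

    def flush(self):
        # Drain the buffer in order; stop at the first failure and retry later.
        while self.buffer:
            if not self.send(self.buffer[0]):
                return
            self.buffer.popleft()

# Simulate an outage: the first two delivery attempts fail, then the link recovers.
attempts = []
def flaky_send(msg):
    attempts.append(msg)
    return len(attempts) > 2

sender = BufferedSender(flaky_send)
sender.publish("a")  # delivery fails, message stays buffered
sender.publish("b")  # still down, both messages stay buffered
sender.flush()       # link is back: "a" and "b" are delivered in order
print(list(sender.buffer))  # []
```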
Once the data reaches the desired server, it is further processed with the same methods as on the edge device. For development, we integrated kowl into the stack to read out Kafka messages in real time, which makes debugging and observing even easier. Through the load-balanced microservices, one can achieve high availability and enormous scalability.
Relational data, such as data about orders and products, as well as high-resolution time-series data, such as temperature and pressure, can be stored in the TimescaleDB database. We do not recommend direct access to the database for security reasons. Instead, we have added another self-written component called factoryinsight. It provides a REST API to access raw data from the database as well as processed data in the form of KPIs, such as OEE losses.
All requests are load-balanced, cached and executed only on a replica of the database.
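As an illustration of consuming such an API, the sketch below extracts a column from a columns-plus-rows payload. The response shape and field names here are assumptions for illustration, not factoryinsight's documented format:

```python
# A KPI response in a common columns-plus-rows shape; the field names
# ("columnNames", "datapoints") are assumptions for illustration.
response = {
    "columnNames": ["timestamp", "oee"],
    "datapoints": [[1700000000000, 0.71], [1700003600000, 0.74]],
}

def column(response, name):
    """Extract one named column from a columns-plus-rows payload."""
    index = response["columnNames"].index(name)
    return [row[index] for row in response["datapoints"]]

print(column(response, "oee"))  # [0.71, 0.74]
```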
To read about data processing in more detail, specifically the brokering, check out our blog post on the topic.
Current examples of data processing:
TimescaleDB, which is the database and as such stores the relational and time-series data for further use. See also the database model
Node-RED, which is the tool to customize data processing and produce processed output from the raw input
factoryinsight, which reads out the TimescaleDB database for further visualization of the data
kowl, which reads out Kafka messages for debugging and observation during development
kafka-bridge, which connects a local Kafka broker to a remote Kafka broker to centralize data processing
mqtt-kafka-bridge, which connects MQTT and Kafka brokers with each other for maximum flexibility
kafka-to-postgresql, which feeds Kafka message data into the TimescaleDB database
Further examples, which are either still in development or no longer supported:
mqtt-to-postgresql, which feeds MQTT message data into the TimescaleDB database
kafka-to-blob, which would feed blob data and data models into the minio database
minio, which is the database for blob storage to store images
blob-insight, which provides the blob storage data via REST API
mqttbridge, which connects different MQTT brokers
factoryinput, which allows for data insertion via REST API
As a standard dashboarding tool, the United Manufacturing Hub utilizes Grafana in combination with self-written plugins, allowing users to quickly and easily compose tailor-made dashboards from modular building blocks.
Current examples of data visualization:
Grafana, which manages the dashboard for visualization
umh-datasource, which integrates the data from factoryinsight into Grafana
Further examples, which are either still in development or currently not supported:
dashboard-template, which provides a wide range of pre-defined dashboards to choose from
factoryinput-panel, which allows entering data into the REST API endpoint through the Grafana dashboard
The entire stack can be deployed using only a single configuration file (values.yaml) and the corresponding Helm chart. This allows the architecture to be deployed in hybrid setups, from edge-device IIoT gateways to on-premise servers or even the cloud (e.g. Azure AKS).
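The keys below are purely hypothetical — which components can be toggled, and under which names, is defined by the Helm chart of your United Manufacturing Hub version, so consult the chart's own values.yaml rather than copying this fragment:

```yaml
# Hypothetical values.yaml overrides; the key names are assumptions for illustration.
grafana:
  enabled: true        # deploy the visualization layer
barcodereader:
  enabled: false       # skip acquisition services you do not need
```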
Let’s take data from a light bridge sensor as an example. As shown in the image above, the microservice sensorconnect reads the raw data from the IO-Link gateway, converts it into a Kafka message and sends it to the Kafka broker. The Kafka broker can then pass it to Node-RED, if you have a Node-RED workflow that subscribes to the corresponding topic. There you can emit new messages with processed data, such as how often items have passed the light bridge in the last couple of seconds.
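The windowed count described above is the kind of aggregation such a Node-RED flow would perform. It is sketched here in Python for clarity; Node-RED flows themselves are built from JavaScript function nodes:

```python
def count_recent(event_times_ms, now_ms, window_ms=10_000):
    """Count light-bridge trigger events within the last window_ms milliseconds,
    the kind of aggregation a Node-RED flow could perform on incoming messages."""
    return sum(1 for t in event_times_ms if now_ms - t <= window_ms)

# Timestamps (ms) of items passing the sensor; the first one falls outside
# the 10-second window ending at t = 13,000 ms.
events = [1_000, 4_000, 9_500, 12_000]
print(count_recent(events, now_ms=13_000, window_ms=10_000))  # 3
```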
The Kafka broker can then either send the data directly into the TimescaleDB database or forward it to a central Kafka broker, which handles it in the same way.
When you want to visualize the data, factoryinsight reads out the database and serves the data to the umh-datasource plugin, which integrates it into Grafana, where you can then build fancy dashboards with it.