Data Flows

Data Flows are a type of software built with the beeta.builder and structured in a block-based manner. A Data Flow is made up of several blocks, with a minimum of one of each kind:

  1. Input Blocks: These are the data sources for the flow, e.g. sensors or structured data.

  2. Processing Blocks: These blocks manipulate the data and pass it on to the next block.

  3. Output Blocks: These blocks send the data from the edge device to a specified endpoint, such as a cloud database, dashboard, or messaging service.

The input blocks determine the data source(s), the processing blocks define how the data is processed, enhanced, or manipulated, and the output blocks identify the data destination(s).
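As an illustration, the sketch below shows how a three-block Data Flow might be described. The type and field names (`Block`, `DataFlow`, `connections`, the image tags, and so on) are assumptions made for this example, not the actual beeta schema.

```typescript
// Hypothetical sketch of a three-block Data Flow; the type and field
// names below are illustrative assumptions, not the actual beeta schema.
type BlockKind = "input" | "processing" | "output";

interface Block {
  id: string;
  kind: BlockKind;
  image: string;                    // container image implementing the block
  config: Record<string, unknown>;  // block-specific configuration
}

interface DataFlow {
  name: string;
  blocks: Block[];
  // Directed connections: data moves from one block to the next.
  connections: { from: string; to: string }[];
}

const temperaturePipeline: DataFlow = {
  name: "temperature-pipeline",
  blocks: [
    { id: "sensor", kind: "input", image: "blocks/temperature-sensor:1.0", config: { intervalMs: 1000 } },
    { id: "average", kind: "processing", image: "blocks/moving-average:1.0", config: { windowSize: 10 } },
    { id: "cloud", kind: "output", image: "blocks/mqtt-publisher:1.0", config: { topic: "factory/temperature" } },
  ],
  connections: [
    { from: "sensor", to: "average" },
    { from: "average", to: "cloud" },
  ],
};
```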

Before a Data Flow can be deployed, it must be set up and configured to match the platform requirements, the type of input data, and the capacity of the hardware executing the flow.

Each block adheres to a communication specification for receiving and forwarding data. This specification defines the interface between blocks, and the blocks are standardised so that the beeta.agent can establish and maintain communication between them.
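A minimal sketch of what such a standardised block interface could look like is shown below. The names (`Envelope`, `DataFlowBlock`, `receive`, `onOutput`) are assumptions for illustration; the actual beeta specification may differ.

```typescript
// Hypothetical sketch of a standardised block interface; the names
// here are illustrative assumptions, not the actual beeta specification.
interface Envelope {
  source: string;     // id of the block that produced the data
  timestamp: number;  // Unix epoch in milliseconds
  payload: unknown;   // the actual data record
}

interface DataFlowBlock {
  // Called by the runtime when an upstream block forwards data.
  receive(message: Envelope): Promise<void>;
  // Registers the callback used to forward data downstream.
  onOutput(forward: (message: Envelope) => void): void;
}
```

Because every block exposes the same interface, the agent can wire any output of one block to the input of another without knowing what the block does internally.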

In the beeta.builder, a Data Flow is designed using a drag-and-drop interface, with the links between blocks represented as connection lines.

To deploy a Data Flow, the beeta.agent receives a message from the beeta.builder describing the Data Flow. The Agent downloads any required container images for the blocks composing the application and stores them on the host storage. The Agent then runs each block and establishes all the connectivity required. Similarly, the Agent can stop or delete Data Flows.
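The deployment steps could be sketched as follows, reusing the hypothetical `DataFlow` type from the earlier example. Every helper function here (`pullImage`, `startContainer`, `connectBlocks`) is an illustrative stub, not the actual beeta.agent API.

```typescript
// Hypothetical sketch of the agent-side deployment steps; every
// helper below is an illustrative stub, not the actual beeta.agent API.
async function pullImage(image: string): Promise<void> {
  console.log(`pulling ${image} onto host storage`);
}
async function startContainer(id: string, image: string, config: unknown): Promise<void> {
  console.log(`starting block ${id} from ${image}`);
}
async function connectBlocks(from: string, to: string): Promise<void> {
  console.log(`connecting ${from} -> ${to}`);
}

async function deployDataFlow(flow: DataFlow): Promise<void> {
  // 1. Download the container image for every block.
  for (const block of flow.blocks) await pullImage(block.image);
  // 2. Run a container for each block.
  for (const block of flow.blocks) await startContainer(block.id, block.image, block.config);
  // 3. Establish the connectivity between the running blocks.
  for (const { from, to } of flow.connections) await connectBlocks(from, to);
}
```

Stopping or deleting a Data Flow would follow the same pattern in reverse: tear down the connections, then stop and remove the block containers.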
