This project continues our series on using ROCK boards in an industrial context, showcasing innovative IIoT solutions that can integrate into existing industrial automation, smart agriculture and building management infrastructure, or form the building blocks for new solutions.
We add database capabilities, using InfluxDB and Telegraf, to the ROCK 4SE MQTT server that we built in a previous project. This enables capturing and storing sensor data and making it available across the network to other clients.
We also introduce a new network node based on the ROCK 3A with a Raspberry Pi 7-inch Touch Screen display using Grafana to construct useful and attractive dashboards for analysing and presenting sensor data to users.
Difficulty: Advanced | Time: 6 Hrs | Steps: 16 | Credits: None | License: Various |
Parts Needed
Part | Description | RS Stock Number |
---|---|---|
ROCK 3A | Okdo ROCK 3 Model A 2GB Single Board Computer | (256-3910) |
7 inch LCD Touch Screen | Raspberry Pi LCD Touch Screen with 7in Capacitive Touch Screen | (899-7466) |
QC Power Supply | Okdo 36 W PD + QC Multihead PSU Plug In Power Supply 5→20V dc Output, 1.75→3A Output | (243-6356) |
USB-C Cable | Deltaco USB 2.0 Cable USB C to USB C Cable, 2M | (276-7734) |
SD Card | Sandisk 32 GB MicroSDHC Micro SD Card | (283-6581) |
Screen Case | Raspberry Pi 4 Touchscreen Display Case | |
Host PC | Linux / Mac / Windows Host computer | |
Internet / Router | Internet connection | |
Step 1: System Overview
There are two system components to this project.
First is the MQTT Server hosting a Mosquitto MQTT service, an InfluxDB database and a Telegraf plug-in, all running in separate system processes on a ROCK 4SE board. This component will collect data from sensors using MQTT protocol and store the readings in the database. The database will be accessible over the network.
The second component is the Display Client which will host a Grafana instance to create Dashboards for displaying the stored sensor data on a Touch Screen. A ROCK 3A board connected to a Raspberry Pi 7-inch Touch Screen will perform this role.
Using this approach it would be possible to have several displays all linked to the same central database, depending on requirements.
Because there are a number of software components to set up, each step has a test to confirm that it was completed successfully, so by the end, you should have a working system.
Step 2: Display Hardware
You could use either the ROCK 3A or ROCK 3C board for this project, depending on your network connectivity requirements. The ROCK 3A gives better performance, as it is available with 4GB RAM and runs at a higher clock speed than the ROCK 3C.
I tested using a ROCK 3A with 2GB RAM. It is based on the Rockchip RK3568 SoC, a 64-bit, quad-core Armv8.2-A Cortex-A55 CPU running at 2.0GHz.
Both boards have a small form factor, Gigabit network throughput and more than enough CPU and memory resources for this application, and both support SD card or eMMC storage.
Both the ROCK 3A and ROCK 3C fit nicely into the Raspberry Pi 7-inch Touchscreen Display case with only light modification, making it easy to deploy them.
We showed how to add touch screens to the ROCK boards in this guide: How to add a Raspberry Pi 7-inch display to Radxa ROCK Single Board Computers
Test that you can run the Xfce desktop on the display, connect the ROCK to your network via Ethernet, and check that you have SSH access using your favourite terminal emulator. On Linux I use Terminal; on Windows you can use CMD (you don't need PuTTY any more):
ssh radxa@rock-3a
Step 3: MQTT Server
In order to collect data from sensors connected to the network via MQTT protocol we need an MQTT Server. We showed how to build the Mosquitto MQTT server on a ROCK 4SE in this project: Radxa ROCK Secure MQTT Edge Gateway
Here’s the ROCK 4SE server connected to the network via Ethernet:
Configure the MQTT Server to accept anonymous connections from Localhost in /etc/mosquitto/conf.d/broker.conf like this:
sudo vi /etc/mosquitto/conf.d/broker.conf
With these contents:
# Enable settings by listener
per_listener_settings true
# Allow anonymous access on port 1883
listener 1883 localhost
allow_anonymous true
Restart the Mosquitto service and check it is available:
sudo systemctl restart mosquitto.service
sudo systemctl status mosquitto.service
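If you already have the mosquitto-clients package installed (we install it in the next step anyway), one quick way to check anonymous local access is to read one of the broker's built-in $SYS topics - this should print the broker version and exit:
mosquitto_sub -h localhost -p 1883 -t '$SYS/broker/version' -C 1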
Step 4: Mock Sensors
To make testing easier I wrote a bash script that mocks temperature sensor data and sends it over MQTT. Of course, you could use real sensors as well.
On the ROCK 4SE create the script below named random-temp.sh. It generates two random temperatures and publishes them in CSV format on separate topics every 10 seconds:
vi random-temp.sh
Bash script…
#!/bin/bash
# Publish random temperature values between 0 & 19.9 over MQTT
while true; do
a=0; b=20
t1=$((a+RANDOM%(b-a))).$((RANDOM%10)); echo "T1: $t1"
t2=$((a+RANDOM%(b-a))).$((RANDOM%10)); echo "T2: $t2"
mosquitto_pub -h localhost -t test/t1 -m "$t1"
mosquitto_pub -h localhost -t test/t2 -m "$t2"
sleep 10
done
Install the mosquitto-clients package on the ROCK 4SE, make the script executable then run it - the formatted output is echoed to stdout:
sudo apt install mosquitto-clients
chmod +x random-temp.sh
./random-temp.sh
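To confirm the readings are actually reaching the broker, you can subscribe to the test topics in a second terminal session on the ROCK 4SE (press Ctrl+C to stop):
mosquitto_sub -h localhost -t 'test/#' -v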
Step 5: InfluxDB Installation
Now that our MQTT server is up and running with test data being published we can turn our attention to setting up the InfluxDB database. This will also be installed on the ROCK 4SE and it will run as a separate system process.
High-quality documentation about how to do this is available on Influxdata.com here: https://docs.influxdata.com/influxdb/v2/install/
Open a Terminal session on the ROCK 4SE and get the Linux package for Ubuntu/Debian ARM64, install it, then start it up and check that it is running:
Note: The current InfluxDB 2 version is 2.7.x - this will change, so adjust the commands below to match the latest version
cd ~/
curl -LO https://download.influxdata.com/influxdb/releases/influxdb2_2.7.10-1_arm64.deb
sudo dpkg -i influxdb2_2.7.10-1_arm64.deb
sudo systemctl start influxdb.service
systemctl status influxdb.service
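As an extra check, the InfluxDB HTTP API should now be listening on its default port 8086 - its health endpoint returns a short JSON status:
curl -s http://localhost:8086/health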
Step 6: InfluxDB CLI
InfluxDB CLI is a separate package containing all the tools required to administer the database and users from the command line. We will use it to set up users for our different applications and to administer our databases.
Again there is excellent documentation here: https://docs.influxdata.com/influxdb/v2/tools/influx-cli/?t=Linux
Install it on the ROCK 4SE from a Terminal session. Download the CLI client for Arm64, extract the binary and copy the client to the local bin directory:
cd ~/
wget https://download.influxdata.com/influxdb/releases/influxdb2-client-2.7.5-linux-arm64.tar.gz
mkdir influxdb-client
tar xvzf influxdb2-client-2.7.5-linux-arm64.tar.gz -C influxdb-client
sudo cp influxdb-client/influx /usr/local/bin/
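A quick sanity check that the client is on the path and runs - it should print the CLI version:
influx version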
Now InfluxDB can be initialised. This is done by creating an admin user, passing in parameters for the password, the organisation name (defined by you to reflect a high-level grouping of databases) and a primary bucket name. In InfluxDB jargon, a bucket is where you store a particular type of data; an organisation can contain many buckets:
influx setup \
--username 'influx-admin' \
--password '123456-xyz' \
--org 'okdo-projects' \
--bucket 'sensors' \
--force
This generates an API Token for the admin user stored in ~/.influxdbv2/configs and makes this token available for that user when running the CLI client.
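You can verify the setup by listing the stored CLI configuration and the buckets that were created - you should see the sensors bucket:
influx config list
influx bucket list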
While we are creating InfluxDB users, we should create separate users for the Telegraf and Grafana applications which we will install shortly. Each of these users will have a slightly different set of database privileges according to their role.
Create a user to manage the Telegraf agent. This user will have read / write access to any data bucket and be able to read and write Telegraf configurations:
influx user create -n 'telegraf' -p '123456-xyz' -o 'okdo-projects'
influx auth create \
--org 'okdo-projects' \
--user 'telegraf' \
--read-buckets \
--write-buckets \
--read-telegrafs \
--write-telegrafs
Finally, create a Grafana user - this one will only be able to read data buckets, nothing else:
influx user create -n 'grafana' -p '123456-xyz' -o 'okdo-projects'
influx auth create \
--org 'okdo-projects' \
--user 'grafana' \
--read-buckets
Step 7: Buckets
Now our database users are in place we need to set up the InfluxDB buckets to hold our data.
For testing purposes, we will create a separate bucket for test data. It doesn’t need to store the data for long, so this command creates a new bucket named test:
influx bucket create --name "test" --org "okdo-projects" --retention 72h
Grafana uses InfluxDB 1.x conventions (InfluxQL) to query InfluxDB 2.7 buckets, so we need to create a database and retention policy (DBRP) mapping for the test bucket.
Obtain the bucket ID for the test bucket with the following command:
influx bucket list
In our example the test bucket ID is efa827c8dd4bec4e
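If you'd rather not copy the ID by hand, a small shell sketch like this (assuming the ID is the first column of the list output) captures it into a variable which you could then pass to --bucket-id in the next command:
BUCKET_ID=$(influx bucket list --name test --hide-headers | awk '{print $1}')
echo "$BUCKET_ID"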
Now create the DBRP mapping for the test bucket, mapping the database name test to the bucket with a retention policy named test-rp - this is the name Grafana will refer to in its queries. Then check the mapping exists using the dbrp list command:
influx v1 dbrp create \
--db test \
--rp test-rp \
--bucket-id efa827c8dd4bec4e \
--default
influx v1 dbrp list
Step 8: Telegraf Installation
Telegraf is an open-source software agent that makes it easy to collect data from lots of different sources, including MQTT, and write it to InfluxDB.
We will install it on the ROCK 4SE and use it to capture the sensor data published to the MQTT broker into InfluxDB.
Visit the Telegraf home page and click the download button to get the URL of the latest version for Linux on ARMv8: https://www.influxdata.com/time-series-platform/telegraf/
In a Terminal session on the ROCK 4SE, download the latest tarball, check the directory structure, and extract the archive to the root directory, stripping the first two leading directories from the archive paths. Finally, check that the telegraf executable is on the path:
Note: Before executing the tar xzvf command check the tarball structure in case it changed in future versions
cd ~/
wget https://download.influxdata.com/telegraf/releases/telegraf-1.31.3_linux_arm64.tar.gz
tar tf telegraf-1.31.3_linux_arm64.tar.gz
sudo tar xzvf telegraf-1.31.3_linux_arm64.tar.gz --strip-components=2 -C /
which telegraf
Step 9: Telegraf Configuration
Now that it is installed, we can configure Telegraf and its plugins, which is a bit of a dark art. There are some examples documented here: https://github.com/influxdata/telegraf/blob/release-1.31/plugins/inputs/mqtt_consumer/README.md
Plugin data formats are documented here: https://github.com/influxdata/telegraf/blob/release-1.31/docs/DATA_FORMATS_INPUT.md
Create a Telegraf configuration in /etc/telegraf/telegraf.d/test.conf that contains entries for an Input plug-in to consume MQTT data in CSV format and an Output plug-in that will store the data in the test bucket in InfluxDB:
sudo vi /etc/telegraf/telegraf.d/test.conf
Here are the contents of the file…
[agent]
interval = "5s"
[[inputs.mqtt_consumer]]
servers = ["tcp://localhost:1883"]
topics = [
"test/t1",
"test/t2"
]
data_format = "csv"
csv_header_row_count = 0
csv_column_names = ["temperature"]
[[outputs.influxdb_v2]]
urls = ["http://localhost:8086"]
token = "${INFLUX_TOKEN}"
organization = "okdo-projects"
bucket = "test"
Here’s how it works:
Files in /etc/telegraf/telegraf.d/ override any settings in the default config which is in /etc/telegraf/telegraf.conf
The agent section just sets how often data is collected from the inputs.
The inputs.mqtt_consumer section defines the MQTT host and topics to listen on. It also specifies using CSV data format and the fact that there is no header row in the data. Our test data only has a single field and that is mapped to the name temperature. This creates the label for the values stored in the bucket.
The outputs.influxdb_v2 section specifies InfluxDB V2 format along with the host URL and port where InfluxDB is running (in this case on the same host), the access token for the telegraf user (read from the environment), and the organisation and bucket where we want the data to end up.
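Before wiring Telegraf into systemd in the next step, you can optionally dry-run this configuration from the command line. This is just a sketch, assuming your Telegraf version supports the --test and --test-wait flags - outputs are not written in test mode, so INFLUX_TOKEN only needs a placeholder value for the config to parse, and the wait period gives the MQTT consumer time to pick up a reading from the mock script:
export INFLUX_TOKEN=placeholder
telegraf --config /etc/telegraf/telegraf.d/test.conf --test --test-wait 15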
Step 10: Telegraf Startup
There’s still a bit of work to do to make Telegraf start on bootup and connect to InfluxDB.
To ensure security, we add a Telegraf system user that has a nologin account at the OS level. Then we can store the InfluxDB access token for the Telegraf database user in an environment variable so it can be accessed by the systemd startup units. Finally we modify and install the provided systemd unit so that Telegraf starts on bootup.
Add an OS system user to run Telegraf:
sudo useradd -r -s /sbin/nologin telegraf
List the InfluxDB authorisations and copy the Telegraf database users token to the clipboard:
influx auth list
Create the environment file in /etc/telegraf/telegraf.d/telegraf-token.txt setting the INFLUX_TOKEN environment variable to the Telegraf database user’s token:
sudo vi /etc/telegraf/telegraf.d/telegraf-token.txt
INFLUX_TOKEN="m3i1NGxoZ2k5cWxHs4BL6kjUnWOXFlhlGa-jom95VpFM8i9qq3Dk57UX12C_XpPMAAmMGdFgmGwYOAIK9bt8RQ=="
Change ownership of the token file and make it read & write for telegraf only:
sudo chown telegraf:telegraf /etc/telegraf/telegraf.d/telegraf-token.txt
sudo chmod 600 /etc/telegraf/telegraf.d/telegraf-token.txt
Copy the Telegraf service unit file so systemd can read it, then edit the EnvironmentFile reference inside it so that it points to the environment file we just created:
sudo cp /usr/lib/telegraf/scripts/telegraf.service /etc/systemd/system/
sudo vi /etc/systemd/system/telegraf.service
Here’s the final contents of telegraf.service:
[Unit]
Description=Telegraf
Documentation=https://github.com/influxdata/telegraf
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
NotifyAccess=all
EnvironmentFile=-/etc/telegraf/telegraf.d/telegraf-token.txt
User=telegraf
ImportCredential=telegraf.*
ExecStart=/usr/bin/telegraf -config /etc/telegraf/telegraf.conf -config-directory /etc/telegraf/telegraf.d $TELEGRAF_OPTS
ExecReload=/bin/kill -HUP $MAINPID
Restart=on-failure
RestartForceExitStatus=SIGPIPE
KillMode=mixed
LimitMEMLOCK=8M:8M
PrivateMounts=true
[Install]
WantedBy=multi-user.target
Make sure the MQTT broker is running and allowing anonymous access to Localhost on port 1883.
Now enable the service, start it up and check that it’s running as expected - any issues will show up in the status view:
sudo systemctl enable telegraf.service
sudo systemctl daemon-reload
sudo systemctl start telegraf.service
sudo systemctl status telegraf.service
This is what good looks like.
If you manage to get Telegraf running OK, one of the trickiest parts is over. If not, the journal command below and the Troubleshooting tips at the end should help.
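Following the service journal while Telegraf starts will usually pinpoint any problem (press Ctrl+C to stop):
journalctl -u telegraf.service -f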
Step 11: InfluxDB Testing
Let’s do a quick check to see if we are receiving data into InfluxDB.
Assuming your MQTT broker is running along with InfluxDB, Telegraf and the mock test script, in a Terminal on the ROCK 4SE run the following commands using Influx CLI:
When the shell opens, select the test database at the prompt with the use command and press return, then enter the select query and press return. You should then see the results of the query. Press q to return to the prompt, then type exit to end the session:
influx v1 shell
> use "test"
> select * from "mqtt_consumer"
> exit
The session should look like this after exiting:
The query results for the test bucket should be like this, where you can see the temperature column name added automatically by Telegraf and the timestamp added by InfluxDB to the values sent over MQTT:
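As an alternative check, you can query the same bucket with Flux straight from the Influx CLI - for example, the last hour of readings, limited to five rows:
influx query 'from(bucket:"test") |> range(start: -1h) |> limit(n: 5)'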
Fantastic, so now you are able to collect any kind of data that is being published to your MQTT server and store it away in InfluxDB!
In the next steps, we show how to use that stored data for analysis and display in Grafana so that it becomes useful.
Step 12: Grafana Installation
Now we will turn our attention to the ROCK 3A, which will host Grafana OSS, a versatile Open Source display and analytics application. It's relatively easy to create fancy dashboard displays full of useful analytics in a browser-based interface. It's also very well documented and supported by online video tutorials.
In our example, Grafana will source its data from InfluxDB running on the ROCK 4SE and display the sensor data as both its current value and as a time series. We will automate the software so that it displays on the attached Touch Screen display on bootup.
The dashboard can also be viewed by other hosts on the network using browser access.
Install Grafana by visiting the downloads page to obtain the URL for the latest version for Debian on Arm64: https://grafana.com/grafana/download?edition=oss&pg=graf&platform=arm&plcmt=deploy-box-1
The instructions are on the page but here they are (the upgrade takes a while!):
sudo apt update
sudo apt upgrade
sudo apt-get install -y adduser libfontconfig1 musl
wget https://dl.grafana.com/oss/release/grafana_11.2.0_arm64.deb
sudo dpkg -i grafana_11.2.0_arm64.deb
Enable Grafana, start up the service, then check it’s running:
sudo systemctl daemon-reload
sudo systemctl enable grafana-server.service
sudo systemctl start grafana-server.service
systemctl status grafana-server.service
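Grafana listens on port 3000 by default, so a quick check from the ROCK 3A itself is to request the front page - the first response line should be an HTTP redirect or 200:
curl -sI http://localhost:3000 | head -n 1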
Step 13: Grafana Configuration
Grafana runs as a web interface, so open the following URL in your host's browser, log in with admin / admin, then change the password: http://rock-3a:3000/
Note: If the hostname is not found, use the ROCK 3A's IP address instead
You will be redirected to the Grafana welcome screen where you can add a datasource:
As you might expect a Grafana Data Source defines where the data to be used is sourced from. It also handles any authorization to that source, which in our case will be InfluxDB running on the ROCK 4SE, using our grafana user’s token.
If the setup is successful the same Data Source can be used across many different Dashboards.
- Click on Data Sources which displays a list of all the possible Data Sources that Grafana supports
- Add the InfluxDB datasource
Then fill out the top of the form like this:
- Give the Data Source a name - we called ours InfluxDB-test
- Set the Query Language to InfluxQL and the URL of the InfluxDB, ours is http://rock-4se:8086
Leave all the other settings as defaults. Here’s a screenshot of the top part of the form:
Fill out the bottom of the form like this:
- Click Add header button and in the Header field enter Authorization
- In the Value field enter Token then a space followed by the InfluxDB grafana user’s token like this (all on one line):
Token ..J5fy-yge6FDc9zSmqYmuO0EnTu2biUgrp3gKfvU2niEpRUBnqAVKoSZS6tmAg==
Note: Get the token by running this command on the ROCK 4SE:
influx auth list
- Set the database name to test (the bucket name in this case) and the HTTP method to GET.
There’s no need for a user and password as the token does the authorisation. When you press the Save & Test button you should get the green tick to say everything is working.
If you have any issues, documentation about using Grafana with InfluxDB is here: https://docs.influxdata.com/influxdb/v2/tools/grafana/?t=InfluxQL#view-and-create-influxdb-v1-authorizations
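If the green tick doesn't appear, one way to take Grafana out of the equation is to test the grafana user's token directly against InfluxDB's 1.x-compatible /query endpoint from any host on the network, substituting the real token for the placeholder:
curl -G http://rock-4se:8086/query \
  --header "Authorization: Token <grafana-user-token>" \
  --data-urlencode "db=test" \
  --data-urlencode "q=SELECT * FROM mqtt_consumer LIMIT 5"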
Step 14: Dashboards
Once a Datasource is defined, you can go on to create some fancy dashboards.
- From the Burger Menu select Dashboards and Add Visualization
- From the RH sidebar select Gauge as the Visualisation type
In the query editor:
- Set InfluxDB-test as the datasource.
- Set the query to read as follows (test-rp is the mapped retention policy name this time, which is a bit confusing):
FROM test-rp mqtt_consumer WHERE topic::tag = test/t1
SELECT field(temperature) mean()
GROUP BY time($__interval) fill(linear)
- Leave all the other settings as Defaults.
- In the Properties section in the Right Hand panel setup the Gauge to your liking by experimenting with the settings.
- Click apply to save the changes.
You can do a similar thing for a Time Series display. Use the same query settings as above for FROM, SELECT and GROUP BY. Again experiment with the Properties area to achieve the desired look.
Step 15: Automation
Once all the display elements are in place and the dashboard laid out, attach a keyboard and mouse to the ROCK 3A, log in, fire up Chromium on the touch screen and log in to Grafana at http://localhost:3000
You may need to adjust the Dashboard a bit to get it to display properly on this smaller screen.
If you want the display on the ROCK 3A to automatically show the Grafana dashboard on boot up there are a few tricks that can be used to do that.
First of all, set up auto login in the display manager for the radxa user so that the desktop loads on bootup without having to log in:
sudo vi /etc/lightdm/lightdm.conf
Edit this line in the [Seat:*] section so it references the radxa user:
autologin-user=radxa
Now set up a desktop file at ~/.config/autostart/chromium.desktop for the radxa user with the following contents. It executes a bash script when the desktop loads:
Tip: Before you do this part make sure you can SSH into your ROCK 3A, otherwise it can be hard to break out of kiosk mode!
cd ~
mkdir -p .config/autostart
vi .config/autostart/chromium.desktop
[Desktop Entry]
Encoding=UTF-8
Version=0.9.4
Type=Application
Name=Chromium
Exec=/home/radxa/start.sh
TryExec=/home/radxa/start.sh
Create the bash script to load Chromium in kiosk mode - there needs to be a delay to allow the Grafana service to start up first:
vi start.sh
#!/bin/bash
sleep 20
/usr/bin/chromium localhost:3000 --kiosk
Finally, make it executable:
chmod +x start.sh
Now reboot. The first time Grafana starts, log in with the Grafana admin username and password. Once you have done this, the next time you restart the device it will log in automatically, so you can remove the keyboard.
In the Chromium settings choose to Continue where you left off.
In Grafana, open the Dashboard that you have created and set it to Kiosk Mode with the Monitor icon in the Title Bar.
Use the Power button on the ROCK 3A to shut it down safely and restart it.
When the ROCK 3A reboots it will load Chromium in Kiosk Mode, which in turn will load Grafana with your default Dashboard!
If you set up an unprivileged user in Grafana you can give them read-only access so no one can tamper with your handiwork!
Here’s a Dashboard I made earlier running on the ROCK 3A with the 7-inch Raspberry Pi Touch Screen:
Step 16: Troubleshooting
There are a lot of moving parts in this project so it’s easy for a step to go wrong.
1. First of all try turning it off and on again
2. Make sure each of the services is actually running by checking its status in systemctl. On the ROCK 4SE you should have Mosquitto, InfluxDB and Telegraf all running plus the mock data script.
On the ROCK 3A you should have Grafana running.
For example, you can check that InfluxDB is running with:
systemctl status influxdb.service
3. Check that InfluxDB is actually receiving data into the bucket by repeating the test in the Step 11: InfluxDB Testing.
4. Check you are using the correct InfluxDB Tokens and that they have sufficient access rights. You can delete tokens and create new ones - see the InfluxDB documentation.
In our setup the telegraf user’s token is stored in the file /etc/telegraf/telegraf.d/telegraf-token.txt to configure the environment variable INFLUX_TOKEN for the telegraf.service unit.
To configure the Grafana Data Source we used the InfluxDB grafana user’s token. This must be entered in the setup form in the format:
Token xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Tokens can be viewed on the ROCK 4SE using the Influx CLI with:
influx auth list
5. Check the directory and file permissions of configuration files and directories to make sure the relevant user has access rights.
6. Check the logs in /var/log/<app_name>
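Most of these services also log to the systemd journal, which is often more informative than the files under /var/log - for example:
journalctl -u influxdb.service --since "10 minutes ago"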
7. Search the web or ask ChatGPT about any error message to see if there are suggested solutions.
Summary
In this project, we have shown how to capture mock sensor data over MQTT protocol from remote devices on the IoT network and store the information in a centralised database using InfluxDB and Telegraf.
With these two Open Source applications it should be possible to handle many sensor capture scenarios from multiple distributed devices, which can scale according to needs, all running on low-power ROCK servers.
We then showed how to use the information as a data source for Grafana dashboards for analysis and display purposes. The example automated a dashboard using a ROCK 3A in conjunction with a Touch Screen display.
References
InfluxDB: https://www.influxdata.com/products/influxdb/
Telegraf: https://www.influxdata.com/time-series-platform/telegraf/
Grafana OSS: https://grafana.com/oss/grafana/