
OKdo ROCK 5 AIO Getting Started Guide

AI image processing is technically demanding, but it doesn't have to be.

One of the emerging use cases for AI is recognising significant events in video streams for further analysis; number plate recognition and intruder detection and identification are two examples.

Implementing these usually involves integrating camera feeds with cloud services and complex AI modelling, which is both expensive and technically challenging.

ROCK 5AIO is a new SBC with a 3 TOPS NPU that’s designed to take away some of this complexity by bringing AI image processing to the “Edge”. It comes pre-loaded with a Debian OS and media processing software to simplify creation of image processing pipelines using pre-configured AI models, which anyone can use.

In this project we show how to set up the ROCK 5AIO, connect it to the OStream management web UI and create “drag & drop” image processing pipelines capable of identifying events in video streams. We also configure a remote RTSP (Real Time Streaming Protocol) camera feed using a Raspberry Pi camera module and show how to capture event data using the webhook service in OStream, for further processing.

Difficulty: Medium

Time: 4 Hr

Steps: 11

Credit: None

Licence: Flask - BSD-3-Clause license

Parts Needed:

Part Description SKU
OKdo ROCK 5 AIO Edge AI Media Board (267-4908)
USB-C Power supply (5V / 3A) OKdo Multihead Dual Port USB Quick Charge Power Supply (PSU) 36W with USB-C (243-6356)
USB-C to USB-C Cable DELTACO USB-C to USB-C Cable (276-7733)
Raspberry Pi 4 OKdo Raspberry Pi 4 4GB Model B Starter Kit (202-0644)
Camera Raspberry Pi camera module V2 (913-2664)
PC Host computer Windows/Mac/Linux  

Step 1: OStream Director

Before connecting your ROCK 5AIO, create an OStream Director account. This is a web UI application for managing and controlling your ROCK 5AIO.

Sign up for the free account by visiting https://app.ostream.com and enter your credentials:

Ostream Director login

Once logged in, you will be redirected to the application Home page where you will manage all of your ROCK 5AIO connections and create your image processing pipelines.

Ostream Director homepage

Step 2: ROCK 5AIO Power On

It’s now time to power up your ROCK 5AIO. This is simple, as there’s no flashing of SD cards or software to download. Debian OS comes pre-installed on the onboard 16GB eMMC memory, so you just have to power it up:

  • Connect an Ethernet cable to the RJ45 connector. This also supports 48 V PoE if you have it available.
  • Connect a 5V / 3A power supply using a USB-C cable to the USB port next to the RJ45 jack.

The blue LED status indicator will light up and after 1 - 2 minutes the board will have booted up.

ROCK 5 AIO Board Powered on

Step 3: Connection

Once your ROCK 5AIO has booted, you can begin connecting it to OStream, where it’s referred to as a Node. Each board comes with a card in the box listing your board's unique Serial Number, which you will need for this step:

  • In OStream’s menu bar, click Join Node
  • Enter the Serial Number from the card in the box
  • Click Find Node
  • Your board should now be listed in the Node List with its Enabled and Online status indicated by a green tick, showing it's connected

Join Node

Ensure your board's firmware is up to date the first time you connect:

  • Click the Edit icon for your Node and select the Status tab
  • Check if Software Version is Latest (1053)
  • If not, enter the version number and click Upgrade Agent and wait 1 minute
  • After receiving the Execute OK message, click Reboot Host
  • After 2 minutes, click the Status tab until the Node comes back online in the Node List menu

Ostream Software Check

Step 4: SSH Access

Remote access using SSH is useful for testing and debugging. Find the hostname and IP address of your ROCK 5AIO from your network router console.

The hostname for my board was ocom-cecaf1 with default username: ocom and password: Physico22

Open a terminal and log in via ssh using your board's hostname or IP address, with one of the following commands, entering the password when prompted:

$ ssh ocom@ocom-cecaf1

or

$ ssh ocom@ocom-cecaf1.local

Now change the password:

$ passwd

Step 5: MP4 Test

Time to test that everything is working as expected. There’s a detailed walkthrough of setting up your first pipeline on the OStream site, so we will just cover the basics; the interface is very intuitive.

Let's create a pipeline to analyse one of the test videos installed on the board. From your ssh session issue the following commands to create a streamSources directory and copy the test file to that location:

ocom@ocom-cecaf1:~$ sudo mkdir -p /userdata/ostream/oa/streamSources
ocom@ocom-cecaf1:~$ sudo chown ocom:ocom /userdata/ostream/oa/streamSources
ocom@ocom-cecaf1:~$ cp /userdata/ostream/oa/media/tutorials/courtyard.mp4 /userdata/ostream/oa/streamSources/

Navigate to the Stream Sources item in the menu:

Stream Sources on Ostream Director

Click the Create button to open the Stream Source creation template and fill out the options:

  • Give your source a name
  • Set the Source Pool to your Pool
  • Choose the Low Code Pipeline
  • Set the type to On Board MP4
  • Set the file path to courtyard.mp4 for our example file
  • Click Save / Update

Here’s one I created earlier:

Ostream Pool Example

You end up back on the Stream Sources page.

  • Click the Video Camera icon to open the Live Viewer page
  • In the Pipeline, drag the RK Person AI Service in between the Cap and Encode video items
  • Click the Start icon on the Pipeline to see your video playing in the viewer

Each time a person is recognised, a coloured bounding box will show up surrounding the identified object.

There are 92 different AI Services that can identify all sorts of objects to try out. Just replace the Person Service with the one you want to try, press Save, then refresh the browser.

Ostream Director Live View

Step 6: Search

Every time your AI pipeline identifies an object, it saves that image frame to OStream and you can use the Search facility to retrieve the image using search terms. The capture data is also available in JSON format so it can be integrated into applications you might develop (see below).

Now you can search through all the inference events where people were identified in the video stream:

  • Select the Search item in the menu
  • Enter a search term starting with the ‘$’ character eg: $person

All the video frames containing the objects identified are shown. You can filter these by time.

If you untick the Media option you can see the relevant JSON metadata.

Ostream Director Search

Step 7: RTSP video

As well as being able to connect cameras directly to the CSI connector on the ROCK 5AIO board, you can attach RTSP media streams through the OStream UI.

The board acts as a remote client for the server where the camera is attached. This allows the ROCK 5AIO to connect to any remote RTSP cameras running on the network and process their video through its AI pipeline.

I set up an RTSP camera with a Raspberry Pi 4 (Pi OS Desktop 32 bit) and V2 camera module using the open source MediaMTX project.

Download the mediamtx_v1.8.0_linux_armv7.tar.gz file and copy it to your home directory, then unpack the archive:

$ tar -xvzf mediamtx_v1.8.0_linux_armv7.tar.gz
$ cd mediamtx

Now make a copy of the config file, mediamtx.yml:

$ cp mediamtx.yml mediamtx.yml.orig

Then add the following stanza at the end of the file using your favourite editor. This configures your camera and stream path; cam is the resource name in your RTSP URL:

paths:
  cam:
    source: rpiCamera
    rpiCameraWidth: 1280
    rpiCameraHeight: 720
    rpiCameraVFlip: true
    rpiCameraHFlip: true
    rpiCameraBitrate: 1500000

Make sure the Mediamtx server has execute permissions:

$ chmod +x mediamtx

Start the server with:

$ ./mediamtx

The video stream is available on port 8554 at your Pi’s IP address, for example: rtsp://192.168.1.100:8554/cam
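If you end up scripting against more than one camera, that URL pattern is easy to compose programmatically. Here's a minimal sketch (the helper name and host are hypothetical; 8554 is MediaMTX's default RTSP port):

```python
from urllib.parse import urlunparse

def rtsp_url(host: str, path: str, port: int = 8554) -> str:
    """Build an RTSP URL for a MediaMTX stream path (8554 is the MediaMTX default)."""
    return urlunparse(("rtsp", f"{host}:{port}", f"/{path}", "", "", ""))

print(rtsp_url("192.168.1.100", "cam"))  # rtsp://192.168.1.100:8554/cam
```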

You can open another terminal and view the video stream using ffplay (part of FFmpeg) to test:

$ ffplay rtsp://192.168.1.100:8554/cam -vf "setpts=N/30" -fflags nobuffer -flags low_delay -framedrop

Tip: The version of vlc (v3.0.16) installed on my Ubuntu Laptop doesn't work with RTSP!

I installed the latest snap package and this worked:

$ snap run vlc rtsp://192.168.1.100:8554/cam

You should now have a working RTSP video stream.

Step 8: RTSP Pipeline

Once you have a working RTSP camera feed you can attach it to your ROCK 5AIO for AI processing.

In OStream, add a new Stream Source in a similar way to the MP4 Test in Step 5.

  • Select Stream Sources and click Create
  • Set the Source Pool to your Pool and the Application to Low Code Pipeline
  • Set the Type to Remote Camera RTSP
  • Set the RTSP URL to the URL of your remote camera. In the example above that would be rtsp://192.168.1.100:8554/cam
  • Click Update

Ostream Director RTSP Feed

The Stream Source will now appear as an entry in the menu bar.

Select your source and run it in the Live Viewer

Ostream Director RTSP Live Viewer

Step 9: Webhooks

If you want to build a complete image processing pipeline you will need access to the inference metadata each time an image “event” occurs.

OStream has a webhook function that fires each time there’s an inference event when your model identifies an object. The sensitivity for this is set in the AI object properties by clicking on it in the pipeline, where you set the confidence percentage.
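As an illustration of what that confidence setting does, a detection below the threshold is simply discarded before any event fires. This sketch is purely illustrative — the field names are assumptions, not the OStream schema:

```python
def above_threshold(objects, confidence=0.5):
    """Keep only detections at or above the confidence threshold (field names assumed)."""
    return [o for o in objects if o.get("confidence", 0.0) >= confidence]

detections = [
    {"label": "person", "confidence": 0.91},
    {"label": "person", "confidence": 0.32},  # dropped at the default 0.5 threshold
]
print(above_threshold(detections))  # only the 0.91 detection survives
```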

You need an HTTP endpoint that OStream can POST the data to. As a test, I used Hookdeck.com to set up a local proxy for the webhook to a Flask server running on my local network.

I’ll explain the steps briefly but all the details are in this blog post: https://dev.to/hookdeck/local-webhook-development-using-hookdeck-cli-1om

First of all create a Hookdeck account if you don’t have one, then download and install the Hookdeck CLI application on your local host.

I used my laptop running Ubuntu Linux to download and install the .deb of the latest version. There are also versions for other systems. Hookdeck CLI runs in the terminal.

https://github.com/hookdeck/hookdeck-cli/releases/tag/v0.8.6

Now set up a simple Flask application, a small Python web server. You will need the Flask package, which you can put in a virtual environment:

$ mkdir flask && cd flask
$ python -m venv venv
$ source venv/bin/activate
(venv) ~/flask$ pip install Flask

Copy the following contents into a file called main.py in the flask directory:

import logging
from flask import Flask, request, jsonify

logger = logging.getLogger()
app = Flask(__name__)

@app.route('/webhook', methods=['POST'])
def webhook_receiver():
    data = request.json  # Get the JSON data from the incoming request
    logger.info(f"Objects detected: {len(data['objects'])}")
    logger.debug(data['objects'])  # Log the full object metadata at debug level
    return jsonify({'message': 'OK'}), 200

if __name__ == '__main__':
    # Timestamped log output, with debug enabled so the full object list is shown
    format = "%(asctime)s: %(message)s"
    logging.basicConfig(format=format, level=logging.INFO, datefmt="%H:%M:%S")
    logging.getLogger().setLevel(logging.DEBUG)
    app.run(debug=True)

Start the server on the default port 5000:

$ python main.py
 * Serving Flask app 'main'
 * Debug mode: on
 * Running on http://127.0.0.1:5000
14:48:45: Press CTRL+C to quit
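Before wiring up Hookdeck, you can check the endpoint by POSTing a hand-made event to it with just the standard library. The event body below is hypothetical — the handler above only requires an objects key:

```python
import json
from urllib import request

# Minimal, hand-made inference event; webhook_receiver() only reads 'objects'.
event = {"objects": [{"label": "person"}, {"label": "person"}]}

req = request.Request(
    "http://127.0.0.1:5000/webhook",
    data=json.dumps(event).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# With the Flask server running, send it:
#   response = request.urlopen(req)   # expect HTTP 200 and {'message': 'OK'}
print(req.full_url, req.get_method())
```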

Now log in to your Hookdeck account using your browser, then create a connection to the Hookdeck CLI by running the following command in a terminal on your local host:

$ hookdeck login

Then configure Hookdeck CLI to forward requests to your Flask server on port 5000:

$ hookdeck listen 5000

This will prompt you for the URL path, which in our example is /webhook for POST requests.

Once this is configured, Hookdeck prints the URL to use for the webhook in OStream:

  • Click your Output Stream in the OStream menu and add your webhook URL including the path

Ostream Director Output Stream

Make sure that you have the Output Stream selected in your Stream Source -> Hub Settings:

Ostream Director Hub Settings

Now, whenever your pipeline runs, each time it recognises an object OStream will POST the event data to your application, in this case the Flask server. Once you have the JSON data, you can process it in any way you like.

In this example, we are counting the number of people recognised in each video frame and also logging the whole metadata for inspection.
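That per-frame count boils down to a few lines over the objects list once the payload arrives. Here's a sketch with a made-up payload (any field beyond objects is an assumption, not the OStream schema):

```python
from collections import Counter

sample_objects = [  # hypothetical contents of the 'objects' list
    {"label": "person", "confidence": 0.91},
    {"label": "person", "confidence": 0.84},
    {"label": "bicycle", "confidence": 0.77},
]

def count_labels(objects):
    """Tally detections per label, e.g. the number of people in a frame."""
    return Counter(obj.get("label", "unknown") for obj in objects)

print(count_labels(sample_objects)["person"])  # 2
```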

Here’s some typical output in the Flask terminal, showing the number of objects detected, calculated from the metadata by the Flask function, followed by the whole objects list:

Ostream Flask

If you overlay the Flask terminal on top of the Live Stream viewer in OStream you can see that the response to each inference event is almost instantaneous.

Ostream Output

Once you have this information you could create alerts, or use the timestamp to look up sections of the original data stream in a recording.
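The timestamp lookup is just the difference between the event time and the start of the recording. A sketch assuming ISO 8601 timestamps (the actual timestamp format in the metadata may differ):

```python
from datetime import datetime

def recording_offset(recording_start_iso: str, event_iso: str) -> float:
    """Seconds into a recording at which an event occurred."""
    start = datetime.fromisoformat(recording_start_iso)
    event = datetime.fromisoformat(event_iso)
    return (event - start).total_seconds()

print(recording_offset("2024-05-01T16:00:00+00:00", "2024-05-01T16:02:30+00:00"))  # 150.0
```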

I also tested the webhook using an AWS Lambda instance, which is a bit more complicated to set up, but again it worked almost instantly.

It’s the starting point of a sophisticated image-processing pipeline…

Step 10: Shutdown

Shut down the system properly using the poweroff command and allow 2 minutes before removing the power supply, as the blue LED remains on even after the system has shut down:

$ sudo poweroff

Step 11: Troubleshooting

I had a few teething problems with my board, but there’s responsive support available via email at support@ostream.com.

Here are a few tips:

OStream Node offline - If your Node is showing offline then the ObjectAgent process may have failed.

Ostream Node Offline

  • Wait for 2 minutes after bootup
  • Make sure your Ethernet cable is connected
  • Check if the ObjectAgent service is running - here it has failed:

ocom@ocom-cecaf1:~$ sudo systemctl status ObjectAgent
● ObjectAgent.service - ObjectAgent Service
   Loaded: loaded (/etc/systemd/system/ObjectAgent.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Wed 2024-05-01 16:41:02 BST; 1min 26s ago
  Process: 475 ExecStart=/userdata/ostream/oa/bin/ObjectAgent (code=exited, status=255/EXCEPTION)
 Main PID: 475 (code=exited, status=255/EXCEPTION)
…
May 01 16:41:02 ocom-cecaf1 ObjectAgent[475]: [ERROR] Wlan ping response not valid
May 01 16:41:02 ocom-cecaf1 ObjectAgent[475]: [ERROR] Eth or wifi test failed above
May 01 16:41:02 ocom-cecaf1 systemd[1]: ObjectAgent.service: Failed with result 'exit-code'.

If ObjectAgent is trying to ping your wlan and it is not configured, override this check by creating the file startupValidationDone in /userdata/ostream/oa/bin/:

ocom@ocom-cecaf1:~$ sudo touch /userdata/ostream/oa/bin/startupValidationDone

  • Reboot and check that ObjectAgent is running:

ocom@ocom-cecaf1:~$ sudo reboot
ocom@ocom-cecaf1:~$ sudo systemctl status ObjectAgent

Live View won’t restart - Restart the ObjectAgent:

ocom@ocom-cecaf1:~$ sudo systemctl restart ObjectAgent

Logs - These are in /userdata/ostream/oa/bin/ and /userdata/ostream/rd/bin/

Summary

ROCK 5AIO, in conjunction with OStream, is an innovative solution that reduces the complexity of AI image processing.

The board supports a directly attached CSI camera or remote RTSP media streams running on the network, allowing it to handle multiple cameras.

Image streams are processed locally, with only the event frames being transmitted to the OStream service. This reduces bandwidth and improves privacy. It also lets you search through capture events for further inspection, with the JSON metadata made available via webhooks.

The board comes with all the software pre-installed, making it easy to set up, and the OStream web UI is very intuitive.

In this project we showed how to set up the ROCK 5AIO, create AI image processing pipelines with remote RTSP cameras, and access the image events detected by the AI for further application development.

With the ability to easily access the AI inference data, your imagination is the limit to your AI vision processing application!

References:

OStream site: https://www.ostream.com 

Hookdeck site: https://hookdeck.com 

Flask: https://flask.palletsprojects.com 

I'm an engineer and Linux advocate with probably more SBCs than an Odysseus moon lander.