RS IoT Blockchain Demonstrators Part 5: Host Software

Using Python to integrate sensors and outputs and interact with the blockchain.

This series of posts looks at the design and build of a set of demonstrators for the biennial Electronica trade fair and conference, which show how blockchain technology can be used to create a secure, decentralised data platform and more for the Internet of Things.

In previous posts, we’ve covered the mechanical and electronics build, followed by the creation of a private blockchain network and the deployment of smart contracts to support our four particular use cases. In this post, we take a look at the Python applications which drive LEDs, read buttons and sensors, and finally interact with our Ethereum smart contracts.

Note that rather than cover each Python script in its entirety, we will instead look at fragments which show how key parts of each application work.

Configuration

application:
    role: carcrash
    impact_trigger: 10000
    leds_port: 5558

buttons:
    leds_port: 5557

mqtt:
    broker: miner

blockchain:
    boot_node: "enode://3d8007b5099e2ee9ae384aac17ff508d6827a1aec956131344df\
                6eded933656bac0a9f51768fce908580b7b293ff0d6737633e794bc7f29a\
                8683371709f98ec5@123.1.1.2:30531"
    network_id: 555
    account: "0x2BC19750cdf3991D0A27d45304276Cd4D71F6975"
    contract: "0x6636bbD9B3C364d96B8a6CCFc9e6DAcc76c316CC"
    leds_port: 5556

It was decided to use YAML for configuration files, since this gives a bit more structure than a simple INI file, yet is easier for humans to read than, say, JSON. Above we can see the /etc/iotbc/config.yml file for the Car Crash demonstrator.

The use of a config file allows common parameters to be shared across different Python scripts and to be quickly updated during development. However, it should be noted that there is room for optimising the parameter set, and this would be worth investing time in were the number of nodes to be expanded or the system used in a production capacity.
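Loading the shared configuration takes only a few lines with PyYAML. A minimal sketch, with the relevant fragment of the config.yml shown above inlined as a string for illustration (the real scripts would open /etc/iotbc/config.yml instead):

```python
import yaml

# Fragment of /etc/iotbc/config.yml, inlined here for illustration;
# the actual scripts would read the file from disk
CONFIG = """
application:
    role: carcrash
    impact_trigger: 10000
    leds_port: 5558
blockchain:
    network_id: 555
"""

cfg = yaml.safe_load(CONFIG)

role = cfg['application']['role']              # 'carcrash'
network_id = cfg['blockchain']['network_id']   # 555
```

Using safe_load() rather than load() avoids executing arbitrary YAML tags, which is good practice even for trusted local files.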

Ethereum node

# geth APIs to expose

APIS = "admin,db,eth,debug,miner,net,shh,txpool,personal,web3"

# Build the geth command

## Base parameters

gethcmd = (['/usr/local/bin/geth',
           '--datadir', '/data/bc',
           '--networkid', str(cfg['blockchain']['network_id']),
           '--bootnodes', cfg['blockchain']['boot_node'],
           '--unlock', '0',
           '--password', '/dev/null',
           '--nat', 'none',
           '--rpc',
           '--rpccorsdomain', '*',
           '--rpcapi', APIS])

## Append params for miner and set text to identify new block

if cfg['application']['role'] == 'miner':
    miner = True
    gethcmd.extend(('--gasprice', str(cfg['blockchain']['gasprice'])))
    gethcmd.extend(('--targetgaslimit', str(cfg['blockchain']['targetgaslimit'])))
    gethcmd.append('--mine')
    newblock = 'Successfully sealed new block'
else:
    miner = False
    newblock = 'Imported new chain segment'

# Command to tee the output from geth to a non-blocking FIFO

teecmd = (['/usr/local/bin/ftee', '/tmp/geth.out'])

Above we can see how the parameters for the geth Ethereum node software are built in the Python script that runs this, eth-node. The network ID and boot node address are taken from the aforementioned configuration file. The role parameter is used to decide whether we need to configure the node as a miner, and also to define the string that we’ll look for in geth’s output that will indicate when a new block has been mined or imported.

The ftee utility is a non-blocking version of the standard UNIX/Linux command, tee. We pipe the output from geth to this, which directs it both to a FIFO that we can read with e.g. cat, to observe geth node activity, and on to our Python script.

    ethnode = subprocess.Popen(gethcmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

    fifofeed = subprocess.Popen(teecmd, stdin=ethnode.stderr, stdout=subprocess.PIPE)

    for line in fifofeed.stdout:
        out = line.decode('utf-8')

        if miner and 'txs=' in out:
            txs = out.split('txs=')[1].split()[0]
            if int(txs) > 0:
                leds.send_string('red,{0},{1}'.format(led_period, txs))

        elif newblock in out:
            leds.send_string('green,{0},1'.format(led_period))

Above we can see how Python’s subprocess module is used to execute geth and ftee, with the latter taking its input from the former. The output is then parsed to check for new blocks and the transactions within them, while also being sent to a FIFO at /tmp/geth.out.
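The txs= extraction can be exercised against a typical sealed-block log line. Note that the exact wording and fields of geth’s log output vary between versions, so the sample line below is illustrative only:

```python
# Illustrative geth miner log line (format varies between geth versions)
sample = ('INFO [11-08|12:00:01] Successfully sealed new block'
          '  number=42 txs=3 elapsed=4.012ms')

# Same parsing as in eth-node: take the token immediately after 'txs='
txs = sample.split('txs=')[1].split()[0]
```

With the sample above, txs is the string '3', which is then converted with int() before deciding how many times to flash the red LED.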

A ZeroMQ socket, leds, is used to communicate with another process, instructing it to blink the green Ethereum LED when a new block is mined or imported. The blockchain > leds_port parameter in our YAML configuration specifies the port number to connect to.

With this we can quickly ascertain that the network is operating, at the miner or one of the demo units, by confirming that the green LED flashes every 5 seconds. Reading from the FIFO gives much more detailed information, and on the miner unit we have a simple script that redirects this output to the console so that we can closely observe the mining process.

Buttons and LEDs

def confpins():
    if device == 'miner':
        with open('/sys/class/gpio/export', 'w') as f:
            for pin in pins:
                if os.path.isdir('/sys/class/gpio/gpio{0}'.format(pin)) is False:
                    print('Exporting pin: {0}'.format(pin))
                    f.write(str(pin))
                    f.flush()

        for pin in pins:
            with open('/sys/class/gpio/gpio{0}/direction'.format(pin), 'w') as f:
                f.write("out")
                f.flush()

    else:
        GPIO.setmode(GPIO.BCM)
        for pin in pins:
            GPIO.setup(pin, GPIO.OUT)

As with the eth-node Python script, it was desirable to have common scripts eth-leds and buttons that ran on each unit, with their behaviour configured for the hardware platform via the YAML file. With Raspberry Pi nodes we have the luxury of the Python RPi.GPIO library for driving GPIO. However, there is, as far as I could ascertain, no such library for the Intel NUC. This did not present a major issue, though, as under Linux we can toggle and read GPIO via sysfs.

Above we can see the function that is called in order to set up GPIO for LEDs; if running on the miner this is done by writing to sysfs, whereas on a Raspberry Pi it is handled by the RPi.GPIO library. A similar function takes care of setting up pins as inputs for reading button state.
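On the NUC, driving an exported pin is then just a write to its sysfs value file. The helper below is a sketch rather than code from the actual script, with the sysfs root made a parameter so it can be exercised without real hardware:

```python
def setpin(pin, state, gpio_root='/sys/class/gpio'):
    # Drive an already-exported output pin high (True) or low (False)
    # by writing '1' or '0' to its sysfs value file
    with open('{0}/gpio{1}/value'.format(gpio_root, pin), 'w') as f:
        f.write('1' if state else '0')
```

On a Raspberry Pi the equivalent operation would simply be GPIO.output(pin, state) via RPi.GPIO.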

As mentioned previously, commands sent via ZMQ to eth-leds are used to flash an LED a set number of times and with a specified period, else to simply turn it on or off.
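The command format is a simple comma-separated string, as seen in calls such as leds.send_string('red,0.1,20'). A receiver-side sketch of how eth-leds might unpack a message (the function name is illustrative, not taken from the actual script):

```python
def parse_led_command(msg):
    # Messages take the form 'colour,period,count', e.g. 'red,0.1,20'
    # flashes the red LED 20 times with a 0.1 second period;
    # a count could equally encode steady on/off states
    colour, period, count = msg.split(',')
    return colour, float(period), int(count)
```

Keeping the wire format to a plain string means it can be tested from the command line with any ZeroMQ client, without touching the hardware.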

The buttons script is very similar, only of course it reads I/O pin state instead of setting it. Its purpose is to provide a quick and easy way of rebooting the node and, if required, resetting the blockchain database to a known state; the last thing you want to have to do at a trade fair is attach a keyboard and monitor and start entering Linux commands!

def sysrestart():
    cmd = ['/sbin/shutdown', '-r', 'now']
    restarter = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    out = restarter.communicate()[0]
    print(out)

With the Intel NUC we have a hardware reset pin available to which we can connect the reboot button. However, the Raspberry Pi does not have this, and so when running on a Pi the buttons Python script will read pin status and initiate a reboot when the corresponding pin is pulled low. The function that takes care of this can be seen above.

def blockchainreset():
    print('BCR!')

    leds.send_string('red,0.1,20')

    print('Stop Ethereum node')
    subprocess.run(['/bin/systemctl', 'stop', 'eth-node'],
                   stdout=subprocess.DEVNULL,
                   stderr=subprocess.DEVNULL)

The blockchainreset() function is triggered when the BCR button is pressed. The code fragment shown above first flashes the red LED in rapid succession to provide feedback, then stops the Ethereum node software. Following this:

  • The R/W data partition is unmounted
  • The data partition is restored from a clean backup
  • The data partition is re-mounted
  • There is a 3-minute delay to allow all units to be reset to the same state, should this be required
  • The Ethereum node software is re-started

Once again this feature is for convenience and to suit our demonstrator scenario. In production blockchain networks it’s more likely that this would be fully automated and/or with other strategies in place for avoiding data corruption.
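The remaining steps listed above could be sketched as follows. The partition, device and backup paths here are hypothetical, as the real layout is not shown in the article, and the command runner is injectable so the sequence can be exercised without actually unmounting anything:

```python
import subprocess
import time

# Hypothetical paths: the real partition layout and backup location
# are specific to the demonstrator builds
RESET_STEPS = [
    ['/bin/umount', '/data'],
    ['/bin/dd', 'if=/backup/data.img', 'of=/dev/mmcblk0p3'],
    ['/bin/mount', '/data'],
]

def finish_reset(runner=subprocess.run, sleep=time.sleep):
    # Unmount, restore the data partition from a clean image, re-mount
    for cmd in RESET_STEPS:
        runner(cmd, check=True)
    # 3-minute window so that all units can be reset to the same state
    sleep(180)
    runner(['/bin/systemctl', 'start', 'eth-node'], check=True)
```

Injecting runner and sleep also makes it easy to log each step to the console during development.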

Peripheral integration

All four use case demonstrator units are Raspberry Pi-based and three of these employ the DesignSpark Pmod HAT for physical interfacing. To recap, the Pmods used are:

  • PmodOLEDrgb (Machine Failure + Temperature Alert)

  • PmodAD1 (Machine Failure)

  • PmodTC1 (Temperature Alert)

  • PmodLVLSHFT (LeakKiller)

The first three are all supported by the DesignSpark.Pmod library, which greatly simplifies integration. The fourth is simply a voltage level shifter and is used to interface the Adafruit DotStar addressable LEDs on the LeakKiller unit, which are driven via the DotStar Pi module.

The Car Crash unit instead makes use of a Click shield, together with the Accel Click and 8x8 Click. The former is driven using example code from MikroElektronika, while the latter uses the excellent luma.led_matrix library.

Applications

from web3 import Web3, HTTPProvider
from web3.contract import ConciseContract
from web3.middleware import geth_poa_middleware

So now we finally get on to the actual use case applications and blockchain integration! We saw earlier that geth was configured to expose a number of APIs, one of these being web3. This is an HTTP/JSON-RPC based API that we could interact with directly, using various Python libraries to manually construct and parse payloads. However, the web3 Python library makes interacting with Ethereum smart contracts much easier.

Since we are using proof-of-authority we do also need to inject a middleware layer to add support for this, otherwise we will get an error, since proof-of-work is currently the default.

w3 = Web3(HTTPProvider('http://127.0.0.1:8545'))
w3.middleware_stack.inject(geth_poa_middleware, layer=0)

CarCrash = w3.eth.contract(
    contract, abi=abi, ContractFactoryClass=ConciseContract)

def IoTBCwrite(data):
    CarCrash.setImpact(int(data), transact={'from': account})

Above we can see that we connect to geth on port 8545 for the API, after which the aforementioned middleware layer is injected. Next we set up our smart contract, specifying its address, which is held in the variable contract and set via the YAML configuration file. We also pass the ABI definition in JSON format, which has been stored in abi.

At this point we can create a function that, when passed an integer, will in turn result in a function being called in our smart contract, with a variable updated and persisted to the blockchain. It really is quite simple once the infrastructure is in place: nodes participating in a private network, accounts configured on them and funded with Ether, smart contracts deployed, and a mechanism by which we can interact with them via the API.
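As an example of how IoTBCwrite() might be driven, a sensor reading could be gated by the impact_trigger value from config.yml before anything is written on-chain. This glue is a sketch, not code from the actual Car Crash application:

```python
def maybe_record_impact(reading, trigger, write):
    # Persist only readings above the configured threshold, to avoid
    # creating a blockchain transaction for every accelerometer sample
    if reading > trigger:
        write(int(reading))
        return True
    return False

# Usage might look like (with cfg loaded from config.yml and
# IoTBCwrite as defined above):
#   maybe_record_impact(impact, cfg['application']['impact_trigger'],
#                       IoTBCwrite)
```

Thresholding at the application layer keeps transaction volume, and hence gas usage, proportional to genuine events rather than to the sensor sampling rate.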

And how would we read the contents of that variable? Again, simple.

def IoTBCread():
    # Reading is a call, not a transaction, so no block needs to be
    # mined and the value is returned directly
    data = CarCrash.getImpact(call={'from': account})
    return data

Only this time it would need to be executed from the miner, or at least a node with its account configured, since in our smart contract we stated that only this account could call getImpact().

The applications running on the other demonstrator units are very similar. With these we are storing the time of machine failure, temperature alert or last leak, for which UNIX time is used, i.e. the number of seconds since the epoch, 00:00:00 UTC on 1st January 1970.
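Obtaining that timestamp in Python is a one-liner. The setFailureTime() call shown in the comment is hypothetical, standing in for the equivalent setter in the Machine Failure contract:

```python
import time

# Seconds since the epoch, 00:00:00 UTC on 1st January 1970,
# truncated to an integer for storage as a Solidity uint
timestamp = int(time.time())

# Hypothetical contract setter, analogous to setImpact() above:
# MachineFailure.setFailureTime(timestamp, transact={'from': account})
```

Storing an integer timestamp rather than a formatted string keeps the contract storage compact and leaves presentation to the reading application.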

Potential improvements

If we wanted to build on this to provide a more advanced demonstration, or for use in production, areas where improvements could be made include:

  • Configuration file format/structure

  • Blockchain data management, e.g.:

    • Use of a R/W filesystem more impervious to unclean shutdown and power cycling.

    • Fully automated recovery from filesystem corruption.

    • Use of “light” synchronisation mode, whereby minimal blockchain data is stored on the device and special sync nodes are set up to operate with a full database.

  • Much more sophisticated smart contracts

Regarding this last point, we have purposely kept the smart contracts very simple for the sake of clarity. In practice they would almost certainly be much more complex, involving numerous stakeholders at different stages in service provision.

Previous articles in this series

The design and build of the demonstrators is covered over the course of a total of five posts:

Andrew Back

Open source (hardware and software!) advocate, Treasurer and Director of the Free and Open Source Silicon Foundation, organiser of Wuthering Bytes technology festival and founder of the Open Source Hardware User Group.