
An air quality sensor alongside an eInk display showing a picture generated by an AI model

The aim of this project was to create an engaging way to monitor current environmental conditions by having an artificial intelligence draw an artistic picture representing the output of DesignSpark's air quality sensors. It also provides an additional interactive element, allowing viewers to vote on whether they believe a particular image was worth the resources required to create it, encouraging people to consider not just the local environmental conditions, but also how AI and machine learning can potentially contribute to global climate change. The equivalent distance that could be travelled in an average car is provided to put this in the context of other everyday activities.
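To give a rough idea of how that comparison can be made, here is a minimal sketch in Python; the emissions factor and the per-image CO2e figure are illustrative assumptions, not the exact values used by the project:

# Illustrative conversion from the CO2e cost of generating an image to an
# equivalent distance driven in an average car. The 0.17 kg CO2e/km factor
# is an assumed value for a typical petrol car, not the project's exact figure.
AVERAGE_CAR_KG_CO2E_PER_KM = 0.17

def co2e_to_car_km(image_co2e_kg):
    return image_co2e_kg / AVERAGE_CAR_KG_CO2E_PER_KM

# e.g. an image costing 0.05 kg CO2e equates to roughly 0.3 km of driving
print("%.2f km" % co2e_to_car_km(0.05))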

You can see all images generated by the project by following the @airqualityart Twitter bot, which tweets each picture as soon as it is created.

Hardware

DesignSpark provided me with a beta kit of their new Environmental Sensor Development Kit (ESDK), which includes sensors for measuring CO2 levels, temperature, relative humidity, volatile organic compounds (VOC) and particulate matter (PM). To this, I added an Inky Impression 7-colour eInk display and a second Raspberry Pi (182-2096).

To give the final result a slightly more artistic look, I then 3D printed a frame for the eInk display using biodegradable wood filament (910-7040). The design used for the frame can be found on Thingiverse. This was keenly supervised by Nelly.

3D printing of the display frame being supervised by my boss, Nelly

Although the frame now makes it look much nicer, some of the pictures it generates can still give you nightmares...

An image generated from the prompt "Nature is happy"

(This was generated from the prompt 'Nature is happy'...)

Software

I made some modifications to the ESDK's dashboard, first to allow people interacting with the device to see graphs of recent readings directly on the display, which has now been included upstream in the official ESDK image for everyone to make use of. I then added the ability for users to vote on whether they consider a particular piece of art to be worth the CO2e (carbon dioxide equivalent) required to produce it.

I had the ESDK generate a sentence based on the current environmental conditions: if the amount of CO2 was high it might produce a sentence like "The air quality is poor"; if the humidity was high, "The air is wet"; and high particulate matter could result in "The air is polluted". For each condition, it has a number of different phrases to randomly select from. These phrases are then passed to a machine learning model to produce an artistic representation of the current environmental conditions.
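A minimal sketch of how such a sentence might be selected is shown below; the thresholds and phrases are illustrative assumptions rather than the exact ones used by the ESDK integration:

import random

# Hypothetical thresholds and phrase lists, purely to illustrate the approach
PHRASES = {
    "co2_high": ["The air quality is poor", "The air is stuffy"],
    "pm_high": ["The air is polluted", "The air is dusty"],
    "humidity_high": ["The air is wet", "The air is damp"],
    "default": ["Nature is happy", "The air smells nice"],
}

def describe_conditions(co2_ppm, humidity_pct, pm_ugm3):
    # Pick a phrase for the most notable current condition
    if co2_ppm > 1000:
        key = "co2_high"
    elif pm_ugm3 > 35:
        key = "pm_high"
    elif humidity_pct > 70:
        key = "humidity_high"
    else:
        key = "default"
    return random.choice(PHRASES[key])

print(describe_conditions(co2_ppm=1200, humidity_pct=45, pm_ugm3=5))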

Generating Images

I trialled a couple of different approaches to image generation. Initially, I tried using ClipDRAW, which generates an image by manipulating a set number of vector paths (you can play with ClipDRAW directly via Replicate). The following image shows ClipDRAW's output after I burnt some toast in the kitchen next to the workshop:

Image generated for the phrase "The air is polluted"

While this approach produced reasonable results, the images it created were very similar to each other in terms of both style and content. To achieve a more diverse range of outputs I switched to using pixray, with a mix of the vqgan and pixel drawers (you can also experiment with pixray for yourself on Replicate). This was able to produce a much wider range of images in some very interesting styles:

A collection of four different output styles

To increase the variety of images, the prompts produced by the ESDK were randomly altered to include stylistic hints. So, for example, "The air quality is poor" might become "A watercolour painting of the air quality is poor", or "The air smells nice" might become "The air smells nice, by M C Escher" to get a particular artist's style.
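The sketch below shows one way this random decoration could be done; the lists of media and artists are illustrative assumptions, not the project's actual ones:

import random

# Hypothetical stylistic decorations, for illustration only
PREFIXES = ["A watercolour painting of ", "An oil painting of ", "A pencil sketch of "]
SUFFIXES = [", by M C Escher", ", in the style of Monet", ", as pixel art"]

def stylise(prompt):
    # Randomly prepend a medium or append an artist/style hint
    if random.random() < 0.5:
        return random.choice(PREFIXES) + prompt
    return prompt + random.choice(SUFFIXES)

print(stylise("The air quality is poor"))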

Both of the image generation models mentioned require a considerable amount of RAM and ideally GPU acceleration, so couldn't be run directly on a Raspberry Pi.

AWS Approach

Initially, I configured an AWS g3s.xlarge instance for the models to run on, which costs roughly £0.60/hour to run. To reduce costs and unnecessary energy wastage, I set up a second miniature web service on the server that hosts a number of my other projects. This service used the boto3 library to start the AWS instance, wait for it to be ready, make an image request, and then shut the AWS instance down again once complete. Since the system only generates an image once an hour, and images require 5-10 minutes to generate, this drastically reduced the cost to around £0.05/hour. Below is a simple example script, in case you wish to take a similar approach in any of your own projects:

import socket
import sys
import time

import boto3
import requests
from botocore.config import Config

# Replace these with your own values
AWS_KEY = "yoursecretkey"
AWS_SECRET = "yoursecrettoken"
AWS_REGION = "eu-west-1"
INSTANCE_ID = "yourawsmachine"

config = Config(
    region_name = AWS_REGION
)

# Connect to AWS
ec2 = boto3.resource('ec2',
    config=config,
    aws_access_key_id=AWS_KEY,
    aws_secret_access_key=AWS_SECRET
)

# Start our instance and wait until it's ready
instance = ec2.Instance(id=INSTANCE_ID)
instance.start()
print("Waiting for server...")
instance.wait_until_running()

# wait_until_running() just waits until the server has booted,
# we need to wait further until the webserver providing an interface
# to our AI model has started
retries = 15
retry_delay = 10
retry_count = 0
while retry_count <= retries:
    # Check whether the webserver is accepting connections on port 80
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    result = sock.connect_ex((instance.public_ip_address, 80))
    sock.close()
    if result == 0:
        print("Server up")
        break
    else:
        print("Still waiting for server...")
        time.sleep(retry_delay)
        retry_count += 1

if retry_count > retries:
    instance.stop()
    sys.exit("Service did not start correctly")

# Make a call to the web server running on the AWS instance,
# modify this to set the appropriate parameters for your AI model
response = requests.get("http://%s/?your_parameters=here" % instance.public_ip_address)

# Shutdown the AWS instance
print("Shutting down image server...")
instance.stop()
print("Server shut down")

# The 'response' variable will now contain the result of your request,
# for image data you can access this through response.content,
# for JSON data through response.json() and for plain text through response.text

 

Replicate

I was fortunate enough to be given early access to Replicate's new API service. Replicate provides a service for easily hosting machine learning models, offering a simple web interface that lets people experiment with a model quickly and easily. In addition to this, they now offer an API, providing programmatic access to those hosted models. This allowed me to replace my slightly complicated AWS set-up with a few simple API calls using the replicate Python module. For example, getting an image from the pixray model is as simple as:

import replicate
import requests

model = replicate.models.get("dribnet/pixray")
prediction = replicate.predictions.create(
    version=model.versions.list()[0],
    input={"prompts": "A watercolor of a happy gray cat."}
)

# Wait for the prediction to complete
prediction.wait()

# Fetch the final output image
response = requests.get(prediction.output[-1])

 

Twitter

A screenshot of a tweet

As a finishing touch, I created a Twitter bot, @airqualityart, which tweets out the generated images as soon as they're created, using the tweepy Python module. One thing to note is that Twitter's v2 API doesn't currently support media uploads, so it was necessary to request elevated developer permissions from Twitter in order to use the older v1.1 API, which does. The following snippet shows how to send a tweet with an image based on the response returned by either the AWS or Replicate method above:

import tweepy
from io import BytesIO

# Authenticate with Twitter (credentials can be obtained by signing
# up for a Twitter Developer account)
auth = tweepy.OAuthHandler(YOUR_CONSUMER_KEY, YOUR_CONSUMER_SECRET)
auth.set_access_token(YOUR_ACCESS_TOKEN_KEY, YOUR_ACCESS_TOKEN_SECRET)
api = tweepy.API(auth)

# Upload media directly from the response; media_upload() needs a filename
# (used to infer the media type) as well as the file object itself
media = api.media_upload(filename="image.png", file=BytesIO(response.content))

# Post tweet
api.update_status(
    "Look at this cool picture I tricked a computer into drawing for me!",
    media_ids=[media.media_id]
)

 

DesignSpark Cloud

Grafana dashboard for the DesignSpark Cloud

All of the data collected from the sensors is stored in the DesignSpark Cloud. While I wasn't able to display the images natively within the cloud dashboard, I was able to add a link to the latest image provided by my web service. Metadata for all the images, including when they were created, is also recorded by my web service, so the data in the DesignSpark Cloud can be used to retroactively determine what the environmental conditions were like at the specific time a particular image was created.
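As an illustration of the sort of record the web service could keep (the field names and storage format here are assumptions, not the service's actual schema), each image can be logged with a creation timestamp that is later matched against the sensor data in the DesignSpark Cloud:

import json
import time

def record_image_metadata(image_url, prompt, co2e_kg, path="image_metadata.json"):
    # The creation timestamp is what lets an image be matched up with the
    # sensor readings stored in the DesignSpark Cloud at that moment
    entry = {
        "created": int(time.time()),
        "image_url": image_url,
        "prompt": prompt,
        "co2e_kg": co2e_kg,
    }
    try:
        with open(path) as f:
            entries = json.load(f)
    except FileNotFoundError:
        entries = []
    entries.append(entry)
    with open(path, "w") as f:
        json.dump(entries, f, indent=2)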

What next?

I'll be taking the device to a variety of public events so that people can interact with it in person in a range of contexts and place their votes on artworks. Once I've taken it around a few different communities (tech, artistic, environmental, etc.) I'll report back on people's impressions of the project and the art it generates.

I'm a software engineer from Newcastle upon Tyne. I work for Metis Labs, where we develop solutions for reducing waste in industrial settings using a mixture of computer vision and machine learning. In my spare time, I also build odd things like props for stage shows, strange toys for friends and other vaguely artistic endeavours.