
Industrial Security Part 6: The Challenge-Response Concept

Part five showed you the embedded application of all the cryptographic methods we learned in parts one to four. You could see the parallels between secure internet connections in the IT world and the use of the same algorithms and methods in an authenticator chip. In this part, I would like to try another approach to embedded security by first defining a general requirement and then the general method to meet it. I highly recommend reading the first five parts first: the details of this part will be much easier to follow if you are familiar with the underlying cryptographic principles.

Mission: authenticity


A core requirement for security is the ability to authenticate commands and data. In the world of IIoT, there is no longer a single dedicated cable which guarantees that a value really comes from a particular sensor. It is no longer an authorised service engineer connecting a programmer's cable to the PLC; updates arrive over the internet. And who guarantees that an authorised user has issued the heater-on command if it no longer comes from a switch but via the internet?

But IoT applications are not the only ones that need authentication. In times when maker equipment like 3D scanners and printers is readily available, it has become hard for a manufacturer to protect its OEM peripheral products or consumables against counterfeiting by using product-specific housings with exotic shapes. All these examples show the demand for authentication. But how can embedded devices authenticate themselves to each other?

Solution: Challenge-response authentication


Let's take a system which needs to check the authenticity of a device. The system can use cryptographic keys in combination with a challenge-response procedure to prove the authenticity of the device. It creates a random message (called the challenge) and sends it to the device. The device answers with a response. If the challenge or the response is encrypted, then both system and device need the ability to encrypt or decrypt the message. The possession of a secret key thus becomes the proof of authenticity.

In symmetric-key cryptography, both system and device need the same key. The system, for example, sends an unencrypted challenge to the device. The device encrypts it using the secret symmetric key, and the encrypted message becomes the response. The system then decrypts the response with its own key and compares the result with its original challenge. If both match, the device is proven to be authentic.

[Figure: challenge-response authentication with a symmetric key]
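The following Python sketch illustrates this symmetric flow. It is a minimal model, assuming a pre-shared 128-bit AES key and a single-block challenge; a real authenticator chip would implement the cipher (or a MAC) in hardware, and the function names here are purely illustrative.

```python
# Minimal sketch of the symmetric flow described above, assuming a pre-shared
# 128-bit AES key and a single-block (16-byte) challenge. A real authenticator
# chip would do this (or use a MAC) in hardware; names are illustrative only.
import secrets
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

SHARED_KEY = secrets.token_bytes(16)   # provisioned into system and device beforehand

def device_respond(challenge: bytes) -> bytes:
    """Device side: encrypt the challenge with the shared secret key."""
    encryptor = Cipher(algorithms.AES(SHARED_KEY), modes.ECB()).encryptor()
    return encryptor.update(challenge) + encryptor.finalize()

def system_authenticate() -> bool:
    """System side: send a random challenge, decrypt the response, compare."""
    challenge = secrets.token_bytes(16)            # one AES block
    response = device_respond(challenge)           # would travel over the wire
    decryptor = Cipher(algorithms.AES(SHARED_KEY), modes.ECB()).decryptor()
    recovered = decryptor.update(response) + decryptor.finalize()
    return secrets.compare_digest(recovered, challenge)

print(system_authenticate())   # True only if the device holds the same key
```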

In asymmetric-key cryptography, the system sends a random challenge message to the device. The device calculates the hash of this challenge plus some individual data of the device (such as its serial number). It uses a random number and the private key of a key pair as inputs for ECDSA (the Elliptic Curve Digital Signature Algorithm), which generates a signature over that hash. This signature is the response which the device sends back to the system. The system uses its challenge and the individual device data (which it can read from the device) to calculate the same hash. This hash, the public key of the key pair and the signature from the device are used as inputs for an ECDSA verification procedure. If the verification succeeds, the device is authentic.

[Figure: challenge-response authentication with asymmetric keys (ECDSA)]
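As a counterpart to the symmetric sketch, here is a minimal model of the asymmetric flow using ECDSA over NIST P-256 with the Python cryptography package. Software key generation and the serial-number constant are assumptions for illustration; on a chip like the DS28E38, the private key never leaves the device.

```python
# Minimal sketch of the asymmetric flow with ECDSA over NIST P-256, using the
# Python "cryptography" package. Software key generation and the serial-number
# constant are assumptions for illustration; on a chip like the DS28E38 the
# private key never leaves the device.
import secrets
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

device_private_key = ec.generate_private_key(ec.SECP256R1())   # stays inside the device
device_public_key = device_private_key.public_key()            # known to the system
DEVICE_SERIAL = b"SN-0001"                                      # individual device data

def device_sign(challenge: bytes) -> bytes:
    """Device side: sign the challenge plus its serial number."""
    return device_private_key.sign(challenge + DEVICE_SERIAL, ec.ECDSA(hashes.SHA256()))

def system_authenticate() -> bool:
    """System side: send a random challenge and verify the returned signature."""
    challenge = secrets.token_bytes(32)
    signature = device_sign(challenge)              # travels back over the wire
    try:
        device_public_key.verify(signature, challenge + DEVICE_SERIAL,
                                 ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

print(system_authenticate())
```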

Most use cases profit from asymmetric-key cryptography when it comes to finding practical ways of mass deployment. Being able to disclose one of the keys without any security risk allows far more flexible processes than having to keep that key secret.

This leads us to an important question: Where are the weak points of challenge-response procedures, and how can we address such concerns?

The pitfalls


Many of the algorithms used in asymmetric cryptography are "hungry": they consume lots of computational power and memory, and you can reach the limits of an embedded controller very quickly. Cryptography is therefore a perfect use case for specialised co-processors. In the case of counterfeit protection of a consumable, there is often no processor at all. In such cases, all the algorithms and resources need to be integrated into one chip (ECDSA signature/verification calculator, TRNG, SHA-256 calculator).


As always, we humans can be the weakest point when it comes to security. The decision to save computing time and memory by re-using the challenge message, and thus knowing the correct response without repeated calculations, is a severe security risk. Although the mistake might seem obvious to you, it was made by a leader of the video game market. The assumption that no one will discover your tiny little secret is fatal: someone always will, as long as the data exchange between system and device is public. In part one, we learned a fundamental concept of digital key security: we do not keep procedures secret, only the keys used by the procedure. By re-using the challenge, you no longer rely on a protected secret key. You rely on the hope that no one taps the communication between system and device, because an attacker could easily detect that the system's challenge is always the same and simply replay the device's recorded response as a counterfeit's fake response.

Re-using the challenge is an egregiously faulty design. But doing without a TRNG (true random number generator) for the signature calculations and using a poor "source of entropy" instead can lead to similar problems. In part four, you learned about an exceptional characteristic of ECC (elliptic curve cryptography): ECDSA always needs a third, random key (the "ephemeral key") to be secure. Without this random factor, the two generated signature values (r and s) would allow an attacker to calculate the secret key. An inferior RNG with predictable output undermines the goal of closing this gateway for key extraction.
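To see why the ephemeral key matters, the sketch below simulates only the ECDSA signing equation (not the curve operations) and recovers the private key from two signatures that reuse the same ephemeral key. The concrete numbers are made up for illustration; the algebra is the standard textbook attack.

```python
# ECDSA signing equation: s = k^-1 * (z + r*d) mod n, with private key d,
# ephemeral key k, message hash z and curve order n. The curve operations are
# left out; only the algebra that leaks the key is simulated here.
n = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551  # NIST P-256 order

d = 0x1CEB00DA                     # the "secret" private key (illustrative value)
k = 0x00C0FFEE                     # ephemeral key, wrongly reused for two signatures
r = 0x0D15EA5E                     # in real ECDSA r comes from k*G; any fixed value shows the algebra
z1, z2 = 0xAAAA1111, 0xBBBB2222    # hashes of two different challenges

s1 = pow(k, -1, n) * (z1 + r * d) % n
s2 = pow(k, -1, n) * (z2 + r * d) % n

# An eavesdropper who sees (r, s1, z1) and (r, s2, z2) recovers k, then d:
k_recovered = (z1 - z2) * pow((s1 - s2) % n, -1, n) % n
d_recovered = (s1 * k_recovered - z1) * pow(r, -1, n) % n

print(hex(d_recovered))            # prints 0x1ceb00da: the private key is exposed
```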

A signature over the random challenge alone would only prove possession of the private key. In the real world, you will most of the time have many devices out there possessing the same private key, so such authentication would only prove the origin of the device; it can't reliably prove the authenticity of a single individual device. You can enable such individual authentication by having the device send some kind of individual data, like its serial number. The system includes this individual data together with the random challenge in the hash for the ECDSA verification, and the device does the same when calculating the signature. The result is proof of both origin and the individual property (e.g. the serial number). Imagine a system which communicates with several sensors over a public network: such a system must rely on the authenticity of an individual sensor, not just on the origin of a sensor.


Deploying a private key into the devices during production introduces a security risk. A single small act of negligence could compromise the complete system: if the private key is no longer secret, the authenticity of all devices carrying this key can no longer be guaranteed. To avoid this worst-case scenario, we have already got to know the possibility of using "chip DNA" to generate a unique private key. A PUF (physical unclonable function) is used to derive a private key from the intrinsic random physical properties of an individual chip. Any attempt to probe the chip's low-level electrical properties during operation ("side attacks") alters the PUF output. With PUF technology, you no longer need to deploy the private key during production; it is already intrinsically built in.

But now you would need to deploy the public key of each individual device. To overcome this problem, we need a two-phase procedure, which is explained in detail in part 5 for the DS28E38 chip. During production, the device gets a certificate copied into its memory which is generated by the production's CA (certificate authority) computer during the end-of-line test of the device. The CA computer uses its private key for this certificate, and the corresponding public key is copied into the systems during their production. The systems can then authenticate the devices by verifying their certificate. The trick is to place not only individual data like the serial number into the certificate but also the public key corresponding to the device's PUF-generated private key. This second asymmetric key pair is used for a second ECDSA verification, of the device's response to the system's challenge.
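A rough Python sketch of this two-phase idea follows. The "certificate" here is simply a CA signature over the device's public key and serial number, which is a simplification and not the DS28E38's actual certificate or memory layout; all key handling happens in software only for the sake of the example.

```python
# Two-phase sketch: at production time the CA signs each device's public key
# plus serial number; in the field the system checks that "certificate" first,
# then runs the ECDSA challenge-response from the earlier sketch against the
# now-trusted device public key.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

# --- production (end-of-line test) ---------------------------------------
ca_key = ec.generate_private_key(ec.SECP256R1())        # stays with the CA computer
device_key = ec.generate_private_key(ec.SECP256R1())    # PUF-derived on the real chip
serial = b"SN-0001"
device_pub = device_key.public_key().public_bytes(
    serialization.Encoding.X962, serialization.PublicFormat.UncompressedPoint)
certificate = ca_key.sign(device_pub + serial, ec.ECDSA(hashes.SHA256()))  # stored in the device

# --- in the field ---------------------------------------------------------
ca_public_key = ca_key.public_key()                     # copied into every system

def device_is_certified(device_pub: bytes, serial: bytes, certificate: bytes) -> bool:
    """Phase 1: verify that the CA vouches for this public key / serial pair."""
    try:
        ca_public_key.verify(certificate, device_pub + serial, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

if device_is_certified(device_pub, serial, certificate):
    # Phase 2: rebuild the device's public key and run the usual
    # challenge-response verification against it (see the earlier sketch).
    trusted_pub = ec.EllipticCurvePublicKey.from_encoded_point(ec.SECP256R1(), device_pub)
    print("certificate valid, proceed with challenge-response")
```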

Another scenario

Until now, I have only talked about a scenario where a system tries to authenticate a device, which could be a sensor, a peripheral or a consumable. But let's think of a different scenario: your embedded controller is connected to the internet, and a remote master system issues commands to the embedded controller or sends firmware updates. The embedded controller needs to verify the authenticity of such commands or updates. So how could we use the cryptographic methods we've learned for this task? I'll give you a perfect example of how to solve this problem with another chip from Maxim Integrated: the DS28C36 (168-2897).

[Figure: command authentication with the DS28C36]

Your controller reads the message (updated firmware or commands to be executed) from the internet. The sender needs to transmit the message together with a signature of this message, created with a private key A. The corresponding public key was copied into the security chip of each embedded controller during production. The embedded system uses an I²C interface to forward the message and the signature to the DS28C36 chip. The chip performs a complete ECDSA signature verification by calculating an SHA-256 hash of the message and using this hash together with the built-in public key A to authenticate the signature. The pass/fail result drives a digital output pin of the chip. The pin can be read by the controller or even directly drive an actuator.

In a scenario where you need to prevent any physical manipulation of the embedded system (e.g. faking the digital output pin of the DS28C36), the chip offers a secured digital transmission of the verification result. It uses the result, a TRNG, and a built-in private key B to calculate a signature over the result. The embedded system knows the corresponding public key B and can verify this signature together with the state of the output pin.
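The sketch below illustrates this authenticated result path. What the DS28C36 actually signs, and in which format, is defined in its data sheet; the payload of one result byte plus a fresh random value used here is an assumption, meant only to show the idea that the pass/fail result itself is protected by private key B.

```python
# Sketch of the authenticated result path. The payload format (one result byte
# plus a fresh random value) is an illustrative assumption, not the DS28C36's
# actual data layout; the point is that the result is signed with key B.
import secrets
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

chip_private_key_b = ec.generate_private_key(ec.SECP256R1())   # built into the chip
host_public_key_b = chip_private_key_b.public_key()            # known to the host firmware

def chip_report(verification_passed: bool) -> tuple[bytes, bytes]:
    """Chip side: sign the verification result together with a random value."""
    payload = bytes([verification_passed]) + secrets.token_bytes(16)
    signature = chip_private_key_b.sign(payload, ec.ECDSA(hashes.SHA256()))
    return payload, signature

def host_check(payload: bytes, signature: bytes) -> bool:
    """Host side: accept the reported result only if the signature checks out."""
    try:
        host_public_key_b.verify(signature, payload, ec.ECDSA(hashes.SHA256()))
        return payload[0] == 1
    except InvalidSignature:
        return False

payload, signature = chip_report(True)
print(host_check(payload, signature))   # True: the result really comes from the chip
```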

This last example shows perfectly how easy it is to add state-of-the-art security to an embedded system. The embedded programmer does not even need to know any of the underlying cryptographic calculations: forwarding the message from the internet and waiting for the Boolean response is all it takes. So don't be afraid of security demands!

The next part will describe the evaluation kit for the DS28E38 and give more examples of crypto chips on the market.

Previous parts 1, 2, 3, 4 & 5

Volker de Haas started electronics and computing with a KIM1 and machine language in the 70s. Then FORTRAN, PASCAL, BASIC, C, MUMPS. Developed complex digital circuits and analogue electronics for neuroscience labs (and his MD grade). Later: database engineering, C++, C#, industrial hard- and software developer (transport, automotive, automation). Designed and constructed the open-source PLC / IPC "Revolution Pi". Now offering advanced development and exceptional exhibits.