Inter-Chip Communication: Design Considerations to Mitigate Commonly Overlooked Attack Paths

Introduction

At Praetorian, we perform security assessments on a variety of Internet of Things (IoT) devices, ranging from commodity home “smart” devices to medical devices, critical infrastructure, and autonomous vehicles. While previous blog posts have discussed the general methodology we follow for a complete assessment, the “chip-to-cloud” nature of our reviews has led us to one particularly interesting attack vector: the inter-chip communication within a single device. Because they must communicate over multiple wireless interfaces (ex. Wi-Fi, BLE), some devices are designed with multiple MCUs, each of which supports a different protocol and, by necessity, needs to communicate with the others over local inter-chip links on a circuit board.

This blog post explores common design patterns that use inter-chip communication, scenarios where attacks against this interface are particularly relevant, and design considerations that protect this communication. Some of the concepts discussed herein, such as spoofing and replay, also apply more broadly to the various communication links in an IoT ecosystem. Other hardware-specific attack vectors, such as dumping flash memory or debugging via JTAG, are also interesting and often used in conjunction with attacks on inter-chip communication, but they are outside the scope of this blog post and won’t be discussed here.

Defining a Threat Model for Inter-Chip Communication

A device that communicates over two or more different wireless interfaces is a common sight in IoT security assessments. The most ubiquitous interfaces are Wi-Fi and Bluetooth Low Energy (BLE), but others include a cellular modem, Zigbee (or other 802.15.4 mesh network protocols), and Near-Field Communication (NFC). Without loss of generality, let’s assume for the remainder of this post that we have a device with the first two: Wi-Fi and BLE.

A typical use case for such a device might be that the BLE interface is responsible for communicating with a mobile application, another device in the vicinity, or a bridge/hub type device. The Wi-Fi interface might then be used to talk to a local Wi-Fi access point to directly access cloud services when the BLE interface is disconnected from all nearby devices.

In most scenarios, such a device might be designed with a microcontroller with Wi-Fi capabilities, a microcontroller with BLE capabilities, and optionally some external flash memory for storage of non-sensitive data. For a low-risk device, this could result in a diagram with the following trust boundary around the entirety of the IoT device, as shown in Figure 1.

[Image: inter-chip threat model diagram 1]

Figure 1: Trust boundary for low-risk device.

However, we wish to consider scenarios where the trust boundary is not at the exterior of the device but is instead constrained to each individual MCU. Developers sometimes overlook these internal interactions, so they can harbor interesting vulnerabilities. For this type of scenario, we can adjust our diagram as shown in Figure 2.

[Image: inter-chip threat model diagram 2]

Figure 2: Trust boundary for high-risk device.

Figure 2 also labels the link between the BLE and Wi-Fi MCUs as “UART”, the simplest hardware communication protocol found in a wide variety of devices. The remainder of this post will assume that these two MCUs communicate over UART, but the same concepts should apply to any other protocol such as I2C, SPI, or USB.

Having defined the basic layout of communication with our IoT device, we can turn our attention to which portions of a threat model might include an attack involving inter-chip communication. To do this, we’ll answer a series of questions that inform aspects of our threat model, such as which user roles are relevant and what goals an attacker might have. Since this blog post focuses on inter-chip communication, we can assume that the attack surface is defined as a physical attack on the hardware, which might require extended access.

Will the IoT device reside in a public or semi-public place? Devices located primarily in public places could include a kiosk in a retail store or a scooter/bike share in an outdoor area. Devices located in semi-public places could be cameras in an office environment, locks for hotel rooms, or medical equipment in hospitals. If the intended use case calls for the device to be in one of these locations, then the risk profile for the device should be elevated since it might be outside of the owner’s control for an extended period.

Is there an expectation that untrusted users will have access? While the distinction between public/private devices is important, the owner might closely hold or monitor some devices in public areas. For example, a key fob is likely to be always in the possession of its owner since possession of it gives access to the owner’s car, home, or other locked device. Similarly, a personal medical device might always be attached to the owner, so an untrusted user should not be able to access it without the owner’s knowledge or permission.

Is the type of data sent between two MCUs sensitive? Any keys, tokens, or PINs for a single user or an account would be considered sensitive information. Other secret information entrusted to the device might include a Wi-Fi SSID and password or other configuration information for the specific use case of the device.

Does the data sent between two MCUs require integrity? State-changing information, like “lock/unlock”, configuration information, and firmware updates, needs to be trusted when sent between MCUs. This information might not necessarily be sensitive – as in, an MCU reports that the device is “locked” or debug mode is “disabled” – but the recipient needs to be able to trust that the information is an accurate representation of the state of the device.

Do a large set of IoT devices use the same secret information? Many attacks on IoT devices involve an attacker deconstructing a single device, discovering that the cryptographic keys are identical on all devices, and then leveraging that knowledge into attacking all other IoT devices of the same make/model. If the manufacturer has a history of using “fixed keys” or if the developers are unaware of secure cryptographic principles, that is relevant to the attack paths of a threat model.

Security Assessment

Once we have defined the threat model for the target IoT device and identified a relevant attack path involving inter-chip communication, we can begin reviewing this interface. While a source code review is often faster than a hands-on hardware review, the latter better demonstrates what an attacker could actually achieve. It might also be necessary to confirm that the provided source code matches the data an attacker could capture on the wire.

If the developer, an internal team, or a trusted partner performs this assessment, then they may have access to the source code for the firmware of each MCU. In general, data sent between two MCUs is likely to be structured in some manner, with a header, data frame, or defined message format. When reviewing source code, best practice is to identify the library that defines these structures and work backward to the functions that call into it with relevant data. A simplified example of such a message structure is sketched below.
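For illustration, here is a minimal sketch of what an inter-chip frame definition might look like in firmware source. The field names, magic value, and framing are hypothetical; real products define their own headers, opcodes, and checksums.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical framing for a UART link between two MCUs. */
#define FRAME_MAGIC  0xA55Au
#define MAX_PAYLOAD  128u

typedef struct {
    uint16_t magic;                 /* constant start-of-frame marker            */
    uint8_t  opcode;                /* ex. 0x01 = set Wi-Fi creds, 0x10 = unlock */
    uint8_t  length;                /* number of valid bytes in payload[]        */
    uint8_t  payload[MAX_PAYLOAD];
    uint16_t crc;                   /* CRC over opcode, length, and payload      */
} interchip_frame_t;

/* Validate the parts of a received frame that do not require cryptography. */
int frame_is_well_formed(const interchip_frame_t *f, uint16_t computed_crc)
{
    if (f->magic != FRAME_MAGIC) return 0;
    if (f->length > MAX_PAYLOAD) return 0;
    if (f->crc != computed_crc)  return 0;
    return 1;
}
```

Searching the codebase for constants like the start-of-frame marker or for the structure definition itself quickly leads back to every function that builds or parses these frames.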

When source code is not available, the attacker must resort to a “black box” approach to testing. This starts with a deconstruction of the device by removing the external casing to expose the printed circuit board (PCB), followed by decomposition of the MCUs and traces on the PCB, as we have documented in our other blog posts. The most common tool for capturing inter-chip communication is a logic analyzer such as the Saleae. If the pinout of the device is already known, then an attacker could connect a less expensive device such as a USB-to-UART serial cable directly to the PCB so that a computer can interact directly with the UART interface. The security community has already performed extensive work on injecting data into this communication channel, as Highland, Kienow, and Barry describe in their whitepaper.
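As a concrete example of the USB-to-UART approach, the following sketch opens a serial device on a Linux host and dumps the raw bytes in hex for later analysis. The device path and baud rate are assumptions and must match the target’s UART configuration.

```c
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <termios.h>

int main(void)
{
    /* Assumed device path and baud rate; adjust for the target PCB. */
    int fd = open("/dev/ttyUSB0", O_RDONLY | O_NOCTTY);
    if (fd < 0) { perror("open"); return 1; }

    struct termios tio;
    if (tcgetattr(fd, &tio) != 0) { perror("tcgetattr"); return 1; }
    cfmakeraw(&tio);               /* raw mode: no line editing or translation */
    cfsetispeed(&tio, B115200);
    cfsetospeed(&tio, B115200);
    if (tcsetattr(fd, TCSANOW, &tio) != 0) { perror("tcsetattr"); return 1; }

    unsigned char buf[64];
    for (;;) {
        ssize_t n = read(fd, buf, sizeof(buf));
        if (n <= 0) break;
        for (ssize_t i = 0; i < n; i++)
            printf("%02x ", buf[i]); /* hex dump of captured traffic */
        fflush(stdout);
    }
    close(fd);
    return 0;
}
```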

While setting up capture or injection capabilities is important, another useful technique involves forcing the device to send data between chips using the device’s external functionality. When passively capturing inter-chip communication, Praetorian has found that the following actions often result in interesting data. Note that our best practice is to perform the same operation more than once, both on the same device and across two different devices associated with different accounts. This aids in the data analysis step that follows.

  • Booting the device
  • Factory reset and provisioning the device to a new account
  • Forcing a firmware update from a mobile device (via Bluetooth)
  • Forcing a firmware update from a related account (via a web interface on the device or directly with a cloud service)
  • Performing state-changing operations directly with the device, such as enable/disable, lock/unlock, or record
  • Changing the configuration of a device via some remote management utility (typically a web interface or direct API calls to a cloud service)

Once the captures are complete, the next step is to review the traces for recognizable patterns or known values for the device. Solid data analysis of this sort is an art and is too nuanced to describe fully in a blog post. Nevertheless, there is often low-hanging fruit to look for, assuming the communication is not encrypted. This includes common data structures (ex. protobuf, CBOR), frame information (ex. data length, CRCs), known values for the device (ex. serial number, MAC address), or simply easily recognizable values (ex. strings or ASCII). The main objective of this entire process is to identify information described in the threat model developed earlier, whether that be sensitive information such as keys/tokens or data requiring integrity such as firmware updates. Additionally, if this analysis reveals data not previously expected or included in the threat model, then the threat model should be updated as appropriate.
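As a simple first pass over a captured trace, the sketch below scans a byte buffer for runs of printable ASCII, much like the Unix strings utility; recognizable serial numbers, SSIDs, or JSON keys often surface this way. The minimum run length is an arbitrary choice.

```c
#include <stdio.h>
#include <stddef.h>
#include <ctype.h>

/* Print runs of printable ASCII of at least `min_len` bytes from a capture. */
void find_ascii_runs(const unsigned char *buf, size_t len, size_t min_len)
{
    size_t start = 0, run = 0;
    for (size_t i = 0; i <= len; i++) {
        if (i < len && isprint(buf[i])) {
            if (run == 0) start = i;   /* beginning of a new printable run */
            run++;
        } else {
            if (run >= min_len)
                printf("offset %zu: %.*s\n", start, (int)run, &buf[start]);
            run = 0;
        }
    }
}
```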

At this point, if any sensitive information has been identified purely through a passive review, then this might constitute a security vulnerability. Praetorian advocates for a responsible disclosure process whenever possible, and if we identify this vulnerability during a security assessment, then we provide the results directly to our clients.

Attack Demonstration

With the assessment complete and the commands or messages sent between the two MCUs properly identified, some hands-on work may still be necessary: the attacker would inject new communication in order to demonstrate an attack path or vulnerability. Even if the data analysis of passive captures is incomplete, an attacker may still experiment with the fuzzing or replay attacks described here.

One-to-Many Attacks: One of the most common issues identified in IoT devices is the reuse of keys or missing cryptography altogether. In practice, this means that an attacker can perform a thorough analysis of an IoT device they own and then leverage that knowledge to attack another device. If key material is shared or not used at all, then a message sent by one device could be repurposed for another device to perform a variety of actions that might compromise the second device.

Replay Attacks: This attack occurs when the IoT device sends a command that can later be replayed to perform the same action. If this action is a security control specific to the device, such as “unlock” or “stop recording”, then the attacker might be able to issue that security-sensitive request again without proper authorization.
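In its simplest form, a replay over a UART link is just a matter of writing previously captured bytes back onto the bus, for example through the same USB-to-UART cable used for capture. A minimal sketch is shown below; the frame contents are placeholders for bytes recorded during passive capture, and the file descriptor is assumed to be a serial port configured as in the earlier capture example.

```c
#include <unistd.h>
#include <stdint.h>
#include <stddef.h>

/* Replay a previously captured frame over an already-opened serial fd. */
int replay_frame(int fd, const uint8_t *frame, size_t len)
{
    size_t sent = 0;
    while (sent < len) {
        ssize_t n = write(fd, frame + sent, len - sent);
        if (n < 0) return -1;      /* write error */
        sent += (size_t)n;
    }
    return 0;
}
```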

Security Downgrade: One of the most common pieces of data sent between two MCUs is a firmware or software update. An attacker might be able to force a device to install an outdated firmware bundle with known security flaws. Alternatively, some devices might also send commands over inter-chip communication related to the manufacturing or provisioning state of the device. If commands exist that allow security features like a JTAG lock or debug output to be toggled, then an attacker might be able to send these commands between MCUs to enable further analysis of or access to the device. The attacker might also generalize this “downgrade” attack further to alter any configuration information sent between MCUs.

Device or Account Impersonation: Another attack path unique to inter-chip communication might be the ability to forge data sent to a cloud service. For instance, if the BLE MCU wants to send a request to the cloud, it might first have to send that data to the Wi-Fi MCU, which would forward the request via a configured Wi-Fi AP. Using this same channel, an attacker might be able to inject fake data into the account configured for the device. This might constitute a security vulnerability depending on the context of the device.

Recommendations and Security Controls for Inter-Chip Communication

The main recommendations for preventing attacks against inter-chip communication involve cryptography in one form or another. However, an effort to minimize sensitive data exchange between MCUs is an essential precursor to designing a protocol with cryptography. In order to minimize the attack surface, items like internal configuration state, tokens, and passwords should only travel along this interface when necessary. Furthermore, when possible, data should be stored on an MCU in an internal flash so that it persists across a power cycle.

Nearly all data exchanged between two MCUs should include a cryptographic signature to ensure that the data has integrity. The first option is a symmetric “signature” such as an HMAC. This would require that for each inter-chip communication link, the developer provision the MCU on each end with the same secret key value. Ideally, this provisioning step should result in each MCU pair containing a unique key that cannot be changed by an attacker after the device is associated with an account. This could happen within a factory environment, during a factory reset, or when provisioning a device to a new account/user.
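A minimal sketch of the symmetric approach is shown below, using mbed TLS as one common embedded crypto library; the library choice and the function names here are assumptions, not a prescription. Both MCUs would hold the same per-link key and compute an HMAC-SHA256 over each frame.

```c
#include <string.h>
#include "mbedtls/md.h"

#define TAG_LEN 32  /* HMAC-SHA256 output size */

/* Compute an HMAC-SHA256 tag over a frame using the shared per-link key. */
int frame_sign(const unsigned char *key, size_t key_len,
               const unsigned char *frame, size_t frame_len,
               unsigned char tag[TAG_LEN])
{
    const mbedtls_md_info_t *md = mbedtls_md_info_from_type(MBEDTLS_MD_SHA256);
    if (md == NULL) return -1;
    return mbedtls_md_hmac(md, key, key_len, frame, frame_len, tag);
}

/* Verify the tag on a received frame. Production code should use a
 * constant-time comparison rather than memcmp. */
int frame_verify(const unsigned char *key, size_t key_len,
                 const unsigned char *frame, size_t frame_len,
                 const unsigned char tag[TAG_LEN])
{
    unsigned char expected[TAG_LEN];
    if (frame_sign(key, key_len, frame, frame_len, expected) != 0) return -1;
    return memcmp(expected, tag, TAG_LEN) == 0 ? 0 : -1;
}
```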

The second option for signatures is an asymmetric “digital signature”. Given that most inter-chip communication travels over a single link rather than to a wide variety of devices, this method may be a poor fit for most use cases. However, firmware images are the exception to this rule. Since a trusted party constructs these and distributes them to many devices, it is often necessary to store the public key for firmware signatures on each device and have only the trusted party hold the private key.
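A sketch of verifying a firmware signature with mbed TLS follows; it assumes the device stores the signer’s public key (for example, provisioned at manufacture) and that the update bundle carries a signature over a SHA-256 hash of the image. The library choice, key encoding, and exact API version are assumptions.

```c
#include "mbedtls/pk.h"
#include "mbedtls/sha256.h"

/* Verify a detached signature over a firmware image using a public key
 * provisioned on the device (PEM or DER encoded). Returns 0 on success. */
int firmware_verify(const unsigned char *pubkey, size_t pubkey_len,
                    const unsigned char *image, size_t image_len,
                    const unsigned char *sig, size_t sig_len)
{
    unsigned char hash[32];
    mbedtls_pk_context pk;
    int ret;

    /* Hash the candidate firmware image (last argument 0 selects SHA-256). */
    mbedtls_sha256(image, image_len, hash, 0);

    mbedtls_pk_init(&pk);
    ret = mbedtls_pk_parse_public_key(&pk, pubkey, pubkey_len);
    if (ret == 0)
        ret = mbedtls_pk_verify(&pk, MBEDTLS_MD_SHA256,
                                hash, sizeof(hash), sig, sig_len);
    mbedtls_pk_free(&pk);
    return ret;
}
```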

Sensitive data which should not be presented to a user will require encryption. This could include data such as a Wi-Fi SSID & password, an authorization token for cloud API requests, or other secret values associated with a device or its linked account. For most other data, encryption will not be necessary, as an attacker will know or suspect the contents of the message. For example, if we press a button on a device and see a message at the same time, we can surmise that the message sent was, “the user pressed the button.” Because the information is easy to guess, it does not need to be secret and therefore does not require encryption. When messages do require encryption, an authenticated encryption scheme such as AES-GCM or AES-GCM-SIV is most likely the best option.
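A sketch of encrypting a sensitive payload with AES-GCM via mbed TLS is shown below. The key would be the per-link secret provisioned earlier, and the nonce must never repeat for a given key (the monotonic counter discussed later is one way to construct it). The library choice and parameter sizes are assumptions.

```c
#include <stddef.h>
#include "mbedtls/gcm.h"

/* Encrypt `plain` into `cipher` and produce a 16-byte tag using AES-128-GCM.
 * `aad` can carry cleartext header fields that still need integrity.
 * Returns 0 on success. */
int link_encrypt(const unsigned char key[16],
                 const unsigned char nonce[12],
                 const unsigned char *aad, size_t aad_len,
                 const unsigned char *plain, size_t len,
                 unsigned char *cipher, unsigned char tag[16])
{
    mbedtls_gcm_context gcm;
    int ret;

    mbedtls_gcm_init(&gcm);
    ret = mbedtls_gcm_setkey(&gcm, MBEDTLS_CIPHER_ID_AES, key, 128);
    if (ret == 0)
        ret = mbedtls_gcm_crypt_and_tag(&gcm, MBEDTLS_GCM_ENCRYPT, len,
                                        nonce, 12, aad, aad_len,
                                        plain, cipher, 16, tag);
    mbedtls_gcm_free(&gcm);
    return ret;
}
```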

The set of requirements for secure key distribution, both for encryption and signature purposes, is another important factor in choosing cryptography. Provisioning a unique key during manufacturing, as described for the symmetric “signature” above, could sidestep the need for key distribution later in the device’s lifecycle. In fact, this method is often used when a “root” key or certificate is written to a device.

If key distribution is a requirement, the developer could provision each MCU with its own private/public key pair (possibly chained to a trusted root) and then share the public portion of that key with the other MCU. Using each pair, the MCUs could derive a shared secret using a Diffie-Hellman key exchange and then proceed with their chosen encryption method. Developers should proceed with caution, as setting up a secure key distribution scheme is a hard engineering problem; custom implementations have fallen into many pitfalls surrounding root(s) of trust, machine-in-the-middle attacks, and weak parameter choices.
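A sketch of one side of such an exchange, using mbed TLS’s ECDH primitives over P-256 and then hashing the shared secret into a link key, is shown below. It deliberately omits the critical step of authenticating the peer’s public key (for example, verifying a certificate chained to a trusted root), and the library choice, curve, and key-derivation step are assumptions; a proper KDF such as HKDF would be preferable to a bare hash in practice.

```c
#include <stddef.h>
#include "mbedtls/ecdh.h"
#include "mbedtls/ctr_drbg.h"
#include "mbedtls/entropy.h"
#include "mbedtls/sha256.h"

/* One side of an ephemeral ECDH exchange: generate a key pair, export our
 * public point for the other MCU, import theirs, and derive a 32-byte key. */
int derive_link_key(const unsigned char *peer_pub, size_t peer_pub_len,
                    unsigned char *our_pub, size_t our_pub_size, size_t *our_pub_len,
                    unsigned char link_key[32])
{
    mbedtls_entropy_context entropy;
    mbedtls_ctr_drbg_context drbg;
    mbedtls_ecp_group grp;
    mbedtls_ecp_point Q, Qp;
    mbedtls_mpi d, z;
    unsigned char secret[32];
    int ret;

    mbedtls_entropy_init(&entropy);  mbedtls_ctr_drbg_init(&drbg);
    mbedtls_ecp_group_init(&grp);
    mbedtls_ecp_point_init(&Q);      mbedtls_ecp_point_init(&Qp);
    mbedtls_mpi_init(&d);            mbedtls_mpi_init(&z);

    ret = mbedtls_ctr_drbg_seed(&drbg, mbedtls_entropy_func, &entropy, NULL, 0);
    if (ret == 0) ret = mbedtls_ecp_group_load(&grp, MBEDTLS_ECP_DP_SECP256R1);
    /* Generate our ephemeral key pair and export the public point. */
    if (ret == 0) ret = mbedtls_ecdh_gen_public(&grp, &d, &Q,
                            mbedtls_ctr_drbg_random, &drbg);
    if (ret == 0) ret = mbedtls_ecp_point_write_binary(&grp, &Q,
                            MBEDTLS_ECP_PF_UNCOMPRESSED, our_pub_len,
                            our_pub, our_pub_size);
    /* Import the peer's public point and compute the shared secret. */
    if (ret == 0) ret = mbedtls_ecp_point_read_binary(&grp, &Qp,
                            peer_pub, peer_pub_len);
    if (ret == 0) ret = mbedtls_ecdh_compute_shared(&grp, &z, &Qp, &d,
                            mbedtls_ctr_drbg_random, &drbg);
    if (ret == 0) ret = mbedtls_mpi_write_binary(&z, secret, sizeof(secret));
    if (ret == 0) mbedtls_sha256(secret, sizeof(secret), link_key, 0);

    mbedtls_mpi_free(&z);            mbedtls_mpi_free(&d);
    mbedtls_ecp_point_free(&Qp);     mbedtls_ecp_point_free(&Q);
    mbedtls_ecp_group_free(&grp);
    mbedtls_ctr_drbg_free(&drbg);    mbedtls_entropy_free(&entropy);
    return ret;  /* 0 on success */
}
```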

The final consideration for inter-chip communication messages we will address herein is the prevention of replay attacks. One common technique is to include a strictly increasing value within a signed message and check that newly received messages always contain a strictly higher (not equal) value than any previous one; this is particularly useful in combating replay or downgrade type attacks. These values could be carried within the encrypted data itself or as the nonce or associated data portion of an AEAD cipher. For practical implementations, a developer might use a “time” value for this purpose, assuming that the clock on each MCU cannot be reset to an earlier value.
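A minimal sketch of the receive-side check is shown below; it assumes the counter is carried inside the authenticated portion of each message and persisted to internal flash so it survives a power cycle.

```c
#include <stdint.h>
#include <stdbool.h>

/* Highest counter value seen so far; in practice this would be loaded from
 * and persisted to the MCU's internal flash. */
static uint64_t last_seen_counter;

/* Accept a message only if its (authenticated) counter is strictly greater
 * than anything previously accepted; this rejects replays of older frames. */
bool counter_is_fresh(uint64_t msg_counter)
{
    if (msg_counter <= last_seen_counter)
        return false;              /* replayed or out-of-order message */
    last_seen_counter = msg_counter;
    return true;
}
```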

For all the cryptographic recommendations in this section, the general advice regarding types of encryption is likely to remain valid. However, advances in cryptography routinely result in upgrades to the specific algorithms or modes for each encryption method, so any new device should follow the most current cryptographic guidance.

Conclusion

When designing an IoT device with multiple MCUs, developers must account for many considerations beyond the simplicity of a single MCU with its own internal flash storage. If the threat model dictates that physical attacks present a risk to the device, then ensuring that communication between MCUs on the device is secure warrants the extra effort.

About the Authors

John Novak

John is in Praetorian's Architecture and Engineering practice. His specialties include IoT assessments, cryptography, & other advanced service offerings.
