Rust for Security and Correctness in the embedded world
2024-01-09 | research.nccgroup.com | https://research.nccgroup.com/2024/01/09/rust-for-security-and-correctness-in-the-embedded-world/

Increasingly, large companies are adopting Rust in their systems, both existing and new. Most uses focus on how it can help in managed environments, such as a system with a running OS to handle memory allocation, allowing for a higher level of abstraction and tooling that can take advantage of functionality the kernel provides. Less discussed is the applicability of Rust to low-level environments such as embedded devices or operating systems. This article focuses on its use in the embedded space (for a discussion of Rust in the kernel space, see the excellent Rustproofing Linux series by my colleague Domen Puncer Kugler).

With the spread of IoT, an increasing number of devices with limited processing power and potentially little to no memory protection are exposed to input sourced from the internet (often indirectly, though relying on external applications to sanitize data is a risky strategy), so the need for safe handling of input is more pronounced than ever. On managed devices, garbage-collected languages can help here, but they are not practical on embedded devices (subsets of these languages do run on certain embedded devices, but they are both constrained and available on a very limited range of hardware).

As most embedded devices run C-based firmware (and typically not the most recent C standard either), they rely heavily on programmers knowing every edge case of the language to protect themselves against potential issues. This does not always work, predominantly because of the complexity of interactions within modern code, especially in an embedded context. Rust can help mitigate some of these issues. As with all languages, escape routes from certain protections exist (in Rust's case, through raw pointer dereferencing inside unsafe blocks), but tools such as Miri can help catch misuse (although, again, in an embedded context this requires some additional work, since the code is not running on the target hardware). The borrow checker, often cited as the defining feature of Rust (and the source of much of the language's complexity), also gives the embedded developer control over IO devices: it protects against multiple software accesses to the same hardware peripheral without extra effort from the developer, and with no runtime checks, as it is enforced at compile time.
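As an illustration, here is a minimal sketch (hypothetical types, not taken from any particular HAL) of how exclusive ownership turns concurrent peripheral access into a compile-time error:

// Hypothetical peripheral handle: not a real HAL type, just a sketch.
struct Uart0;

impl Uart0 {
    fn write_byte(&mut self, _byte: u8) {
        // MMIO register write elided for brevity.
    }
}

fn log_marker(uart: &mut Uart0) {
    uart.write_byte(b'!');
}

fn main() {
    let mut uart = Uart0;
    let exclusive = &mut uart; // first mutable borrow of the peripheral
    // let second = &mut uart; // a second &mut here is rejected at compile time
    log_marker(exclusive);
}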

Overflows (whether buffer or numeric) can be protected against in Rust using the core library alone (core contains functionality that requires neither allocation nor an operating system). Slices (Rust's array reference type) panic if an attempt is made to read past the end of the array, rather than returning whatever happens to exist at that offset from the slice base. Numeric over/underflows carry a variety of protections: in debug builds they panic, and for release builds there is a family of associated methods to handle them explicitly (checked_* if an overflow should produce no result, overflowing_* if knowing that wrapping occurred is enough, saturating_* to stop at the numeric limits, and wrapping_* to mark deliberately that wrap-around is desired and not a mistake). This is all implemented in core, meaning #![no_std] environments benefit without pulling in an additional library (which, thanks to Cargo, would be straightforward, but would imply relying on a less actively monitored implementation).
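A short sketch of those methods and of slice bounds checking (the values are chosen only for illustration):

fn main() {
    let x: u8 = 250;

    // checked_*: returns None instead of an overflowed value.
    assert_eq!(x.checked_add(10), None);
    // overflowing_*: returns the wrapped value plus a flag signalling that wrapping occurred.
    assert_eq!(x.overflowing_add(10), (4, true));
    // saturating_*: clamps at the numeric limits.
    assert_eq!(x.saturating_add(10), u8::MAX);
    // wrapping_*: documents that modular wrap-around is intentional.
    assert_eq!(x.wrapping_add(10), 4);

    // Slice indexing is bounds checked; .get() offers a non-panicking alternative.
    let buf = [0u8; 4];
    assert_eq!(buf.get(10), None); // buf[10] would panic rather than read past the end
}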

Rust's core library implements most iterators and the fundamental types with their associated methods, including endianness operations, which are very commonly performed on embedded devices for communication purposes (and are often a source of annoying bugs). Iterators allow slices to be processed in a way that can elide bounds checks, since the compiler can prove (by the way iterators work) that they will not run off the end. Iterators are not immune to getting stuck returning items (as discovered in this issue for Rustix), which can potentially lead to DoS attacks. Additionally, the fundamental async traits and types live in core, which has enabled the creation of Embassy, an executor that provides asynchrony in an embedded context without any dynamic memory allocation.
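A small sketch of those core-only facilities, endianness conversions and bounds-check-free iteration (the values are arbitrary):

fn main() {
    // Endianness conversions live in core, so they are available in #![no_std] firmware.
    let raw: [u8; 4] = [0xDE, 0xAD, 0xBE, 0xEF];
    assert_eq!(u32::from_be_bytes(raw), 0xDEADBEEF);
    assert_eq!(u32::from_le_bytes(raw), 0xEFBEADDE);
    assert_eq!(0xDEADBEEFu32.to_le_bytes(), [0xEF, 0xBE, 0xAD, 0xDE]);

    // Iterating a slice lets the compiler prove every access is in bounds,
    // so no per-element bounds check is required.
    let payload = [1u8, 2, 3, 4];
    let checksum: u8 = payload.iter().fold(0u8, |acc, b| acc.wrapping_add(*b));
    assert_eq!(checksum, 10);
}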

Cargo (and the crates.io repository) makes it easy to use libraries for various purposes, avoiding the danger of rolling your own and repeating the mistakes of the past, especially in a cryptographic context. Currently, most cryptography in OpenSK is provided in software. One implementation is a locally written set of primitives, but there is also a wrapper around the relevant RustCrypto implementations. RustCrypto provides a set of traits that can be implemented to supply a usable primitive, along with some implementations of primitives themselves. OpenSK uses its AES implementation (which is constant time) as well as the ed25519_compact crate, which implements the RustCrypto traits. The primary issue here is that, Rust being a young language, no real standard implementations have settled out that are guaranteed to be supported (or to keep up with the compiler); for instance, ed25519_compact's last tagged release (which is what you will receive if you use crates.io to add it to a project) is from 11 October 2022. Age is no guarantee of issues, and fast-moving software is no protection against faults, but unsupported software is historically where issues arise, especially where security is concerned.
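For flavour, a minimal sketch of using a RustCrypto implementation through its traits, assuming the aes crate with its cipher 0.4-era API (this is not OpenSK's wrapper, only the trait-based usage pattern):

use aes::Aes256;
use aes::cipher::{generic_array::GenericArray, BlockDecrypt, BlockEncrypt, KeyInit};

fn main() {
    // Key and block sizes are fixed by the type, so mismatches fail to compile.
    let key = GenericArray::from([0u8; 32]);
    let cipher = Aes256::new(&key);

    let mut block = GenericArray::from([42u8; 16]);
    let original = block;
    cipher.encrypt_block(&mut block); // BlockEncrypt trait method
    cipher.decrypt_block(&mut block); // BlockDecrypt trait method
    assert_eq!(block, original);
}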

The OpenSK project is working to create an as-pure-as-possible Rust implementation of a FIDO2-compliant security key, able to run as a Tock OS application or as a library providing functionality for hardware. Currently all cryptography (except for one instance during initial boot) runs in software, but hardware acceleration on Nordic nRF52840 chips is planned. It uses the Secret type to handle automatic zeroization in a generic and portable manner, relying on the mechanisms below to ensure the wiping writes are not compiled out. The excerpt also shows a common benefit of the unsafe block: since safe Rust cannot dereference raw pointers, the unsafe block in volatile_write tells us that if any pointer mishandling occurs, it occurs here. Note also that ptr::write_volatile requires the source and destination to be of the same type, and thanks to the Sized bound, the call to Z::default() instantiates a correctly sized value with some default contents. DefaultIsZeroes is somewhat misleadingly named; it simply requires that the type has a default value.

use core::ptr;
use core::sync::atomic;

impl<Z> Zeroize for Z
where
    Z: DefaultIsZeroes,
{
    fn zeroize(&mut self) {
        // Overwrite self with its default ("zero") value, then fence so the
        // compiler cannot reorder or remove the write.
        volatile_write(self, Z::default());
        atomic_fence();
    }
}

#[inline(always)]
fn atomic_fence() {
    atomic::compiler_fence(atomic::Ordering::SeqCst);
}

/// Perform a volatile write to the destination
#[inline(always)]
fn volatile_write<T: Copy + Sized>(dst: &mut T, src: T) {
    unsafe { ptr::write_volatile(dst, src) }
}
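
For context, a short usage sketch, assuming the zeroize crate (which provides the Zeroize and DefaultIsZeroes traits shown in the excerpt above): a primitive type that implements DefaultIsZeroes picks up Zeroize through the blanket impl.

use zeroize::Zeroize;

fn main() {
    let mut counter_secret: u64 = 0xDEAD_BEEF;
    // Goes through the blanket impl above: a volatile write of u64::default(),
    // followed by a compiler fence, so the wipe is not optimised away.
    counter_secret.zeroize();
    assert_eq!(counter_secret, 0);
}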

Rust provides enums, which are sum types, and these enable some useful techniques. The following method is associated with an enum that has two variants: PrivateKey::Ecdsa, which contains a key buffer inside a Secret wrapper, and, if the ed25519 feature is enabled, PrivateKey::Ed25519, which contains an ed25519_compact::SecretKey instance. The latter has its own internal version of the zeroize trait, which overwrites in a loop rather than using write_volatile's ability to write an arbitrary type.

/// Returns the ECDSA private key.
pub fn ecdsa_key<E: Env>(&self) -> Result<EcdsaSk<E>, Ctap2StatusCode> {
    match self {
        PrivateKey::Ecdsa(bytes) => ecdsa_key_from_bytes::<E>(bytes),
        #[allow(unreachable_patterns)]
        _ => Err(Ctap2StatusCode::CTAP2_ERR_VENDOR_INTERNAL_ERROR),
    }
}

This match prevents viewing an Ed25519 key as an ECDSA key.
Using Rust traits as generic bounds allows code reuse in a way the compiler can check, rather than relying on void* and casts as in C, which are hard to reason about and strip all protections (again, Rust can imitate this with core::mem::transmute, which turns any type into any other type of the same size; that is still more restrictive than C, and mostly unneeded thanks to traits like Into and TryInto, which let types change but via a known method, with the opportunity for failure to be signalled). The primary cost here is monomorphization, which creates a separate instance of a generic function for each combination of types it is used with. This matters on general-purpose devices, but can be crippling on an embedded device, as it can greatly increase binary size if many generic functions are used with many types. The reward is that we can constrain parameters exactly as we choose, and only accept types that implement the right trait (so here, both parameters must be of the same type, and that type must implement an Xor operation; a sketch of the idea follows below).
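Since the example the text refers to is not reproduced here, the following is a minimal sketch of the pattern using core::ops::BitXor as the bound (the function name is hypothetical):

use core::ops::BitXor;

// Both parameters must be the same type T, and T must support XOR;
// anything else is rejected at compile time rather than silently cast.
fn xor_words<T: BitXor<Output = T> + Copy>(a: T, b: T) -> T {
    a ^ b
}

fn main() {
    assert_eq!(xor_words(0b1010u8, 0b0110u8), 0b1100u8);
    assert_eq!(xor_words(0xFFFF_0000u32, 0x0F0F_0F0Fu32), 0xF0F0_0F0F);
    // xor_words(1u8, 1u32); // would not compile: parameters differ in type
}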

Another useful feature Rust provides is the ability to make compilation conditional on properties of the build. Here, #[cfg_attr(test, derive(PartialEq, Eq))] states that the listed derives are applied only when building for tests. This ensures that private keys can be compared for equality in a non-constant-time fashion only in a test build. C allows something similar with #ifdef blocks, but because the conditional code is spatially removed from the relevant data (C has no notion of derives or associated methods), the potential for missed implementations is far greater. Note that the #[derive(Clone, Debug)] attribute is always applied. This helps ensure that while equality checks are available when required for testing, even a debug build does not contain these equality implementations, preventing them from being accidentally used elsewhere and exposing the device to side-channel attacks.

/// An asymmetric private key that can sign messages.
#[derive(Clone, Debug)]
// We shouldn't compare private keys in prod without constant-time operations.
#[cfg_attr(test, derive(PartialEq, Eq))]
pub enum PrivateKey {
    // We store the key bytes instead of the env type. They can be converted into each other.
    Ecdsa(Secret<[u8; 32]>),
    #[cfg(feature = "ed25519")]
    Ed25519(ed25519_compact::SecretKey),
}

With all that said, why not use Rust? Primarily, a lack of support for specialised devices. STM32-based devices are well supported, and SVD files are relatively easy to obtain (and generally accurate), but specialised security-focused chips are not so well supported. Historically this has been an insular field, with most documentation restricted and NDAs preventing the spread of any developed tooling, so adapting the Rust infrastructure to these microcontrollers is beyond the reach of most companies. Additionally, allocators are currently assumed to be infallible (that is, they will always return memory when requested). While broadly true in a desktop or server environment, the embedded world cannot make such assumptions. Work is underway (spurred on by the Rust for Linux project) to support fallible allocation, but it is not yet easy to use and is not stable. Memory leaks are not considered a safety issue by the Rust team, meaning that if allocation is permitted then care must be taken in an embedded context, as even small amounts of leaked memory can lead to significant problems.
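One stable toehold that does exist today is Vec::try_reserve; the following is a minimal sketch of recovering from an allocation failure rather than aborting (the richer fallible-allocation APIs the paragraph refers to remain unstable):

fn main() {
    let mut buf: Vec<u8> = Vec::new();
    // The infallible path (e.g. Vec::with_capacity) aborts the process if the
    // allocator cannot satisfy the request; try_reserve returns an error instead.
    match buf.try_reserve(64 * 1024) {
        Ok(()) => buf.extend_from_slice(b"sensor frame"),
        Err(e) => {
            // Degrade gracefully rather than taking the whole device down.
            eprintln!("allocation failed: {e}");
        }
    }
}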

Overall, the security Rust provides suggests that its use in embedded contexts is worth the investment. With the spread of IoT and the increased use of general-purpose microcontrollers in potentially hostile environments (devices connected to the internet, smart meters, vehicles), security for low-level code is increasingly relevant. With many security issues traceable to mishandled data, and with no operating system to rely on, embedded devices will happily carry on running after overreads that would cause a segmentation fault in a general-purpose OS context.
