This article is also about the small red FPGA module. Sorry for using the image again and again :-)
Security is not an easy topic, because it has to be enforced at the lowest hardware level, but I'll try to explain it as well as I can.
One of the main aspects of the ICCFPGA (short for IOTA Crypto Core FPGA) project was security.
Unlike a server that is secured by physical measures (for example, in a data center), the FPGA module could potentially be used out in the field and be exposed to physical attacks.
Imagine the FPGA module used as a payment processor in a vending machine selling snacks or coffee: it would be disastrous if someone could break open or steal the vending machine and gain access to the seeds.
Of course, if the FPGA module is used as a co-processor, the main processor has to be as secure as the FPGA module. But the strength of this module is that it can serve as the main application processor itself, since you can run your own code securely on the module.
There are several security aspects I would like to explain.
Soft-CPU and Memory Protection Unit
The following picture shows what is on the FPGA-module — especially inside the FPGA.
In the middle of the picture is the RISC-V (Soft-)CPU which is connected to memories, peripherals (like I2C, SPI, …) and the debugger.
The CPU executes code located in the ROM and uses the RAM as memory space for data. ROM stands for read-only memory, but in practice FPGAs don't have true ROM; they have to emulate ROM with RAM by restricting access to that memory.
By default, the RISC-V implementation I used doesn't really distinguish between ROM and RAM*, so it would have been possible to inject code from outside (e.g. via the JSON API) by exploiting programming mistakes and provoking buffer overflows.
Fortunately, the VexRiscv could easily be extended with a memory protection plugin written in Scala, which introduces access management for the memories and the I/O space.
In short, it blocks write access to ROM and blocks code execution from RAM.
Additionally, RISC-V supports different privilege levels as part of the ISA: Machine mode (the highest privilege level), Supervisor mode and User mode. The memory protection plugin also differentiates between these three modes, which makes it possible to restrict memory regions and peripherals so that they are accessible only from Machine and Supervisor mode.
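These access rules can be sketched as a small host-side model. This is a sketch only: the region names, the exact rule set, and the Machine-mode exception that lets the debugger write the ROM are my assumptions for illustration, not the real plugin's API.

```python
# Conceptual model of the access rules enforced by the memory protection
# plugin. Region names and rule details are illustrative assumptions.

ROM, RAM, PERIPH = "rom", "ram", "periph"   # memory regions on the module
USER, SUPERVISOR, MACHINE = 0, 1, 3         # RISC-V privilege levels

def access_allowed(region: str, access: str, priv: int) -> bool:
    """access is 'read', 'write' or 'execute'."""
    if region == ROM and access == "write":
        return priv == MACHINE              # only the debugger may write ROM
    if region == RAM and access == "execute":
        return False                        # no code execution from RAM
    if region == PERIPH and priv == USER:
        return False                        # peripherals need S- or M-mode
    return True
```

With rules like these, `access_allowed(PERIPH, "read", USER)` is false: User-mode code cannot even read the interface behind which the Secure Element sits.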
RISC-V also supports SCALLs, supervisor calls (issued with the ECALL instruction) that transfer control to code running in Supervisor mode.
For example, code running in User mode (the lowest privilege level) can call code running in Supervisor mode to sign a transaction. This privileged code can access the seed, sign the transaction and return the signed transaction to the code running in User mode.
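This seed-signing flow can be sketched as a tiny host-side model. The call number, the function names, and the HMAC stand-in for the real signing scheme are all invented for illustration; the point is only that user code never holds the seed itself, just a way to ask supervisor code for a signature.

```python
import hashlib
import hmac

SCALL_SIGN = 1  # hypothetical supervisor-call number

def make_supervisor(seed: bytes):
    """Supervisor-mode side: keeps `seed` private, exposes only a dispatcher."""
    def scall(number: int, payload: bytes) -> bytes:
        if number == SCALL_SIGN:
            # Stand-in for the real signing scheme: a MAC over the payload.
            return hmac.new(seed, payload, hashlib.sha256).digest()
        raise ValueError("unknown supervisor call")
    return scall

# User-mode side: can request signatures, but can never read the seed.
scall = make_supervisor(b"seed kept behind the memory protection")
signature = scall(SCALL_SIGN, b"transaction bytes")
```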
Memory protection alone already provides quite good security: it would not be possible to gain access to the seed by exploiting software vulnerabilities, since the implemented mechanism works at the lowest hardware level and can practically not be circumvented (except by exploiting faulty supervisor code).
Moreover, memory protection also allows peripherals, such as the I2C interface used to communicate with the Secure Element, to be accessible only from Supervisor mode. Code running in User mode then cannot even reach the interface the Secure Element is connected to.
Last but not least, the debugging interface runs in Machine mode and can be used to upload code to the ROM. This is very convenient, as it saves a lot of time: instead of generating and uploading a complete FPGA bitstream, the code can be updated (temporarily) by uploading it to the ROM.
Later, when development is complete, the FPGA system is put into lock-down mode. Only a single setting in the synthesis tool is needed, and everything that is part of the debugging interface disappears completely from the hardware, because it is optimized away by the synthesis tool.
The best protection against messing around with the debugging interface is to not have a debugging interface at all^^
*: There is a standard extension for a Memory Management Unit (MMU) that was not used here because no virtual memory is required. It would, however, be used in a Linux-enabled configuration of the VexRiscv.
Secure Element
Most FPGAs do not have internal flash that could be used for data storage. This must be done externally, for which a Secure Element can be used.
Such elements behave like flash memory (they are often pin-compatible with I2C EEPROMs) but are protected against all kinds of physical attacks.
The FPGA-module also has such an element on the PCB which can be used to securely store up to 8 seeds.
Seeds are read from and written to the SE using seeded, rotating AES keys, which not only secure the communication but also protect against replay attacks.
The decryption key to the SE can be stored in a memory area that only code in supervisor mode has access to.
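The rotating-key idea can be modelled on the host like this. SHA-256 key derivation and an HMAC tag stand in for the real AES scheme, and all class and function names are invented; the point is that every message uses a fresh per-counter key, so a recorded message cannot simply be replayed later.

```python
import hashlib
import hmac

def session_key(shared_secret: bytes, counter: int) -> bytes:
    # Toy key rotation: derive a fresh key from the shared secret + counter.
    return hashlib.sha256(shared_secret + counter.to_bytes(4, "big")).digest()

class SecureElementLink:
    """Toy model of the encrypted, replay-protected link to the SE."""
    def __init__(self, shared_secret: bytes):
        self.secret = shared_secret
        self.expected = 0                       # next counter we will accept

    def send(self, counter: int, payload: bytes):
        tag = hmac.new(session_key(self.secret, counter),
                       payload, hashlib.sha256).digest()
        return counter, tag, payload

    def receive(self, counter, tag, payload):
        if counter != self.expected:            # stale counter: replay, reject
            return None
        expected_tag = hmac.new(session_key(self.secret, counter),
                                payload, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected_tag):
            return None                         # forged or corrupted message
        self.expected += 1                      # rotate to the next key
        return payload
```

A first message with counter 0 is accepted; sending the identical captured message again is rejected, because the link has already rotated past that key.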
Bitstream Encryption
Most FPGAs are only SRAM-based and have no non-volatile memory, so they load their configuration (the bitstream) from an external SPI flash on every power-up.
The bitstream contains everything: the hardware description itself, program code, encryption keys, … It's essential to protect it from attacks.
Xilinx FPGAs (in this case, but Intel/Altera can do the same) can be fused (permanently programmed) with an AES encryption key, which is then used at startup to decrypt the bitstream. If the bitstream cannot be decrypted with the internal key, the configuration is rejected and the FPGA doesn't start. Likewise, if the hash of the decrypted bitstream is wrong, the FPGA won't boot.
This has nice implications:
- It’s only possible to use bitstreams with the correct AES key
- It’s not possible to extract data from the bitstream (like keys)
- It’s not possible to tamper with the bitstream (like replacing the ROM)
- but it’s still easy and secure to distribute the bitstream and update the FPGA.
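The boot check can be sketched as follows. XOR stands in for AES and SHA-256 for the device's internal integrity check (both purely illustrative), but the control flow mirrors the description above: wrong key or modified bitstream means the FPGA refuses to configure.

```python
import hashlib

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # Toy stand-in for AES; XOR is its own inverse.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def make_bitstream(key: bytes, config: bytes) -> bytes:
    # Prepend a hash of the configuration, then encrypt everything.
    return xor_cipher(key, hashlib.sha256(config).digest() + config)

def boot(fused_key: bytes, bitstream: bytes):
    """Returns the configuration on success, None if the FPGA refuses to boot."""
    plain = xor_cipher(fused_key, bitstream)
    digest, config = plain[:32], plain[32:]
    if hashlib.sha256(config).digest() != digest:
        return None                    # wrong key or tampered bitstream: no boot
    return config
```

Decrypting with the wrong key yields garbage whose hash cannot match, and flipping even one bit of the bitstream breaks the hash check, so both attacks leave the device unconfigured.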
In the last article I wrote about the migration from Cortex M1 to RISC-V.
The new soft-CPU is not only faster and completely open source, it also has some advantages over the old CPU, such as memory protection and the three privilege levels code can run in.
The next step is to split the code into a User-mode part and a Supervisor-mode part, move seed management into the Supervisor code and build an internal API for calling Supervisor functions.
Finally, some nice news: The IOTA Foundation will order some of the FPGA modules. They will be made by a professional PCBA service. There also will be a Raspberry Pi HAT which can be used as a development board for applications running on the FPGA module.
Thank you again for reading!