Picture by Sebastian Ahmed (Alcatraz Island, San Francisco)

Challenges and opportunities for open-source in Silicon — Part 5

Sebastian Ahmed
8 min read · Dec 31, 2020


Continuing the theme of big opportunities started in Part 4, where we discussed the potential benefits of an open-source RISC ISA, this installment looks at how embedded security may benefit from open-source approaches.

Embedded security is no longer the realm of specialized chips or markets (such as banking or the military). These days, any general-purpose microcontroller or application processor needs to provide not just application-level security (such as that required by communications protocols), but also secure boot, secure key storage and beyond.
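
To make the terms concrete, here is a minimal sketch of the secure-boot decision in C. It is illustrative only: the names are hypothetical, and the signature check is a stub standing in for a real hardware crypto block or a vetted library.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical root public key; in real silicon this would be fused
 * into OTP or ROM and be immutable. */
static const uint8_t g_root_public_key[64] = { 0 };

/* Stub standing in for real signature verification (e.g., ECDSA)
 * performed by a hardware crypto block or a vetted library. */
static bool verify_signature(const uint8_t *key, const uint8_t *img,
                             size_t len, const uint8_t *sig)
{
    (void)key; (void)img; (void)len; (void)sig;
    return false; /* placeholder: fail closed */
}

/* The essential secure-boot rule: never transfer control to an image
 * that does not authenticate against the immutable root of trust. */
void boot(const uint8_t *image, size_t image_len, const uint8_t *sig)
{
    if (!verify_signature(g_root_public_key, image, image_len, sig)) {
        for (;;) { /* halt (or enter recovery): refuse to run */ }
    }
    /* jump_to_image(image); */
}
```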

Security, in general, is a never-ending process.

It is a process because it is stateful and ever-changing. There is no such thing as absolute security, and the ways in which security can be compromised continue to surprise us. For example, implementations of low-level cryptographic primitives such as AES, despite being mathematically intractable to attack directly, were overcome with side-channel analysis. Long-standing processor micro-architecture approaches were eventually found to leak secrets via software-based attacks, as demonstrated by the Meltdown and Spectre bugs.
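
As a software analogy of the side-channel point (a hedged sketch, not the power-analysis attacks actually used against AES hardware): a byte-wise secret comparison that exits at the first mismatch leaks, through its timing, how many leading bytes were correct, even though the secret is never computationally "broken". A constant-time variant removes the data-dependent behavior.

```c
#include <stddef.h>
#include <stdint.h>

/* Leaky: returns at the first mismatch, so runtime reveals how many
 * leading bytes of the guess were correct. */
int compare_leaky(const uint8_t *a, const uint8_t *b, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (a[i] != b[i]) return 0;  /* early exit is the side channel */
    }
    return 1;
}

/* Constant-time: always touches every byte, accumulating differences,
 * with no data-dependent branches. */
int compare_const_time(const uint8_t *a, const uint8_t *b, size_t n)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < n; i++) {
        diff |= (uint8_t)(a[i] ^ b[i]);
    }
    return diff == 0;
}
```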

Security through transparency

Photo by Sebastian Ahmed (source-code credit: OpenTitan, Apache 2.0)

Does open-source silicon hardware and associated firmware make things less or more secure? When it comes to security, there is never a definitive answer. But it is worth considering some of the properties of open-source that may provide significant value in security.

“Given enough eyeballs, all bugs are shallow” — Linus’ Law

In hindsight, every security flaw is indeed a bug. Whether it is an implementation flaw, an architectural flaw or a fundamental algorithmic flaw, they are all bugs. We thus go back to the concept of the value of "many eyeballs": the more eyes on the problem, the better. As we discussed in Part 3, there is no place to hide for bad decisions and poor implementations, so the level of scrutiny is unbounded and continually evolving.

In closed-source implementations, you are likely at the mercy of a small team of developers whose priorities, and possibly competency levels, can shift over time. There is no such thing as absolute security, so assessing it is a never-ending process, and it is possible that such an unbounded problem is not well suited to being solved by so few. Note that this is a statement not about governance, but about eyeballs (transparency and the number of testers).

So a key question to ask is whether hiding implementation details is a form of security. This author's opinion is that nothing could be further from the truth, and so

“Security through obscurity” is a dangerous fallacy

The rejection of this fallacy dates back to the 19th-century Dutch cryptographer Auguste Kerckhoffs:

“A cryptosystem should be secure even if everything about the system, except the key, is public knowledge.” — Auguste Kerckhoffs

  • There is no security through obscurity. This is a false premise. At best, obscurity buys some time, but once the "secret is out", there is no remedy. Moreover, it falsely reduces the burden on the developers to make the solution inherently secure. A "false sense of security", as it were, reduces the incentive to do things right.
  • Glass-box testing provides the opportunity to test and attack an implementation at a more controlled and comprehensive level, spanning RTL source and firmware. This may include a variety of approaches in simulation, FPGA emulation or even formal proofs (see the sketch after this list).
  • Exposing implementation details accelerates both the ethical and unethical hackers' ability to find vulnerabilities. This obviously requires the attention and competence of ethical hackers much more so than unethical hackers, but even unethical hackers may not be able to resist exposing their findings.
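
As a minimal illustration of the glass-box idea, consider a white-box unit test of a hypothetical key-zeroization routine. A black-box tester cannot see whether a secret actually left memory; with source access, the invariant can be asserted directly:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define KEY_LEN 32

/* Volatile pointer prevents the compiler from optimizing the wipe away. */
static void zeroize(uint8_t *buf, size_t n)
{
    volatile uint8_t *p = buf;
    while (n--) *p++ = 0;
}

int main(void)
{
    uint8_t key[KEY_LEN];
    memset(key, 0xA5, sizeof key);   /* simulate a loaded secret */

    zeroize(key, sizeof key);        /* code under test */

    /* Glass-box assertion: every byte of the secret is really gone. */
    for (size_t i = 0; i < KEY_LEN; i++) {
        assert(key[i] == 0);
    }
    return 0;
}
```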

When it comes to glass-box testing in particular, the gamut of tools and approaches is so broad that no one enterprise can do all of these things well. The potential benefit of open-source is that it could allow experts in various domains, with access to specialized skillsets, tools or compute resources, to provide a continuum and perpetuity of testing that is never limited by any one group or enterprise.

Timeliness and frequency of iterations

Consider the lifecycle of a silicon-level security flaw — from creation to observation. The flaw is typically designed in at the module-level IP development stage (or even earlier, such as during the architectural stage prior to the RTL IP development).

When do we typically hear about a publicly discovered security exploit in relation to such a flaw?

Flaws are usually found in and reported against an end-product, long after silicon development has completed.

What happens after this? If the silicon provider is lucky, there is some low-level protected firmware that can patch the bug, and hopefully such firmware is indeed patchable in the field (connected devices such as mobile phones or IoT products are good examples of having this capability). If firmware patching is not an option (either for lack of connectivity, or because the flaw cannot be addressed by a firmware fix), the next step is to correct the issue on a subsequent silicon revision. Now we are in for the long haul.
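
One common shape of such low-level patchability (sketched here with hypothetical names, not any vendor's mechanism) is a ROM patch table: ROM code reaches security-critical routines through a table of function pointers held in patchable storage, so a field update can re-point an entry at corrected code without a silicon revision.

```c
#include <stddef.h>
#include <stdint.h>

typedef int (*rot_fn_t)(const uint8_t *data, size_t len);

/* Original (ROM) implementation, possibly flawed. */
static int verify_v1(const uint8_t *data, size_t len)
{
    (void)data; (void)len;
    return 1; /* pretend-verify, missing a check */
}

/* Patch table lives in patchable storage (e.g., RAM loaded from
 * flash/OTP at boot); ROM only ever calls through this table. */
static rot_fn_t patch_table[] = { verify_v1 };

/* Corrected routine delivered by a field firmware update. */
static int verify_v2(const uint8_t *data, size_t len)
{
    if (data == NULL || len == 0) return 0;  /* the fixed check */
    return 1;
}

int main(void)
{
    patch_table[0] = verify_v2;            /* apply the firmware patch */
    uint8_t msg[4] = {0};
    return patch_table[0](msg, sizeof msg) ? 0 : 1;
}
```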

Such a loop to detect and correct what is often a "software" issue (in the sense that RTL code is a type of "software" that can be fully described and exercised through simulation or formal proofs) seems unnecessarily lengthy and costly.

We can thus identify two parameters which describe the timeliness of publicly discovered exploits (stated compactly after the list):

  • Latency: the time from the introduction of a flaw to its detection
  • Bandwidth: set by the turnaround time from detection to observable correction of the flaw
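
In symbols, with notation introduced here purely for illustration:

```latex
T_{\mathrm{latency}} = t_{\mathrm{detected}} - t_{\mathrm{introduced}},
\qquad
T_{\mathrm{turnaround}} = t_{\mathrm{corrected}} - t_{\mathrm{detected}},
\qquad
\mathrm{bandwidth} \propto \frac{1}{T_{\mathrm{turnaround}}}
```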

The figure below shows how the phases and loops might look when we compare closed-source RTL versus open-source. The timelines are approximate, mostly to show the relative scales of each phase.

Drawing by Sebastian Ahmed

This is not a comprehensive model of reality, but it does illustrate that there is potentially an order-of-magnitude delta in both the latencies and bandwidths from the perspective of a constituent piece of technology (the "IP").

No doubt, some security flaws are more physical in nature, but even such flaws can exhibit hints through source-level access to an implementation.

The sooner these are discovered and corrected, the better.

In order to realize such benefits, the open-source approach does not require all chip IP to be open-sourced. This is neither practical nor required, but the parts specifically dealing with security stand to benefit from being exposed at the IP level to testers and hackers alike (the "many eyeballs") and actually "hacked upon" completely independently of any silicon or end-product development cycle.

Avoiding Fragmentation

Drawing by Sebastian Ahmed

To successfully garner the attention of the “many eyeballs” and truly benefit from the variety and span of testing that can occur with glass-box analysis,

it is important for the industry to not have too many “irons in the fire”.

This suggests that the industry take a page out of RISC-V's book (which was discussed in Part 4). The fundamental primitives and sub-systems for embedded security (comprising RTL hardware and firmware) need not be done in umpteen different ways or be tied to a few commercial providers.

There will always be room for innovation and proprietary solutions for some of the more specialized aspects of security, but striving towards a common base platform (which can garner those eyeballs) appears to be an opportunity for the silicon industry.

Commonality and standardization on the base elements of security could manifest through open standards. The RISC-V ISA is a good example of an open standard which is also modular. Could something similar be done for embedded security? Perhaps.

So what is going on in the landscape of open-source implementations around silicon security? There are a few noteworthy developments by some rather large companies taking public positions on open-source silicon security:

  • Google’s OpenTitan project, which aims to provide a complete silicon Root of Trust (RoT). Anyone can access and use all the RTL and software source code, released under an Apache 2.0 license. Notably, the main processor is an open-source RISC-V core developed by lowRISC. Here we see an open-source ISA being leveraged as a key building block for a base security RoT.
  • Thales Group, a France-based multinational in aerospace, defense, transportation and security with revenues of over $18B, announced in 2018 that it had joined the RISC-V Foundation with the goal to “Help Secure Open-Source Microprocessors” (source). Thales is also an active member of the OpenHW Group (source).
  • RISC-V in general is rife with security-oriented initiatives, not just around the cores, but also the ecosystem of software and tools.

“As is” Basis

Before we close out on this topic (which, by definition, is not possible, since security is never “done”), it is worth discussing a caveat of open-source license agreements: they typically provide no warranties or other protections.

But just how much of a practical issue is this? Even commercially supported distributions of the Linux kernel only provide warranties for the physical media on which the software is distributed (hardly a sufficient remedy in the case of a security breach in a data center, for example).

In order for a warranty to have value, it must be able to properly remedy the cost of a security flaw which results in damages being incurred (such as a breach that affects companies or individuals). Such protections are likely better addressed by insurance policies.

Summary

Security is a massive topic, and it often seems that any discussion of it merely scratches the proverbial surface. Nevertheless, when it comes to security, if one takes the position that transparency in the implementation has value, then open-source provides some very compelling advantages: a combination of vast opportunities for testing (in nature and volume) and the unique property of minimizing flaw-discovery latencies while maximizing correction bandwidths.


Sebastian Ahmed

Technology Leader | Silicon Architect | Programmer | Cyclist | Photographer