Information Security

Defending the digital infrastructure

Marcus Ranum decodes hardware vulnerabilities with Joe Grand

Computer hardware designs with dangerous security flaws? That's no surprise to renowned hardware hacker Joe Grand.

Security researcher Joe Grand focused on hardware vulnerabilities long before Spectre and Meltdown made headlines. An electronics guru who co-hosted the popular Prototype This! TV series on the Discovery Channel, Grand has educated people about hardware hacking since his teens. Known as Kingpin, Grand was part of the hacker collective L0pht -- named after the group's loft in Boston's South End. The underground security researchers tested the limits of technology and cyberspace, and promoted responsible disclosure. The group, including Grand, warned the Senate Governmental Affairs Committee in 1998 that hardware and software linked by networks and the internet posed a serious security threat that was hard to solve and would only get worse.

The members of L0pht joined with venture capitalists to form @stake, a security firm that was acquired by Symantec in 2004. Along the way, Grand earned a bachelor of science in electrical engineering from Boston University. Since 2005, Grand has taught a two-day course at Black Hat: Hands-on Hardware Hacking. The annual training course focuses on reverse engineering and "defeating the security" of embedded systems and devices. He is also the engineer who designed DEF CON's hackable circuit-board badges, first introduced in 2006. In addition to his work as an engineer and hacker, Grand is an inventor and the founder of Grand Idea Studio. Here, Grand discusses hardware vulnerabilities in the wake of Intel's flawed chip designs, and the ramifications for mobile devices and the internet of things (IoT).

The Spectre and Meltdown bugs turned out to be a huge problem for complex CPUs, and the software side of the world is struggling to fix hardware vulnerabilities. Should we expect to see comparable problems with less-powerful hardware like IoT devices or phones?

Joe Grand: Those are highly complex architectural design flaws, and I think we've only seen the beginning of those types of attacks -- things that jump through many hoops to get something done. We're seeing them on servers and desktop systems, where it's easier to focus because they're powerful. But we're going to see those types of attacks move down the hardware spectrum, a sort of trickle-down to less-capable devices. Some of them will be specific to particular architectures.

The real problem is that we don't even need such complex attacks for most of the internet-connected devices that are out there. We don't need to jump through hoops to get into them; there are so many other ways you can get in.

Should we assume that mobile phones are suffering the same sort of problems and they just haven't been explored yet?

Grand: Mobile phones are highly complex, not as complex as a desktop or server but much more complex than an IoT device. We're going to start seeing more of these potentially damaging design-flaw exploits that result from subtle interactions in the hardware.

I saw a demo at the CanSecWest Applied Security Conference back in the early 2000s where someone exploited a buffer overrun in a smart antenna controller chip in a cellphone, then created a running UID=0 process [user ID zero -- superuser privileges] in the Linux kernel. I walked out of there thinking, 'We are so screwed.'

Grand: Yes, and the thing is that there are so many subsystems in devices now. In the rare instance that you have a hardened system, you're going to have some peripheral, or some module, or something that could be vulnerable, and it's all connected. Another great example is a hack demonstrated at Chaos Communication Congress, where Hunz [the hacker] compromised an Amazon Dash through a hole in its audio processing function. He could execute arbitrary code by sending sounds at the device. There are other examples with game consoles, which tend to be some of the most secure consumer devices out there because so many people hack on them; attacks can work against peripherals for the device -- the controller or a USB audio device. It just takes one vulnerability in one thing, and it's very hard to anticipate where everyone is going to attack.
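
The bug class behind attacks like the Dash hack is often mundane: an input handler in a peripheral that trusts a length field arriving over the air or down a wire. The C sketch below illustrates the pattern; the frame format and function names are hypothetical, not taken from any real device's firmware.

    /* Hypothetical frame: [1-byte claimed length][payload bytes],
       as delivered by a radio, audio or USB path. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define MAX_PAYLOAD 64

    static void handle_frame(const uint8_t *frame, size_t frame_len)
    {
        uint8_t payload[MAX_PAYLOAD];

        if (frame_len < 1)
            return;
        uint8_t claimed = frame[0];

        /* BUG: trusts the attacker-controlled length field. A frame
           claiming more than MAX_PAYLOAD bytes overruns the stack
           buffer -- a foothold in the peripheral, and from there a
           path to everything the peripheral is connected to. */
        memcpy(payload, frame + 1, claimed);
        printf("processed %u bytes (unchecked)\n", (unsigned)claimed);
    }

    static void handle_frame_fixed(const uint8_t *frame, size_t frame_len)
    {
        uint8_t payload[MAX_PAYLOAD];

        if (frame_len < 1)
            return;
        uint8_t claimed = frame[0];

        /* The fix: bound the claimed length by both the buffer size
           and the number of bytes actually received. */
        if (claimed > MAX_PAYLOAD || (size_t)claimed > frame_len - 1)
            return; /* malformed frame: drop it */

        memcpy(payload, frame + 1, claimed);
        printf("processed %u bytes (checked)\n", (unsigned)claimed);
    }

    int main(void)
    {
        uint8_t good[] = { 4, 'p', 'i', 'n', 'g' };
        handle_frame(good, sizeof good);
        handle_frame_fixed(good, sizeof good);

        uint8_t evil[66];
        memset(evil, 'A', sizeof evil);
        evil[0] = 200;                         /* lies about its length */
        handle_frame_fixed(evil, sizeof evil); /* silently dropped */
        return 0;
    }

One missing comparison in one subsystem is all it takes, which is exactly why it is so hard to anticipate where everyone is going to attack.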

There was an incident where someone figured out a way around digital rights management (DRM) by exploiting a bug in a graphics card driver. The driver was signed by the manufacturer, so Windows would cheerfully load it into kernel space. The attack would take over the driver, and, presto, the entire system memory was available for capture, including DRM keys and data.
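
The mechanics of that incident are easy to sketch. The deliberately vulnerable Linux character device below stands in for the signed Windows graphics driver in the anecdote -- the names are invented, it is for illustration only, and it should never be loaded outside a throwaway virtual machine. Its one 'feature' is that the read offset is treated as a raw kernel address, so any process that can open the device reads whatever kernel memory it likes:

    /* leaky.c -- a deliberately vulnerable driver, for illustration only. */
    #include <linux/module.h>
    #include <linux/miscdevice.h>
    #include <linux/fs.h>
    #include <linux/uaccess.h>
    #include <linux/types.h>

    static ssize_t leaky_read(struct file *file, char __user *buf,
                              size_t len, loff_t *ppos)
    {
        /* BUG: the file offset is used as a kernel virtual address, so
           user space chooses what kernel memory to read -- the same
           capability the DRM attack got out of the graphics driver.
           (Reads of unmapped addresses will oops the kernel.) */
        const void *src = (const void *)(uintptr_t)*ppos;

        if (copy_to_user(buf, src, len))
            return -EFAULT;
        *ppos += len;
        return len;
    }

    static const struct file_operations leaky_fops = {
        .owner  = THIS_MODULE,
        .read   = leaky_read,
        .llseek = default_llseek,
    };

    static struct miscdevice leaky_dev = {
        .minor = MISC_DYNAMIC_MINOR,
        .name  = "leaky",
        .fops  = &leaky_fops,
    };

    static int __init leaky_init(void)
    {
        return misc_register(&leaky_dev);
    }

    static void __exit leaky_exit(void)
    {
        misc_deregister(&leaky_dev);
    }

    module_init(leaky_init);
    module_exit(leaky_exit);
    MODULE_LICENSE("GPL");

Code signing proves where a driver came from, not that it is safe; once the operating system loads it, bugs like this run with full kernel privileges.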

Grand: I think it's going to get worse. What I see happening a lot is people who are not traditionally hardware hackers starting to get into embedded system hacking because the barrier to entry for getting involved has gotten much lower. Every device that's connected to a network is a computer now. You don't need to be a hardware engineer to exploit these devices anymore -- you can be an operating system-level hacker because there's an operating system in there too.

There is a continued lack of security understanding among engineers, and because so many products are becoming connected, people are slapping in network connectivity, which makes those products vulnerable. But you can't blame the engineers, because security is a completely different and very challenging field. All of these things combine to make the problem worse.

As a systems administrator, I look at the same scenario you're describing: All these devices are going to be running some operating system, and people are going to just assume they don't need to worry about software reliability or operating system flaws.

Grand: That's exactly right. People take a piece of hardware and take it for granted because it's in a nice box; it has some LEDs. Connect it to the network, plug it in, and it's up. A lot of these devices don't have any sort of patching capability or firmware update capability, and if they do, it's probably insecure. That opens up a whole other world of attacks. It ends up being a system administration issue that is very hard to deal with because there are so many entry points into the platform.
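
The core of a sane update path is small. The sketch below, using the libsodium library, shows the check many devices skip: verify a detached Ed25519 signature over the incoming image before writing it. Here flash_write() is a stub and the key pair is generated on the spot for demonstration; on a real device, the public key is provisioned at manufacture and the check belongs in the bootloader.

    #include <sodium.h>
    #include <stdio.h>
    #include <string.h>

    /* Stub: a real device would erase and program flash sectors here. */
    static int flash_write(const unsigned char *image, size_t len)
    {
        printf("flashing %zu verified bytes\n", len);
        return 0;
    }

    static int apply_update(const unsigned char *image, size_t image_len,
                            const unsigned char sig[crypto_sign_BYTES],
                            const unsigned char pk[crypto_sign_PUBLICKEYBYTES])
    {
        /* Refuse any image whose signature does not verify against the
           device's trusted public key. Without this, anyone who can
           reach the update interface owns the device. */
        if (crypto_sign_verify_detached(sig, image, image_len, pk) != 0) {
            fprintf(stderr, "update rejected: bad signature\n");
            return -1;
        }
        return flash_write(image, image_len);
    }

    int main(void)
    {
        if (sodium_init() < 0)
            return 1;

        /* Stand-in for the vendor's release-time signing step. */
        unsigned char pk[crypto_sign_PUBLICKEYBYTES];
        unsigned char sk[crypto_sign_SECRETKEYBYTES];
        crypto_sign_keypair(pk, sk);

        const unsigned char image[] = "v2.0 firmware image";
        unsigned char sig[crypto_sign_BYTES];
        crypto_sign_detached(sig, NULL, image, sizeof image, sk);

        apply_update(image, sizeof image, sig, pk);       /* accepted */

        unsigned char tampered[sizeof image];
        memcpy(tampered, image, sizeof image);
        tampered[0] ^= 1;                                 /* one flipped bit */
        apply_update(tampered, sizeof tampered, sig, pk); /* rejected */
        return 0;
    }

Built with cc update.c -lsodium, the first call flashes the image and the second is rejected.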

I remember last year when I said, 'Hardware is the stuff you run software on.' You looked at me and said, 'It's all software.'

Grand: The hardware, if it's not a PC or a server, has a slightly different set of entry points above and beyond what you would have with a network-connected device. If you have physical access to a hardware device, you can possibly get access to debug interfaces like JTAG [an Institute of Electrical and Electronics Engineers communications standard, named after the Joint Test Action Group and used to test circuit boards], which would then give you direct access to memory. You've got access to console output, backdoors and reset codes -- all of those are entry points into the hardware electronics, but the real goal is to get access to software. Then it becomes a software problem.
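
To make the JTAG entry point concrete: with physical access and an unlocked debug port, a cheap adapter plus the open source OpenOCD tool will halt the CPU and dump memory or flash. The session below assumes a J-Link adapter and an STM32F1-class microcontroller; the config file names, addresses and sizes vary by part.

    $ openocd -f interface/jlink.cfg -f target/stm32f1x.cfg

    (from a second terminal, attach to OpenOCD's command console)
    $ telnet localhost 4444
    > halt
    > dump_image flash.bin 0x08000000 0x20000
    > resume

That dump_image command reads 128 KB of internal flash starting at the chip's flash base address -- the whole firmware, ready for reverse engineering. Most microcontrollers offer readout protection to lock the debug port, but plenty of products ship without it enabled.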

And, on top of all that, some software engineer is going to leave a debugging backdoor in the software layer for their own convenience and forget to turn it off in production.

Grand: Or it might be intentional, because that way, if they have to reset a device for a customer, they have an entry point. A lot of vendors leave that in for convenience purposes -- it's the 'security versus convenience' problem we've talked about for decades, and convenience usually wins. But it's also that developers are not necessarily thinking, 'I'm going to leave that backdoor in and someone is going to use it for malicious purposes.' They leave it in for their own purposes and aren't trained to think about what happens if someone else uses it as well.
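
The standard discipline for that problem is to make debug entry points incapable of surviving into release builds. A minimal C sketch with a made-up command set; the point is that the backdoor is compiled out entirely rather than hidden behind a 'secret' string that ships to every customer:

    #include <stdio.h>
    #include <string.h>

    #ifdef FACTORY_DEBUG
    /* Exists only in internal builds; never defined for release. */
    static void factory_reset(void)
    {
        puts("device reset to factory defaults");
    }
    #endif

    int handle_console_command(const char *cmd)
    {
    #ifdef FACTORY_DEBUG
        if (strcmp(cmd, "dbg_reset") == 0) {
            factory_reset();
            return 0;
        }
    #endif
        if (strcmp(cmd, "status") == 0) {
            puts("ok");
            return 0;
        }
        return -1; /* unknown command */
    }

    int main(void)
    {
        handle_console_command("status");
        /* In a production build (compiled without -DFACTORY_DEBUG),
           the backdoor command does not exist in the binary at all. */
        if (handle_console_command("dbg_reset") != 0)
            puts("dbg_reset: unknown command");
        return 0;
    }

Built without -DFACTORY_DEBUG, there is nothing for an attacker to find, no matter how hard they look at the firmware.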

For me, finding out about the Intel Management Engine (IME) was a heart-stopping moment of terror. I still wonder if someone thought that up and said, 'This is a good idea,' or if the NSA helped encourage that mistake.

Grand: The complexity of that, and the complexity of CPUs, that's where this stuff hides. It's a CPU inside a CPU, and it's running Minix in there. Probably nobody in the world except a few people knew or cared that was in there. It's mind-blowing that you have these layers of computing functionality even within a single device -- the IME, Spectre, Meltdown -- those things mean we're going to have to keep trying to educate people about security and risk. It's terrifying. Being security researchers, we're supposed to be at the state of the art, and these kinds of things keep getting dropped -- it's phenomenal!

You're working with all these new microprocessors that are highly integrated and have all kinds of capabilities, like complete TCP/IP stacks, waiting to be invoked. I heard someone talking about using a component that had an IP stack. They didn't need the IP stack, so they were just going to leave it there. What could possibly go wrong? It sounds like [hardware vulnerabilities] are going to be a gift for the hackers that keeps on giving.

Grand: I think it might come down to education. That's something I teach people in my hardware hacking class: Nothing is secure, but you can choose the right sort of thing. If you think about software development, people will write what they need and will make mistakes, but software can be patched. With hardware vulnerabilities, the bugs are baked into physical silicon, and that can't be changed. I don't think the hardware developers are malicious; they're just working in a medium that doesn't change. If you have a bug or a 'feature' like IME in your hardware, you're completely screwed if it can't be turned off or fixed.

There are similarities in the software layer. People take an operating system and assume that it has a reasonable set of features and mostly works. They don't immediately go through the entire operating system loadout wondering, 'Should I delete PowerShell? Is it possible to write a virus in embedded macros?' They assume it's all well-intentioned stuff with a purpose -- not something that's accidentally dual-purpose.

Grand: On the hardware side, a lot of what we see is vendors making it as easy as possible for engineers to use their chips. They create a bunch of reference code -- samples of code designed to show off the features of a chip -- so if an engineer says, 'We need to use encryption in the chip,' they'll use whatever sample code was written by the vendor to work with the particular module on that chip. The problem is that those pieces of code are proof of concept, not production-quality code. They come with disclaimers: 'This should not be used in a product.' But engineers take these chunks of code and slap them in, trusting that they are right. Sometimes the code is written by interns, sometimes it is written just to show how to use the hardware module, and sometimes it is well-written. It's the same problem: People are trusting code, and hardware, that was not vetted for security.

When I start developing a product that has a microcontroller, I'll go through the data sheet and, as part of my start-up code, I disable every peripheral that I don't need. Most people will just take whatever they're given and let it run; that means there may be things they didn't ask for -- open ports, configuration issues, things that haven't been locked down. It's just human nature nowadays: 'It doesn't need to be internet-connected, but I may as well leave that in.'
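
The reference-code trap Grand describes often looks like the sketch below: vendor sample code that hard-wires a key so the demo 'just works,' copied verbatim into a product. Every name here is hypothetical -- the HAL calls are stand-ins for a real chip's crypto and one-time-programmable (OTP) key storage, stubbed so the example runs:

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    /* The vendor demo pattern: a key pasted into the sample so the
       example works out of the box. Fine for a demo; fatal in a
       product, where every shipped unit shares this key and it sits
       in the firmware image for anyone to pull out. */
    static const uint8_t demo_key[16] = {
        0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
        0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f
    };

    /* Hypothetical HAL, stubbed for illustration. */
    static int hal_aes_set_key(const uint8_t *key, size_t len)
    {
        printf("AES key loaded (%zu bytes)\n", len);
        return 0;
    }

    static int hal_otp_read_key(uint8_t *key, size_t len)
    {
        /* A real implementation reads a per-device key provisioned
           into fuses at manufacture; this stub just fills a pattern. */
        for (size_t i = 0; i < len; i++)
            key[i] = (uint8_t)(0xA5 ^ i);
        return 0;
    }

    int crypto_init_as_shipped(void)
    {
        /* Reference code copied verbatim: proof of concept in production. */
        return hal_aes_set_key(demo_key, sizeof demo_key);
    }

    int crypto_init_production(void)
    {
        uint8_t key[16];
        if (hal_otp_read_key(key, sizeof key) != 0)
            return -1; /* refuse to run without a provisioned key */
        return hal_aes_set_key(key, sizeof key);
    }

    int main(void)
    {
        crypto_init_as_shipped();
        crypto_init_production();
        return 0;
    }

And the start-up discipline he describes looks something like this bare-metal fragment. The register address and bit names are invented; on a real microcontroller they come from the vendor's device header and the data sheet's clock-gating chapter:

    #include <stdint.h>

    /* Hypothetical memory-mapped peripheral clock-enable register. */
    #define PERIPH_CLK_EN (*(volatile uint32_t *)0x40021000u)
    #define CLK_UART1 (1u << 0) /* used: console */
    #define CLK_SPI0  (1u << 1) /* used: sensor bus */
    #define CLK_USB   (1u << 2) /* unused in this product */
    #define CLK_ETH   (1u << 3) /* unused: no network port */
    #define CLK_RADIO (1u << 4) /* unused: antenna not populated */

    void board_init(void)
    {
        /* Enable only what the product actually needs, instead of the
           common default of leaving every peripheral -- USB, Ethernet,
           radio, each one an entry point -- powered and clocked because
           the vendor's startup code turned them all on. */
        PERIPH_CLK_EN = CLK_UART1 | CLK_SPI0;
    }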
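
Every peripheral left clocked and reachable by default is an attack surface nobody on the project is watching.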

Your approach is more expensive. You have to actually understand what you're doing instead of taking the shortest path.

Grand: It all boils down to managing the entry points into the system. And with today's systems, we can't rely on our ability to control the entry points.

That's not a cheerful note to end on.

Grand: There are edge cases where progress is being made. A lot of chip vendors are starting to take security more seriously, and they're making things better for the general populace. You'll always have governments and high-end individuals that can break into things, but vendors are getting to a point where engineers don't have to understand security -- it'll work well enough. We are seeing little steps being taken, and even though there are so many potential underlying threats, if you look at the history of the last five or 10 years, there are lots of things you can do to make your systems better.
