Some Thoughts on Securing IoT Devices
Tags: Cryptography, Embedded, Programming, Security
Security in the Internet of Things (IoT) leaves much to be desired. Some recent DDoS attacks, such as those launched through the Mirai botnet against DNS provider Dyn or against the popular security site KrebsOnSecurity, were possible due to weak security measures in things like network-connected cameras. There are many reasons why the situation is what it is today, but those will not be the topic of this entry. While we have seen some initiatives, notably the security guidelines (PDF) by NIST and some comments made by Bruce Schneier, I feel that this still leaves a lot of people wondering what practical measures to take to secure their devices. Many companies in the IoT space are start-ups lacking a proper understanding of what security in the embedded field entails, and might lack (or didn't plan for) the budget to hire dedicated security people. The goal of this blog entry is to (hopefully) lift the veil on some of the methodologies that should be employed to create more secure IoT systems, from a very practical point of view.
What this entry will not be is a comprehensive guide - that would simply be impossible. This also means that this entry is far from complete, but it should give people an idea of where to start looking before heading down the rabbit hole. It will also look at security from the device's point of view, meaning that anything server/cloud related will be a little out of scope. Needless to say, the server/cloud side is just as important: the security of an entire system can be seen as a chain, whereby the weakest link can break the entire set-up.
For starters: keep it simple; as simple and straightforward as possible. The more features you add and the more code your project contains, the more potential attack vectors you introduce. You cannot exploit what is not there. In practice this means that the number of external libraries, frameworks, etc. should be minimal as well. In essence, you want to keep what is known in the field as the trusted computing base, or TCB, as small as possible. The smaller the footprint, the better it can be verified for vulnerabilities. Does your device really need to output JSON? Does it really need to accept commands? Does it really need OTA updates? Does it really need those features you think are really cool?
Let's take, for example, a remote OBD-II analyzer, say one with Bluetooth. If you want to use this device to give you information on RPM, status, fuel, etc. for maintenance management or fuel tracking, you want to make sure that only those commands can be sent on the CAN bus. Anything else that has to do with e.g. door control or engine parameter tuning should not be allowed, no matter what. The easiest way to do this is to simply hard-code the allowed commands. That way, the other commands can never be sent on the CAN bus because they are simply not there. Should your authentication with the device thus be compromised, the attacker will only be able to send your predefined commands, and not any potentially harmful ones.
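In C, such a hard-coded allowlist can be as little as a constant table and a lookup. The sketch below is illustrative, not a full ISO 15765-2 implementation; the specific PIDs chosen are just an assumed maintenance/fuel-tracking set.

```c
/* Sketch of a hard-coded command allowlist for an OBD-II reader.
 * Only requests that pass pid_allowed() are ever put on the CAN bus;
 * anything else simply does not exist in the firmware. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Mode 0x01 ("show current data") PIDs we are willing to request. */
static const uint8_t ALLOWED_PIDS[] = {
    0x0C, /* engine RPM      */
    0x0D, /* vehicle speed   */
    0x2F, /* fuel tank level */
};

bool pid_allowed(uint8_t mode, uint8_t pid)
{
    if (mode != 0x01)          /* only read-only "current data" requests */
        return false;
    for (size_t i = 0; i < sizeof ALLOWED_PIDS; i++)
        if (ALLOWED_PIDS[i] == pid)
            return true;
    return false;              /* everything else never reaches the bus */
}
```

Note that the check is on both the mode and the PID: even a valid data PID is rejected if it arrives under a mode that could actuate something.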
If you add lots of libraries into the mix, you have the potential of introducing issues that are hard to fix once they are deployed. Remote over-the-air (OTA) updates are HARD! People need to understand this before just saying "oh, but we can fix that with an OTA update". A remote update feature is a dream of an attack vector, because an attacker could potentially replace your code altogether. Remember the OBD-II analyzer above? Make a mistake in the update procedure and the attacker can just replace your carefully crafted firmware with his own, doing away with all the security features you had in place. OTA updates need to be encrypted and signed, preferably with different keys from your normal traffic (coming to that next). This is needed because you have to make absolutely sure the firmware update comes from a trusted source and has not been tampered with. You also need to be able to fall back to a previous version if something goes wrong during the process. Doing all of this correctly takes a lot of attention to detail, and it is very easy to get wrong...
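The gatekeeping side of such an update path can be sketched as follows. This is only the acceptance logic; `verify_signature()` here is a toy stand-in for a real signature check (e.g. Ed25519/ECDSA against a public key burned into the device), and all field names are assumptions for illustration.

```c
/* Sketch of the checks an OTA update should pass before anything
 * is flashed: authenticity first, then a monotonic version check
 * so an attacker cannot "update" you back to old, vulnerable code. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef struct {
    uint32_t version;        /* must increase monotonically     */
    uint32_t payload_len;
    const uint8_t *payload;  /* firmware image, still encrypted */
    const uint8_t *signature;
} fw_update;

static uint32_t current_version = 7;

/* Toy stand-in: a real device would verify a cryptographic signature
 * here, with a dedicated OTA key, NOT the key used for normal traffic. */
static bool verify_signature(const fw_update *u)
{
    return u->signature != NULL;
}

bool update_acceptable(const fw_update *u)
{
    if (!verify_signature(u))
        return false;               /* untrusted source or tampered image */
    if (u->version <= current_version)
        return false;               /* rollback / replay of old firmware */
    return true;
    /* Only now decrypt and write to the *inactive* flash slot, keeping
     * the running image intact so the bootloader can fall back on failure. */
}
```

The fallback requirement in practice means a dual-bank flash layout: the new image goes into the slot you are not running from, and the bootloader only commits to it after the new firmware confirms it booted successfully.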
For many applications, an OTA update process is not just overkill; it can be dangerous and add more security issues than not having it would. A simple temperature sensor that sends data once an hour or so should not need any remote commands at all. What it should do, though, is encrypt its data properly. An IoT device should never send any data or commands in plain text. Even if you're already using TLS or some other transport security, the data itself should still be encrypted and signed; you want to make sure the data comes from where it claims to come from, and you want to make sure no one can tamper with it in transit (in both directions!). That is important even for that simple temperature sensor: not only to prevent fake data, but because a simple temperature reading can provide a wealth of information to someone with bad intentions (figuring out whether someone is home, for example). The good news is that many microcontrollers come with dedicated hardware to perform cryptographic operations. The bad news is that even with those features, it still requires a good understanding of how and when to use which operations, and, importantly, how and when not to. We read, for example, that default passwords are bad - and they most definitely are. However, that does not necessarily mean that hard-coded encryption keys are bad - as long as they are unique per device and not per product. You can, for example, have a couple of keys (one for day-to-day data, one for signatures, one for OTA updates) hard-coded or available in something like a secure key storage component. If one of the devices gets compromised, it's just that one device, and not the entire product line.
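On the receive side, "signed and tamper-proof" boils down to two checks: verify the authentication tag before trusting anything else in the message, and reject replays. The sketch below shows only that structure; `compute_tag()` is a deliberately toy stand-in for a real MAC (HMAC or an AEAD tag computed by the crypto hardware, keyed with that device's unique key) and must never be used as actual cryptography, and the message layout is an assumption.

```c
/* Sketch of receive-side checks for an authenticated sensor message:
 * tag verification first, then a strictly increasing per-device
 * counter so captured messages cannot be replayed later. */
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint32_t counter;   /* monotonically increasing per device */
    int16_t  temp_c10;  /* temperature in tenths of a degree   */
    uint32_t tag;       /* stand-in for a real MAC/AEAD tag    */
} sensor_msg;

/* Toy "MAC" for illustration only - a real device would use
 * hardware-backed HMAC or AEAD with a per-device key. */
static uint32_t compute_tag(uint32_t key, const sensor_msg *m)
{
    return (key ^ m->counter ^ (uint32_t)(uint16_t)m->temp_c10) * 2654435761u;
}

bool accept_msg(uint32_t device_key, uint32_t *last_counter, const sensor_msg *m)
{
    if (compute_tag(device_key, m) != m->tag)
        return false;               /* tampered, or wrong device key */
    if (m->counter <= *last_counter)
        return false;               /* replayed or out-of-order message */
    *last_counter = m->counter;
    return true;
}
```

Because the key is unique per device, the server can also tell exactly which device a message came from - and revoke just that one key if the device is compromised.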
I will not be able to tell you in a paragraph or two which cryptographic algorithms, which components, etc. to use. That tends to be unique to each individual situation, and if you're at the point in your product development where that question comes up and you don't really know how to go on: please hire a security expert. Yes, they are expensive - no one said security wouldn't be. There are things you can do yourself, however. First of all, design the system with security in mind from the start of the project. It is not something you can bolt on afterwards. One of the prime things you should pay attention to is buffer overflows, still a major attack vector, especially on embedded devices. This is where tools like static code analyzers can help out, but these issues should also be found with proper code reviews. Every buffer in your code should be accounted for, and its boundaries known. Every bit of code touching said buffer should be verified to adhere to those boundaries. You want to skip dynamic memory allocation on embedded devices altogether; use statically allocated buffers, or a memory pool mechanism.
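"Accounted for" can be made concrete by pairing every fixed buffer with a length field and routing all writes through one bounds-checked routine - as in this small sketch (the names and the 64-byte size are of course arbitrary):

```c
/* Sketch of an explicitly accounted-for buffer: a statically sized
 * line buffer whose single append routine can never write past the
 * end, instead of strcat() into memory of unknown size. */
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

#define LINE_MAX_LEN 64

typedef struct {
    char   data[LINE_MAX_LEN];
    size_t used;                /* invariant: used < LINE_MAX_LEN */
} line_buf;

/* Append src; returns false (and leaves the buffer unchanged)
 * rather than overflowing when the text does not fit. */
bool line_append(line_buf *b, const char *src)
{
    size_t n = strlen(src);
    if (n > LINE_MAX_LEN - 1 - b->used)   /* reserve room for '\0' */
        return false;
    memcpy(b->data + b->used, src, n);
    b->used += n;
    b->data[b->used] = '\0';
    return true;
}
```

Every write path goes through `line_append`, so a reviewer only has to verify the boundary check in one place - exactly the kind of property a code review or static analyzer can confirm.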
Not everything needs Linux! Yes, Linux is nice, but it does bring a bunch of problems with it. It's also overkill for a lot of projects I have seen. You might want to investigate alternatives such as L4 kernels for some projects, or smaller hardware with a (real-time) operating system such as FreeRTOS. When it comes to L4 kernels, the object capability model of modern L4 kernels like Fiasco.OC can help you keep your TCB small, and could even allow you to run trusted and non-trusted code on the same system without the non-trusted part impacting the trusted part. Again, I'm not saying that doing this is easy - but it might open a door for you.
There is a heck of a lot more to write on this subject, but to keep this entry from getting too large, I'll end here for now. Maybe I'll write a series of articles going into more detail and depth, time permitting. I am convinced, however, that if IoT developers would take security more seriously and follow even some of what I wrote here, the overall situation would already be much better than it is now.