Media Details

Securing a World of Physically Capable Computers: Six Lessons in Security

2024-03-23 | 30:21

Computer security is no longer about data; it's about life and property. This change makes an enormous difference, and will shake up our industry in many ways. First, data authentication and integrity will become more important than confidentiality. And second, our largely regulation-free Internet will become a thing of the past. Soon we will no longer have a choice between government regulation and no government regulation. Our choice is between smart government regulation and stupid government regulation. Given this future, it's vital that we look back at what we've learned from past attempts to secure these systems, and forward at what technologies, laws, regulations, economic incentives, and social norms we need to secure them in the future.

Participants

Bruce Schneier

Bruce Schneier is an internationally renowned security technologist, called a “security guru” by the Economist. He is the New York Times best-selling author of 14 books -- including Click Here to Kill Everybody -- as well as hundreds of articles, essays, and academic papers. His influential newsletter Crypto-Gram and blog Schneier on Security are read by over 250,000 people. Schneier is a fellow at the Berkman Klein Center for Internet and Society at Harvard University, a Lecturer in Public Policy at the Harvard Kennedy School, a board member of the Electronic Frontier Foundation and AccessNow, and an advisory board member of EPIC and VerifiedVoting.org. He is the Chief of Security Architecture at Inrupt, Inc.

Transcript

This is a provisional transcript. Copy may not be in its final form.

SECURING A WORLD OF PHYSICALLY CAPABLE COMPUTERS: SIX LESSONS IN SECURITY

Broadcast on October 26th, 2022 

BS: We are creating a world where everything is a computer. This phone is a small portable computer that happens to make phone calls; your refrigerator is a computer that keeps things cold. Your microwave oven is a computer that makes things hot. An ATM machine is just a computer with money inside. Your car is a computer with four wheels and an engine; actually, it's a hundred-plus-computer distributed system with four wheels and an engine. This is more than the Internet. It's actually more than the Internet of Things. It's an immersive computational world that we live in 24/7. And this means two things: it means that internet security becomes everything security, and it means all the lessons from my world of computer security become applicable to everything, everywhere.

So I want to start with six lessons about why computers are still hard to secure. Lesson one: most software is poorly written and insecure, and this is basically economic. We don't want to pay for quality software. Good, fast, cheap; pick any two. We tend to pick fast and cheap, and features over security; that's what the market wants. Now, poor software is full of bugs. Some of those bugs are security vulnerabilities, some of those are exploitable, and some of those are exploited. This is why the software you have always has security flaws and exploits, and this is why you're seeing operating systems patched every month: because they're full of bugs and full of vulnerabilities.
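
As a minimal illustration of how an ordinary bug is also a security vulnerability (my sketch, not from the talk; the function names are hypothetical), consider a routine that builds a shell command from user-controlled input:

```python
import subprocess

# Vulnerable: user-controlled input is interpolated into a shell command,
# so a filename like "x; rm -rf ~" becomes arbitrary command execution.
def archive_log_unsafe(filename: str) -> None:
    subprocess.run(f"gzip {filename}", shell=True)

# Safer: arguments passed as a list are never parsed by a shell.
def archive_log(filename: str) -> None:
    subprocess.run(["gzip", filename], check=True)
```

The bug is mundane; the exploit is not.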

Lesson two: the internet was never designed with security in mind. Now, this sounds kind of crazy when I say it today, but in the early days of the internet two things were true. One, it wasn't used for anything important ever; and two, there were organizational constraints that limited who had access to the internet. There was a conscious decision to ignore security on the internet and leave it to the endpoints. And then we did something really weird: we connected single-user personal computers that were never designed to be on a network onto this network that expected the endpoints to have security. And we are still living with the effects of this decision, in the domain name system, in routing security, in email; these are all insecure protocols because they were designed that way.

The third lesson is that the extensibility of a computerized system means that everything can be used against us. Now, extensibility is an important attribute of computers. Basically, what it means is that you can't constrain the functionality of a computerized device, because it runs software. When I was a kid, I had a telephone in the home: a big black thing attached to the wall. Great device; it couldn't do anything other than be a telephone. This, by contrast, is a computer that makes phone calls; it can do whatever you want. Apple's slogan for the iPhone was "there's an app for that." You can download new functionality onto your phone with software, and that makes it very hard to secure. Designers can't anticipate every use or condition, and you can upgrade the device with new features, and some of those features are insecurities. That's what malware is: an additional feature on your phone that you didn't ask for, that you didn't pay for, and that works against you. And that's because of extensibility.

The fourth lesson is complexity: the complexity of computerized systems means that attack is easier than defense. There are a lot of reasons for this, but basically, complexity is the worst enemy of security. The more complex a system is, the harder it is to secure and the easier it is for an attacker to find a vulnerability. Complex systems have larger attack surfaces.

The fifth lesson is that there are new vulnerabilities in the interconnections, as we connect things to each other. A vulnerability in one thing affects other things, and we see this again and again in cyberattacks: a vulnerability here allows a hacker to penetrate a company, and they move through the network and get to some important part of it. We see that in governments, criminals give us lots of examples of this, and we'll talk more about it when we talk about supply chain attacks. These are really hard vulnerabilities to fix, because you don't actually know what's going on until you're inside the network.

So lesson six is that attacks always get better, easier, and faster. Some of this is Moore's law: computers get faster. A password that was strong 10 years ago might not be strong today, not because we're smarter at password guessing but because we're faster at it. But we do get smarter, too. Attackers get smarter and adapt, and expertise flows downhill: what today is a top-secret NSA program is tomorrow someone's PhD thesis, and the next day it's a criminal hacker tool. We see this again and again, in all sorts of areas. So none of these six lessons are new, and we've been living with them for decades, but up to now it's really been a manageable problem. I think, for a confluence of reasons, we're reaching a crisis, and that's what I want to talk about: physically capable computers, because automation, autonomy, and physical agency bring new dangers.
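
A rough back-of-the-envelope sketch of the "faster" half of that claim (my numbers and assumptions, not the talk's):

```python
# Assumption: brute-force guessing hardware doubles in speed every ~2 years.
GUESSES_PER_SEC_2014 = 1e9        # assumed rate for a 2014-era cracking rig
DOUBLING_PERIOD_YEARS = 2

def expected_crack_time_seconds(entropy_bits: float, year: int) -> float:
    rate = GUESSES_PER_SEC_2014 * 2 ** ((year - 2014) / DOUBLING_PERIOD_YEARS)
    return 2 ** entropy_bits / rate   # time to search the whole keyspace

for year in (2014, 2024):
    days = expected_crack_time_seconds(48, year) / 86400
    print(f"48-bit password, {year}: about {days:.2f} days of search")
# Roughly 3.3 days in 2014 versus about 0.1 days (a few hours) in 2024,
# with no improvement at all in the guessing algorithm itself.
```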

So, traditionally in computer security, we're concerned with confidentiality. Actually, let me take a step back. There's something called the CIA triad: the three properties we're supposed to provide are confidentiality, integrity, and availability. Normally confidentiality is the big concern: privacy, data theft, data misuse. When you see a computer security story in the news, it is usually a confidentiality story. But the threats come in many forms. There are availability threats: ransomware is an availability threat, since you can't get to your data (it's also a privacy attack, but it's also availability), and DDoS is an availability threat. There are integrity threats: if I hack a bank and change the bank balances, I'm not stealing any data, but by manipulating the integrity of the data, I'm stealing money.
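
As a minimal sketch of what defending integrity looks like in code (my illustration, not the talk's; the key handling is deliberately simplified), a keyed MAC over a record lets you detect exactly that kind of tampering:

```python
import hashlib
import hmac

KEY = b"shared-secret-key"  # placeholder; real systems manage keys carefully

def tag(record: bytes) -> bytes:
    # Compute a keyed MAC; without the key, a valid tag can't be forged.
    return hmac.new(KEY, record, hashlib.sha256).digest()

record = b"patient=1234;blood_type=O-"
t = tag(record)

tampered = b"patient=1234;blood_type=AB+"
print(hmac.compare_digest(t, tag(record)))    # True: record is intact
print(hmac.compare_digest(t, tag(tampered)))  # False: integrity violation
```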

So all three are important, but today integrity and availability threats are much worse than confidentiality threats. Their effects are greater, and there are real risks to life and property. So I'm concerned if someone hacks the hospital and steals my confidential medical records, but I'm much more concerned that they change my blood type. That's a data integrity attack. And I don't want someone to hack my car and eavesdrop on the conversations I have through the built-in microphone, but I'm much more concerned if they remotely disable the brakes. That is a data availability attack. So cars, medical devices, drones, weapons systems, thermostats, power plants, smart-city systems: integrity and availability matter much more. We're concerned about DDoS attacks against critical systems like hospitals; we're concerned about ransomware against your car. There's a fundamental difference between your computer crashing, so you lose your spreadsheet data, and your pacemaker crashing, so you lose your life. And it might be the exact same CPU and operating system and application software and vulnerability and attack tool and attack; the only difference is what the computer is doing and what it's attached to. And these trends are even more dangerous as our systems become even more critical.

So I have a seventh lesson: computers fail differently. They all work perfectly until one day, when none of them do. Think about cars: normal cars fail occasionally, and we have auto repair shops that can deal with car failures. But we don't have the ability to deal with all the cars being broken, or all the cars of one make or model or year. We just can't handle that kind of failure, because we're not used to it. So we worry about someone crashing all the cars, shutting down all the power plants, and so on.

So at the same time that these seven lessons are true, and computers are becoming more physical, some of our long-standing security systems are failing. I want to talk about three of them. The first one is patching. Our computers and phones are as secure as they are for two reasons. The first is that there are engineers at Microsoft and Apple and Google who design them to be as secure as they can in the first place. And the second is that those teams are able to quickly and effectively deliver security patches when someone discovers a new vulnerability. You patch your iPhone all the time; you patch your Windows machine the second Tuesday of every month. So this works, and it's how we deal with low-quality software. But it's not true for low-cost embedded systems like DVRs and home routers. Those are generally built at a much lower profit margin, often by third parties in some other country. They don't have dedicated security teams associated with those devices. Even worse, many of them have no way to be patched. Right now, the way you patch your home router is you throw it away and buy a new one; that's the patch mechanism, and we don't have another one. And actually, that works pretty well: we do get security from the fact that we replace our computers and phones every couple of years. But this is not true for embedded systems.

You're going to replace your DVR what, every five to ten years? Your refrigerator, every 25 years? I bought a new home thermostat; I expect to replace it approximately never. And maintaining old stuff is hard. Imagine you buy a car today; I'm going to make this up: the software is three years old, you drive the car for 10 years and sell it. Somebody else buys it, they drive it for 10 years, they sell it. At least in the United States, what happens next is it gets put on a boat and shipped to somewhere in Latin America, where someone else buys it and drives it for another 10 to 20 years. So think about this. Try to find a computer from 1979. Try to boot it up, try to make it work, try to make it secure. We have no idea how to maintain 40-year-old consumer software. Microsoft and Apple both deprecate their operating systems after about 10 years, because it's really hard to maintain the old stuff.

The problem is even worse for low-cost consumer devices. You don't have the big companies; you have little companies going out of business. You just don't have that same ecosystem. And those vulnerabilities will remain in our network and be used to attack other third parties. The second thing that's starting to fail is authentication, and that actually only just barely worked anyway. Human-memorizable passwords are no longer suitable in many situations. Two-factor authentication, you know, works where it does and doesn't where it doesn't. Backup authentication is terrible. But the amount of authenticating that we're about to do is going to explode.

So normally, when you authenticate, it's one of two things; actually, I'll demonstrate them both. Okay, so I logged into my phone, which is an Apple device and uses Face ID, and then I checked my email. That was me authenticating to an object, and me authenticating to my email service, and you saw that it works really well. What's going to change is the Internet of Things, because those things are all about talking to each other behind our backs. Things authenticate to things. That's what 5G is for, and that's what we're going to see a lot of. If you have 100 IoT devices that need to talk to each other, that's ten thousand authentications. If you have a thousand devices, that's a million authentications. We may not need all of them, but we don't know how to authenticate things to things at scale. We have a system today: when I get into my car, this device automatically authenticates to the car and uses its microphone and speakers, and if you have a car, you probably have that too. You know how that works: that's Bluetooth. But if you think about how that worked, you set it up manually. Bluetooth requires manual setup, and it doesn't scale. A couple of dozen things, sure; a hundred things, a thousand things, not a chance. This phone is also an IoT controller hub: if you have an IoT anything, you control it from an app on your phone. That also doesn't scale to a hundred or a thousand things, and we don't have a really good answer there.
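
The arithmetic behind those numbers, in a quick sketch of my own:

```python
# If every device may need to authenticate to every other device, the number
# of ordered device pairs grows roughly with the square of the device count.
def pairwise_authentications(n_devices: int) -> int:
    return n_devices * (n_devices - 1)   # each direction counted separately

for n in (10, 100, 1000):
    print(f"{n} devices -> {pairwise_authentications(n):,} authentications")
# 10 -> 90; 100 -> 9,900 (the talk rounds to ten thousand); 1000 -> 999,000
```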

The third thing that's failing is supply chain security, which is actually insurmountably hard. In the United States and other countries, the debate is a lot about Huawei: should we trust Chinese networking equipment? A few years ago in the United States, it was about Kaspersky: should we trust Russian antivirus software? And this is not just in the U.S. In 2014, China banned Kaspersky and the U.S. company Symantec. In 2017, India banned a bunch of Chinese smartphone apps. I remember that in 1997 there was a debate in the United States about a company called Check Point, which was Israeli: should we trust an Israeli security company? Now, this is actually an important security question: can you trust technology from a country whose government you don't trust?

But that is just the beginning of a much more complex problem. This is a U.S. device, but its chips aren't made in the U.S., and neither is its software; its engineers probably hold a hundred different passports. It's a very international device. And we have found back doors in U.S. products: Juniper firewalls, D-Link routers. So it's not just about the country where the company is, because all these companies are international. And there's more. We have to trust the development process: we're seeing many hacked software libraries making their way into products. You have to trust the distribution mechanism: there are fake apps in the Google Play Store. You have to trust the update mechanism: the SolarWinds attack was a hacked update that the company pushed out. You have to trust the shipping mechanism.
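
As one small illustration of what "trusting the update mechanism" involves (my sketch, with a placeholder digest, not any vendor's actual mechanism): a client can at least verify that a downloaded update matches a digest published out of band. Note that this wouldn't have helped against SolarWinds, where the compromised vendor signed and published the malicious update itself.

```python
import hashlib

EXPECTED_SHA256 = "placeholder-digest-from-vendor"  # hypothetical value

def verify_update(path: str) -> bool:
    # Hash the downloaded file in chunks and compare to the published digest.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == EXPECTED_SHA256
```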

There's a famous Snowden document showing NSA employees putting a back door into Cisco equipment destined for a Syrian telephone company. I mean, these risks are so complex that we don't even know what's true. There's a Bloomberg story from 2018 about hacked chips on motherboards destined for cloud providers around the world. A really weird story: lots of people denied it, and we still don't know whether it was true. What this means is that you have to trust everyone, yet you can't trust anyone, and there's no good answer here. This is actually a hard problem.

So this is a perfect storm, really: security is failing just as everything is becoming connected. We've been okay with an unregulated tech space because it didn't matter; that is no longer sustainable. So primarily, this is a policy problem, and getting the policy right is critical. Law and technology have to work together. If there's one important lesson from Edward Snowden, this is it: we always knew that technology can subvert law, and he showed us how law can subvert technology. I've written about this extensively, and there are lots of different parts to making it work: standards, regulations, liabilities, international treaties. You know the devil is in the details, and there are a lot of details here, so I want to highlight three principles.

The first one is a policy principle, and it's that defense must dominate. Or, the way I like to say it: one world, one network, one answer. When the NSA was formed in the '50s, it had a dual mission: to defend U.S. and allied communications, and to eavesdrop on Soviet and Warsaw Pact communications. And it was able to do both because they were different things. Those days are over; everyone uses the same stuff. We all use Microsoft Word and TCP/IP and PDF files and iPhones. So either it's security for everyone or security for no one. Or, putting it another way: either everyone gets to spy or no one gets to spy. You can't build a system where the good guys can spy and the bad guys can't, and we see that problem again and again. One example is IMSI catchers, sometimes called stingrays; basically, they're fake cell phone towers. This was a big secret around the year 2000. In the U.S., the FBI was using them domestically to eavesdrop on people, and their existence was a big secret; we would drop court cases rather than let the evidence become public. The problem is, it wasn't really a secret: they were being used by other countries, and they were being used against the U.S. I think in 2014, a magazine drove around Washington, D.C. and found lots of these, run by we-don't-know-who. And today you can build one yourself; there's software available on GitHub, if you have a radio card for your computer. You can buy one on alibaba.com; they cost about a thousand dollars. So you either have to build the cell phone network to prevent these kinds of attacks, or you're vulnerable to them. You can't use them and keep them secret. And so we have to design for security, and not design for surveillance. It's a really important principle.

I have a tech principle, and that's that we need to build in resilience. We need to assume insecurity and design systems that work anyway. There are lots of ways to do this, and we kind of know what they are: defense in depth, compartmentalization, avoiding single points of failure, ways to fail safe and fail securely, removing functionality, deleting data, systems that monitor other systems. And there's a really interesting research question here, one I think rivals the internet itself. The internet, back in the '60s and '70s, was created to answer this question: can we build a reliable network out of unreliable parts in an unreliable world? I have a similar question: can we build a secure network out of insecure parts in an insecure world? The answer isn't obviously yes, but it's not obviously no either, and we should find out.
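
A toy sketch of one of those ideas, "fail securely" (mine, not the talk's; the policy table and names are made up): when the authorization check itself breaks, deny by default rather than letting the failure become an open door.

```python
POLICY = {("alice", "thermostat"): True}   # hypothetical access policy

def lookup_policy(user: str, resource: str) -> bool:
    return POLICY[(user, resource)]        # raises KeyError for unknown pairs

def is_authorized(user: str, resource: str) -> bool:
    try:
        return lookup_policy(user, resource)
    except Exception:
        return False   # fail closed: errors deny access instead of granting it

print(is_authorized("alice", "thermostat"))  # True
print(is_authorized("mallory", "brakes"))    # False, even with no rule defined
```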

And then I have an economic principle, and that's that we need to build in security incentives. So let me tell you the story of SolarWinds. SolarWinds is a network infrastructure company that was hacked by the Russians a couple of years ago. The company was owned by a private equity firm, Thoma Bravo. And the way private equity tends to work is they buy up companies that are sticky, where it's kind of annoying for the customers to leave, and then they make the products and services as bad as possible without losing their customers too quickly, pulling out as much money as they can. That's what happened with SolarWinds, and that's the reason security was so bad that the Russians were able to penetrate the company, put a back door in an update, and get it sent out to 14,000 networks worldwide. And this kind of economic incentive problem isn't an anomaly. It's why banking websites are insecure, it's why phones are so vulnerable to SIM swapping attacks, it's why Facebook and the social networks do such a terrible job at removing hate and misinformation, and it's why IoT devices are so insecure. The economic incentives are for insecurity. We need to raise the cost of insecurity and thereby incentivise security, and this will spur innovation in security.

The real question, I think, is how to get from here to there. Markets can't solve this: they're short-term, they're profit-motivated at the expense of society, and they don't solve collective action problems. Government is the entity we use to solve problems like this; government is how we act collectively as citizens and not individually as consumers. Of course there are going to be problems. It's actually really hard for governments to be proactive. Regulatory capture is a thing. And there's a real difference between safety and security: the difference between a static safety environment and an intelligent, adaptive human adversary. Hurricanes and tornadoes don't change their behavior based on your defenses, but a human adversary does, and that is a big difference. There's also the question of how to regulate security in this fast-moving tech environment, and that's not easy. So there are a lot of details, and I don't have them all here, but the alternative is not viable any longer.

I give this talk when I'm in Silicon Valley, which is very libertarian, and what I tell them is that governments are getting involved regardless. The risks are too great, the stakes are too high. Government is already involved in physical systems: cars, planes, consumer goods. The physical threats of the IoT will spur them into action. Our choice isn't between government involvement and no government involvement; our choice is between smarter government involvement and stupider government involvement. And I want us to think about this now, the pros and cons, the details, so we're not surprised with bad regulation. Because good regulation incentivises private industry.

“Regulation will stifle innovation”: that's a threat you hear whenever anyone talks about regulation. And we've heard it again and again, with restaurant sanitation codes, automobile safety, everything. There's actually no evidence for it, because if you do it right, regulation leads to innovation: you regulate outcomes, and that spurs innovation in achieving those outcomes. Europe is moving in this direction. They passed the GDPR, the General Data Protection Regulation, with strong privacy and security requirements and real financial penalties that they're imposing on companies. They're regulating AI; they're regulating the Internet of Things.

The United States is not doing much; the states are leading the way. Specifically, California, New York, and Massachusetts are where we're seeing regulatory innovation. The existing federal agencies in the U.S. are starting to get involved: the FAA for aviation, the DOT for cars, the FDA for medical devices, and so on. That's where you're seeing movement; I haven't seen much elsewhere. But the international considerations are really interesting here, because software is different. The car I buy in the United States is not the same car I buy in Mexico: the environmental laws are different, and the manufacturers tune their engines to match the local laws. That's not true for software. The Facebook I get in the United States is exactly the same Facebook I get in Mexico, because it's easier that way. So California passed an IoT security law; I think it came into effect in 2020. One of the provisions was no default passwords. So if you sell an IoT anything, let's say a thermostat, in California, you can't have a default password. And I assure you that every company that makes thermostats doesn't have two versions, one for California and one for the rest of us; they remove their default password, and we all benefit.
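
What "no default passwords" tends to mean in practice is provisioning each unit with its own random credential at the factory. A minimal sketch of that (my illustration; the law specifies the outcome, not the code):

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits

def provision_device_password(length: int = 16) -> str:
    # Generate a cryptographically random, per-device password.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(provision_device_password())  # unique per unit, e.g. printed on its label
```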

When GDPR came into force, I was working for IBM, and they said: we're going to implement GDPR worldwide, because that's easier than figuring out who is European. Smart regulations in a few large markets improve cybersecurity everywhere. And again, I don't see an alternative. There isn't an industry in the past 150 years that has improved safety or security without being forced to by government: cars, planes, pharmaceuticals, food production, medical devices, restaurants, consumer goods, workplace conditions, and most recently, in the United States, financial products. It takes government to step in, impose standards, and force companies to raise their quality. That's the way it works.

So for us, right now, technologists need to get involved in policy, and I think this might be my most important point. As IoT security becomes everything security, internet security technology becomes more important to overall security policy, and all the security policy issues will have strong technological components. We will never get the policy right if policymakers get the technology wrong, and we see that again and again. The going-dark debate: should law enforcement have the ability to put back doors in our communications? The vulnerability equities debate: when someone like the NSA finds a vulnerability in a system we all use, should they keep it secret and use it to spy on others, or should they make it public and allow all of us to defend ourselves? You have to pick one; you can't do both. What to do about voting machine security? A huge area of policy and technology. What to do about driverless car security? Another huge area.

We can go down the line. A couple of years ago, there was a debate between Apple and the FBI about back doors, and again, policymakers and technologists talked right past each other. The debate about the DMCA and copyright protection has the exact same problem: policymakers want one thing, not understanding that the technology does another.

There was a Facebook hearing recently, with Mark Zuckerberg in front of U.S. senators, and someone asked him, “How does Facebook make money?” Those policymakers had no idea how to regulate the technology, and we need to fix this. We need what I call public interest technologists: people who work at the intersection of technology and policy. Technologists need to get involved in policy discussions, and policymakers need technological expertise; not all of them, but enough that we can bridge this gap and actually create policy that works for technology. And we need technologists everywhere: on legislative staffs, in government agencies, at NGOs, as part of the press. This is actually a bigger issue than security, because all of the important policy issues of our century will have strong technological components.

We need to build a world where this is a viable career path for the people we call public interest technologists. If we don't do that, bad policy happens to us.