EP075 - Ethical hacking to secure IoT systems - Ted Harrington, Executive Partner, Independent Security Evaluators


Nov 24, 2020

In this episode, we discuss the ethical hacking of the IoT cybersecurity attack surface and best practices for securing IoT products, as well as the steps system operators and end users can take to ensure system security as they progress through digital transformation.


Ted Harrington is an Executive Partner of Independent Security Evaluators. ISE is an ethical hacking firm that identifies and resolves cybersecurity vulnerabilities. ISE is dedicated to securing high-value assets for global enterprises and performing groundbreaking security research. Using an adversary-centric perspective, ISE improves overall security posture, protects digital assets, hardens existing technologies, secures infrastructures, and works with development teams to ensure product security prior to deployment. ise.io/research


Contact Ted:

ted@ise.io

https://www.linkedin.com/in/securityted/


Ted’s new book: hackablebook.com 



Erik: Welcome to the Industrial IoT Spotlight, your number one spot for insight from industrial IoT thought leaders who are transforming businesses today with your host, Erik Walenza.

Welcome back to the Industrial IoT Spotlight podcast. I'm your host, Erik Walenza, CEO of IoT ONE. And our guest today is Ted Harrington, Executive Partner of Independent Security Evaluators. Independent Security Evaluators is an ethical hacking firm that identifies and resolves security vulnerabilities. In this talk, we discuss the IoT cybersecurity attack surface and best practices for securing IoT products. We also explore the steps system operators and end users can take to ensure that their systems remain secure as they progress through their digital transformation.

If you find these conversations valuable, please leave us a comment and a five-star review. And if you'd like to share your company's story or recommend a speaker, please email us at team@IoTone.com. Thank you. Ted, thank you for joining us today.

Ted: Of course, thank you so much for having me.

Erik: So, Ted, before we get kicked off into our conversation of IoT security, I want to do a bit of a deep dive into your background. I think you actually graduated in psychology. How did you make it from there now to the executive partner of Independent Security Evaluators? What was it that led you towards the cybersecurity track?

Ted: Yeah, I've just always been an entrepreneur, on the path to entrepreneurship my whole life. And actually, it's funny you mention it, because very few people actually ask about my degree; it was a long time ago.

But there were two reasons I even studied psychology. One, because I wanted to understand how people think. And two, I didn't know it at the time, but it was going to wind up being such a centrally important part of my career in security. The absolute core of everything that we teach, talk about, and preach is that you have to think like an attacker. You have to be able to put yourself in the shoes of someone else and understand what motivates them and how they make decisions. And so, in an unexpected way, that very unrelated undergraduate degree serves me tremendously today.

Erik: So it's not so much the technical challenges, but thinking it through from the hacker perspective, and I guess also from the end user perspective: what are the mistakes an end user would make, the ways they might use the technology, that would leave them vulnerable to a breach? Technical know-how alone is not sufficient to secure an environment.

Ted: What I've found really interesting over the many years of doing this is that when you ask why systems get broken, there's obviously a long laundry list of technical reasons. But the fundamental common thread that weaves throughout all of them is assumptions. People who build things assume that people who use things will or won't use them in certain ways: they will do this, or they'll never do that. And what winds up happening is that attackers look at those things that a normal person would do, and they try to find those things and do them the opposite way.

I mean, I'm in conference rooms and boardrooms all the time with companies, walking them through their attack model. And we'll ask, okay, well, what if someone did X? And people will literally say in response to that question, oh, well, no one would think to do X. And I'm like, I literally just did, and I just asked you about it. So if I thought of it, someone else has too. That kind of stuff happens all the time. And so those assumptions are really one of the things woven throughout so many of today's security challenges.

Erik: And then you set up this company, Independent Security Evaluators, in 2012. What was it about that timing? Was it just the right group of people coming together, or did you see some particular need in the market? What were you doing then to set up the company?

Ted: At that point, I was running a different technology company, actually in a different sector: water. My now business partner reached out to a friend of his from the Ph.D. program at Johns Hopkins, and I knew that guy as well through a friend of mine from Georgetown, where I went to school. I was looking for the next stage of my career, and my business partner was looking for somebody to help him who had essentially the capabilities that I had. He had actually started the company, I can't remember exactly, I think about six years prior to that. And he had already decided he wanted to restart it.

And so together, that's what we set out to do: to take this company that had succeeded and had some really amazing customers, but had really just not gone anywhere in terms of its growth, and say, hey, there's something here, a lot of people need this service. Let's do it. You look at the world in 2012, and people really needed what we wound up doing, which is that companies need to find their vulnerabilities. They need to find somebody who can help them understand how an attacker will break their system. And it's pretty fun, because people pay us. We're the good guys who get to do the bad guy stuff; we get to do the hacking and not go to jail.

Erik: Can you give us a couple of background stats? Whether we're talking about the amount of money at stake, the number of hacks, whatever that might be, what's been the evolution, from a top-down perspective, of the cybersecurity threat over the past decade or so?

Ted: Well, it's definitely changing. The attack landscape is definitely evolving, and it will forever evolve, because of something that will always be inherent to the human spirit, which is innovation. And so as companies are continuing to innovate and develop new solutions, fundamentally, as those innovations play out in technology, that's going to change the way that a system can be attacked.

And so some of the changes that I've seen over the past several years: one in particular is that companies now are starting to warm up to the idea of what security researchers and ethical hackers like us can do for them. If you rewind, let's say, 20 years, the way that companies related to security researchers was to sue them. They really didn't like receiving research findings. Now there are more formalized programs to work with companies, but there's still a long way to go.

But the other really big thing, as we look at the way companies are changing their relationship with technology and how business benefits are being delivered (and whether you serve consumers or you serve other businesses), is this move to software, and to applications, as the way to essentially drive businesses.

I wrote a book called “Hackable”, and it's all about how to do application security right. Because I see that as really the future of technology. And in fact, we already live in that future. But software is only going to become more and more part of the way that pretty much everything operates. And so, figuring out today how to secure that, I mean, that's the center of the mission. We've got to get that right.

Erik: I love your synopsis here of “Hackable” and some of the challenges: security hurts UX and design, security slows development, changes are endless, security isn't your job. There are all these reasons not to properly invest in security, because you're focused on getting the functionality to your customer as quickly as possible so you can win that market. And then you get into these hypotheticals: oh, but if we don't invest in security, we could have some hypothetical challenge down the road. It's kind of intuitive that this can be a challenging topic for companies to properly prioritize, especially for younger startups or high-growth companies that have so much pressure from their investors.

But it's like anything; it's like insurance to an extent. Put your seatbelt on: you might only get in a crash once every 20 years, but when you do get in that crash, you want that seatbelt to be on. Maybe we can get into the details a little bit of the company Independent Security Evaluators.

But a good starting point would be: who are you talking to? Are you talking more to the large-scale multinationals, or to the high-growth startups that are bringing products to market? And at those organizations, are you talking to a dedicated security organization or the IT function? Or are you talking to the product teams that might have a lot of other conflicting priorities, trying to educate them about how they can build in security without necessarily sacrificing all those other priorities?

Ted: Well, I'm going to start by challenging you on the way that you just framed security as like insurance. A lot of people actually make that comparison. The very first words in chapter one of my book are literally, “Security is not like insurance.” And then I go on to describe why not. Essentially, the premise is that insurance doesn't prevent bad things; security does prevent bad things. Insurance doesn't necessarily deliver benefits along the way; security is tremendously beneficial along the way.

Instead, I think security is actually more like fitness. What you put in is what you get out. When you do it right, you join a community of enthusiasts, it makes you look better and feel better, and others are attracted to you. But it's hard, and not everyone will make it a priority. No, security is not like insurance.

To your question about who we work with, in terms of company size and then the people within the company: the kinds of companies who typically hire us range from funded startups all the way through to Fortune 10 size companies. And they all have the same security problems, in terms of needing to find their vulnerabilities and fix them. But their business problems sometimes vary a little bit. As you, I'm sure, know, at the larger enterprises there's usually a lot more bureaucracy and red tape, and a lot of just dumb things that get in the way of doing security right.

But smaller businesses that are funded have challenges like: I could go out of business in three months if we run out of money, so I have to really think about this investment in a different way. Then, within the company, who's hiring us? That depends. At the larger enterprises, some of our customers literally have security departments that are bigger than our entire company, and it's just their internal security team that hires us. And for a security consulting company, we're pretty big; today we're about 50 people. Most companies who do what we do are much smaller, like a talented person or a handful.

And then obviously, as you move into a smaller company, we might be dealing with the CEO in some cases. But that's generally the kind of people we're working with. They all sort of share the same commonality, which is that technology is a business driver, and their customers need them to make sure it's secure. And so we help them do two things: we help them, number one, secure it, and then, number two, prove it. And that's actually the core premise of the book that I wrote. The first nine chapters are here's how you have to secure it. And then chapter 10 is: if you did those things, and only if you did those things, here's how you actually prove it to your customers.

Erik: So for the proving aspect, I assume there's a technical part of that; you have to somehow demonstrate that it's been proved. But there's also a strong communication aspect: you have to communicate that value proposition. Do you get involved in both of those, or are you more on how to technically prove that you've secured it?

Ted: It's actually both of those things. You need to be able to communicate the technical proof. So essentially, our process: we help our customers find vulnerabilities, including, most importantly, the custom exploits and the really unexpected things. Then we advise them how to fix it, and they fix it. And once it's been fixed, we verify that it actually is fixed. That's when we give them their deliverables, which are reports, or we can even join meetings with them if needed. But essentially, we give them the materials so they can go to their customer and say, look how seriously we actually take security. Everyone else says, we take security seriously; they use a lot of words, and they say we're highly secure, and all this garbage that really isn't anything related to actually being secure. Our customers can instead say, I'll show it to you. That's essentially what we give them. But then ultimately, it's up to them to prove it to their customer.

Erik: I imagine for the first years of the company, you were focused primarily on internet-related topics. And of course, this podcast is a bit more related to IoT. So when we bring the internet down to the physical world, what does that look like for you today? I assume the majority of your business is still on more pure internet applications. But how has this evolution of IoT impacted your business? Are you already servicing companies in the space? If so, what are the types of asset classes and device categories that you've dealt with?

Ted: I'd argue, actually, that IoT security is in fact application security, because to operate a device, you need to interact with it through a piece of software. If we go back to about 2014: there's a really famous security research conference called DEFCON, and DEFCON is where all the coolest new research comes out. Because we're always publishing research, we had some relationships there. And they approached us and asked us to do this little event for them. They said, bring some routers, because we had just published some research that showed how we could hack all these routers. They said, can you bring them and somehow make that experiential? And so we said, sure.

So we showed up. That first year, imagine a big hotel conference room or convention space, and then you get into smaller and smaller and smaller rooms, and then there's always that one room in the back that's crazy small. We're in the back of that room with a bunch of other people in the way, literally behind a trash can. They placed the trash can right in front of our table. I'm like, okay, well, I guess we're going to try to change some hearts and minds with this thing that is the literal definition of the back corner of this conference.

So that year went well. And we started talking with DEFCON after that about maybe expanding the footprint. The next year, we launched this idea called IoT Village. IoT Village is essentially a hands-on hacking experience for IoT. What we wound up doing is bringing in different manufacturers to bring their devices, and we had speakers, and we built all these different contests, like a capture-the-flag style contest, and a zero-day hunting contest where you just try to find new vulnerabilities. And each year after that, it just kept getting bigger and better and more badass.

This year, live events are not happening. But fast forward to last summer, and this IoT Village was like the Taj Mahal of these events within the event, and we had all kinds of people. By this point, IoT Village has represented the discovery of, I think, some 300 zero-day vulnerabilities, which are issues that were previously unknown in these different devices, that now the manufacturers knew about and could potentially do something with.

And I think the crowning part of all of this: DEFCON has this part of their culture where they give out what's called a Black Badge to people who do something super awesome, super badass. And the Black Badge is kind of like the security community's equivalent of the Hall of Fame jacket.

And our contest is so well regarded in this community, and we're talking thousands of people playing this thing over the years, that our contest has been awarded the Black Badge, which people rarely get, not once, not twice, but three times. That highest honor happened three times. And so within our community, everyone's pretty jazzed about that.

So the reason I'm telling that story, going from behind the trash can to being, in some sense, “King of the Mountain” (and I don't mean that we're better than other villages at DEFCON, just that we had achieved the level of success that we wanted), is that in that journey we really got exposed to pretty much every aspect of IoT security. Not just: is it a device issue? Is it a hardware issue or a software issue? But also: what are the business constraints behind it? How is it different for a consumer device versus a medical device? How do security researchers work with these companies? And why do many of these IoT companies have difficulty working with researchers? We've really kind of seen it all, and it's just been a wild ride, for sure.

Erik: For some reason I have this story in mind where you hacked the iPhone. Is that right? I feel like I read that somewhere.

Ted: Yeah, we were the first company to hack the iPhone. My business partner, he's always like, why are we telling that story? Because at this point, it's like 13 years old. But everyone that we tell it to outside of our close circle within the security community is like, you guys were the first ones to hack the iPhone? If we can rewind in our mind's eye to when the iPhone was first going to come out, just remember how transformational that was. The day before, there was no such thing as a smartphone in your pocket, and then the next day there was; that's societal shifting. And we knew, or anticipated, that that was what the impact of this device was going to be. And so we really wanted one.

And part of the reason we wanted one was because they're awesome. Whatever else, it's just cool; we wanted one. But the other reason, and probably the main reason, was we wanted to hack it. The problem was, so did every other security researcher. Everyone who was looking at this type of situation wanted to get their hands on it and wanted to be first. And not only were we competing with other researchers, but we couldn't get any advantage. Apple was completely tight-lipped about what the new system might entail. We couldn't get our hands on a prerelease device. We couldn't even skip the line. We had to camp out like every other fanatic to get one.

And so we said, we need to create an advantage. What we did is we thought the way that a lot of attackers think, which is to ask, okay, can I consider how business practices might influence development practices? In this case, we said, well, as Apple moves from a desktop to a mobile world, might they carry over some unresolved vulnerabilities? So we were able to research those vulnerabilities, and we were able to build some attack scenarios around that in advance.

Then the iPhone comes out, and we started with that theory in mind, which we felt other researchers might not be anticipating. And, sure enough, our theory proved to be true. We wound up finding what's called a buffer overflow vulnerability, which essentially resulted in us taking full administrative control of the device.

How we proved the concept was by working with a reporter from the New York Times. Our lab was in Baltimore, 200 miles away. From our lab in Baltimore, we took over the New York Times reporter's phone sitting on his desk in Manhattan, and started showing him how we could delete text messages or send text messages, and how we could fire up the camera and take pictures. And of course, he was part of this research; we weren't exploiting an unwitting victim. That helped him write the story in the New York Times, and he ultimately was the first one to break that story. And that's how we achieved our goal, which was to be first.

But that's not where the story ends, because ultimately, the whole point of security research is to make things better. And that's true for our customers too. For anyone who hires us, or any research that we do by investing our own resources, the whole point is to just make things better. It's not to be a jerk. It's not to embarrass anybody. Although some researchers, I think, actually do that sometimes, unfortunately. But that's really not the point. It's to make it better.

And so once we found this issue, we were able to report it to Apple. And to their credit, I'm telling you, man, within I think 48 hours they had issued a patch, and this issue was completely eradicated. Some people don't like Apple. Some people are obsessed with Apple. But whichever side of the fence you're on, that's the way that companies need to think about security: it's about getting better, and you've got to find your issues and fix them. Because it's not about just building good products, it's about building secure products.

Erik: I think you're right to push back that IoT security is application security. It's just that you have a different set of interfaces. You have different compute constraints on the devices, which may limit to some extent how you build security. And then you mentioned that you look at IoT security from the perspective of different situations: what are the requirements of a medical device versus a consumer device, versus maybe an industrial application?

If you are encountering a new category, a new product, a new solution that a client is paying you to assess, before you get into actually conducting that assessment, what's the checklist that you'll mentally walk through to say, okay, what are the things that we have to consider for this particular category? Where do we start, and what's the scope of the assessment that we have to conduct on this particular IoT solution?

Ted: So every process starts with an exercise called Threat Modeling. I like to think of Threat Modeling as being like a scouting report against whatever the opposing team is in whatever sport you like playing. Whether that's American football or baseball, whatever you play, there's this idea of scouting reports, where one team will evaluate the other team to understand their strengths and their weaknesses and come up with a game plan. That's kind of what Threat Modeling is like.

And so what happens in Threat Modeling is you essentially try to understand three things. Number one, what are you trying to protect? Those are your assets. Is it data, such as personally identifiable information? Is it corporate intelligence? Or is it intangibles, like the reputation of the brand, or availability of the service, some of those more intangible things? So: what do you want to protect?

The second question is: who do you want to defend against? Are we talking about nation states, organized crime, hacktivists, casual hackers? How does the insider threat fit into all of this? And the reason that you think about each of those individual groups is that they're motivated differently. They have different skills and different resources. And depending on what your assets are, they're going to have different reasons why they might want to come after your company.

And then the third part of a Threat Model is to understand your attack surfaces: where will the system be attacked? Once you go through that exercise, and that's something we would typically do directly with any of the companies who hire us, we walk through it with them. But even if we're doing research, where we're not directly working with the company (when you do research, usually the company doesn't even know that it's happening, let alone participate in it), we're making some assumptions about what it might protect and why an attacker might be interested in that.

But once you've done all those things, now you can start thinking about misuse and abuse cases. That's really what any sort of security testing comes down to: how can you take the combination of these things, what does it protect, who does it defend against, and where will it be attacked, and then try to abuse those different attack surfaces? So that's always where we start.

And then from there, it just depends on what the system is. Should we be talking about admin interfaces? In the case of IoT, certainly, an attack surface is any sort of physical access to the device. Take the use case of deploying IoT devices in a hotel room: that device is as good as owned, because an attacker can just rent the hotel room and attack the device, because they can plug into it, and then every other guest after that is owned too. So those are the kinds of things that you have to think about.
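The three questions Ted walks through lend themselves to a simple checklist. Here is a minimal sketch, in Python, of how a team might capture a threat model as data and enumerate candidate misuse cases from it. The specific asset, adversary, and attack-surface names are illustrative assumptions for the hotel-room device example, not ISE's actual methodology:

```python
from dataclasses import dataclass, field

@dataclass
class ThreatModel:
    # The three threat-modeling questions:
    assets: list = field(default_factory=list)           # 1. What are you protecting?
    adversaries: list = field(default_factory=list)      # 2. Who do you defend against?
    attack_surfaces: list = field(default_factory=list)  # 3. Where will it be attacked?

    def misuse_cases(self):
        """Every (adversary, surface, asset) combination is a candidate abuse case."""
        return [(adv, surf, asset)
                for adv in self.adversaries
                for surf in self.attack_surfaces
                for asset in self.assets]

# Hypothetical model for the hotel-room IoT device described above
model = ThreatModel(
    assets=["guest PII", "brand reputation", "service availability"],
    adversaries=["casual hacker", "organized crime", "insider"],
    attack_surfaces=["admin interface", "physical port in the room", "cloud API"],
)
print(len(model.misuse_cases()))  # 3 x 3 x 3 = 27 combinations to brainstorm
```

Even this toy version makes the point: the combinations multiply quickly, which is why teams that skip the exercise end up relying on assumptions instead.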

Erik: I think a lot of our audience is coming from the industrial sector, or let's say at least the B2B environment. So thinking from that perspective, that means the environment is going to be, to some extent, controlled by the end user. Let's say we're not talking about a healthcare environment here, so we have a particular set of IP or system security requirements that an enterprise might have. How would you then start to assess this particular environment? Let me know if it's useful; we can even narrow down to a particular category of products, if that's necessary for you to start thinking through this.

Ted: Yeah, let's do that. Why don't we narrow down that focus on something, and think through live what the problem might be?

Erik: So let's think about an AGV in an environment. This could be in a warehouse or a factory. It could also be in a hospital or some other care facility. Basically, we have a guided vehicle that has a decent amount of compute power on it. It probably has its own system that's guiding it, but it's probably also connected to other systems, MES, ERP, some other type of system that's exchanging data. And then, of course, people also have access to it. At least the employees do, and I guess in some environments other people who are not employees might also have access to this AGV.

Ted: I'm not familiar with that acronym. What's an AGV?

Erik: Autonomous guided vehicle. So it could be like a forklift that's in a warehouse, or an autonomous vacuum cleaner at an airport that's just running around, cleaning up the environment. Or, in Japan, let's say, service robots that are acting as walking screens, communicating information to patients or to other people in hospitality environments. Basically, we have some mobile equipment that is 100% automated.

Ted: And the perspective you're interested in is that of the company deploying that solution, as opposed to the company who has built and sold that solution?

Erik: Exactly.

Ted: Yeah. So the first thing I'd say to think about is that in this type of scenario that we just laid out, there is a shared responsibility model. And I refer to what often happens instead as deferral of risk. I see deferral of risk happen all the time. This is what it looks like: the company who makes that AGV says, well, it's incumbent upon our customer to ensure that their network is secured and hardened in these ways.

And the company who is licensing or buying this connected device says, no, it's the responsibility of the device maker to license or sell us a secured solution. Now, both of them are right that, yes, each of the other parties plays a critically important role in ensuring that this system works the way it's supposed to. But this is where many organizations really stumble. You mentioned healthcare a moment ago, and it's rampant in healthcare; it's really bad there, and I'm sure it's equally bad in other industries too. It's almost to the point where some organizations completely abdicate responsibility entirely, to say, I can secure this as much as I want, but if you don't bring us a secured solution, then how am I going to do anything with that?

Erik: Are they actually trying to say, we don't have any legal liability because we want to contractually put all of the liability on our product makers or suppliers? Are they actually trying to write into contracts that they have no liability here?

Ted: Oh, there are endless legal battles about what security controls go into the contracts. But what's funny about that is that the attacker really doesn't care at all about the legal clauses. They don't care at all who's liable in the event of a breach. So while I do think it's a good idea to approach, legally, how to require different partners to approach security, and yes, you want them to do certain things, you have to realize that's actually not going to solve the problem.

I didn't mean to say that companies would necessarily say, well, it's no longer my responsibility because I don't want it to be my responsibility. I actually don't think that's true. I think that most companies do truly want to keep themselves secure to whatever extent they can. But I think a lot of companies feel, if not powerless, then limited in their power, because they're saying, well, should I even bother investing tremendously in my network?

Or even if I do, let's say I just built the metaphorical Fort Knox; well, if I deploy this vulnerable solution, and every single IoT headline would suggest that IoT solutions have security issues, where does that leave me? And so I think that's a bad scenario, that there's sometimes an adversarial relationship between supplier and buyer. But really, I think there is a positive and bright future to this, probably largely led by the larger companies, working in a more collaborative context.

Let's say you're a very large, multi-site type of company, and you are a large buyer of hot new technology from startups. Well, instead of trying to [inaudible 35:24] with all this legal language or whatever, partner with them. Invest some of your engineering resources, even financial resources, to help them get security right, because it benefits both of you. We actually see that happen in other, non-IoT industries. And it's still sort of a misunderstood, not really widely accepted approach, but it's tremendously powerful where people do it.

Erik: So I suppose in this environment, you have a physical device. A fair number of people have access to it, both employees and, if it's in a hospitality or hospital environment, probably non-employees who could get access if they wanted. So you have the device. Then you have the system, the application that's running that device. And then you have other applications that are integrated through to that device. So is there any 80/20 rule in where you see the threat landscape? Is it mostly in the application that's running the device? Is it the device itself? Is it third-party applications that are integrated into the device? Or does it just depend on the situation where the threat landscape might be focused?

Ted: Well, it definitely depends on the situation. But since you've done a good job of laying out a hypothetical, we'll use this situation. Where I think the emphasis should be is on any attack surface that would be accessible from basically anywhere in the world. So what does that remove from scope? This will sound contradictory, but it won't be: the worst case scenario is that someone can get physical access to the device, because in most cases, if you get physical access, it's probably game over.

Now, some devices have good anti-tamper mechanisms and so on. But the point is that's usually not a good situation. Still, there's a limited number of people who can do that. Even in this facility, maybe 25 people might be able to get physical access to it in some way or another. Okay, well, that's 25 people. But if the application that operates it is accessible to literally billions of people around the world, that's a much bigger problem, and the barrier for an attacker is much lower. An attacker might not fly to that site to try to get physical access to this thing rolling around the facility, but they very well may attack from wherever they live, in a non-extradition country, in order to achieve whatever their gain is. Maybe it's taking the system offline and asking for ransom or something.

So if I were to say, hey, you've got X dollars to spend, and X is a limited amount of money, I would start with something like that, because that's where your largest pool of attackers is. And if there are issues that can be trivially exploited that way, you want to tamp those down pretty fast.

Erik: But are integrations ever a serious security concern? Because in an enterprise environment, often you have the application that's running the particular asset or device, and that might be integrated with five different systems because data needs to be shared, and some of those systems might even have some command over the system, the ability to issue orders and impact how this application runs. Are those a concern? Or is the hacker just going to focus on the application that directly operates the device? Say, an MES might be sending orders through to the AGV software, telling the AGV what to pick up. Would somebody try to hack the [inaudible 39:17] because maybe they have a previous vulnerability that they've discovered, and then use that? Or would that be a very uncommon situation?

Ted: That's exactly how attackers actually operate. If you think back to that mega breach that happened to the retailer Target, that was a few years ago when it got hacked, what you described is exactly what happened. Rather than going directly at Target, the attackers went after one of Target's vendors, who had elevated trust and access to certain areas in the corporate environment of Target.

And that vendor was a small company that didn't have effective security controls. They weren't even really a technology company, though they had technology as part of their company. And they were compromised. Their elevated trust and access is how the attacker was able to get into Target's corporate network, and from there, they were able to pivot over into the payment area. And that's how they got all those cards.

So the way you described these integrations, this is called a 'Stepping Stone Attack,' and the idea of jumping from one thing to the next is exactly how attackers work. Maybe you're more of an ethical hacker than you think.
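The pivot Ted describes can be sketched as simple reachability over a trust graph. The node names below are invented for illustration, not Target's actual topology; the point is that a low-value vendor with elevated trust becomes the attacker's shortest path to a high-value asset.

```python
from collections import deque

# Hypothetical trust graph: an edge A -> B means a foothold on A
# grants some level of access to B. Node names are illustrative only.
TRUST = {
    "internet": ["vendor_portal"],
    "vendor_portal": ["corporate_network"],   # vendor has elevated trust
    "corporate_network": ["payment_systems"],
    "payment_systems": [],
}

def stepping_stone_path(graph, start, target):
    """Breadth-first search: the attacker's shortest pivot chain."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

path = stepping_stone_path(TRUST, "internet", "payment_systems")
print(" -> ".join(path))
```

The chain runs through the vendor rather than directly at the hardened perimeter, which is why the elevated trust of small partners matters so much.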

Erik: So you mentioned earlier that the security landscape is changing. Are we moving into an environment that's more secure, or generally less secure? And then maybe based on that answer, you can talk a little bit about where the environment might be becoming more secure, and where it might be becoming less secure. So how is the attack landscape around IoT devices or IoT systems changing?

Ted: About IoT. I was going to ask a clarifying question, which you just answered: are we talking about IoT, or just more generally? So within IoT, some companies are in fact getting better at it. They are definitely in the minority. But what's happening is the proliferation of IoT is expanding so fast: the adoption of all these different connected devices and all these different use cases, from cheap little light bulbs to huge, expensive autonomous systems.

And what I'm seeing is that the companies who are getting better are a direct result of all the progress the security community has made, and the knowledge transfer that's happening to the people who are building things. So that's really positive, and I don't want that to get lost in the story, because so much of the narrative about security in IoT is negative, and mostly, it is not good. But there are companies doing it well, and I can't name their names, because many of them are our customers, but they deserve to be celebrated, and it at least deserves to be pointed out: there are companies doing it right.

Unfortunately, as I mentioned about the proliferation, there's just such an influx of innovation in IoT, and most of that innovation is not treating security as central to it. So on the whole, if you average across everything, I actually see security in IoT getting worse over time, because of that rapid adoption without the market forces causing security to be prioritized. And that's why I'm advocating for stuff like this. That's why I was really happy to come join you on this show. That's why we do IoT Village. Because we know that it can get better. We've seen companies do it better. But unfortunately, we're seeing the scale at which companies are not doing it right increase exponentially. And that's a precarious situation for an industry in its entirety.

Erik: Now, that's an interesting point, that the velocity of innovation in a particular domain has a direct impact on security. And I guess in IoT, you have a lot of innovation around the machine learning algorithms and data processing, a lot of innovation around connectivity solutions, and a lot of innovation around the core hardware. And then, of course, there are just different form factors coming out, so a proliferation of products.

Are there any areas in particular that, whenever you're working with an IoT device customer, you immediately zero in on and say, okay, here are the three areas where we know there might be a significant security concern? Or is it just too fragmented to have that top three or that immediate red flag?

Ted: It's maybe hard to answer it directly the way that you asked it, but there is an indirect answer, which is: what are the common issues that companies should be dealing with or thinking about? The first issue has to do with leadership. For companies who are building things, if security is not a priority of the leadership, it will not be a priority for anybody else.

And I will argue till I'm blue in the face with anybody who tries to say, yeah, but I have a security guy who does that, I have an IT guy who does that. I don't know anything about security, but I hired a guy who hired a guy who hired a guy who's five layers beneath me, and yeah, I'm pretty sure we're secure. That is never, ever going to happen if it's not your priority. So I think we need to start with leadership realizing that security, just like every other domain in the business, is your responsibility.

Security is the same as finance. If there was a month or a quarter or a year where you didn't perform financially, your job might be at risk. And we should be thinking that security has that same gravitas to it. That doesn't mean you need to be the expert. It does not mean that you need to know how to break a system. Not at all. But it means that you need to know that this is your job, even if you're not the person doing it, just like finance is your job, even if you're not the person doing the accounting. So, number one, leadership. Every company that we've seen runs into that.

Number two, the stage at which you think about security, and this is a big one. Most companies don't think about security until way after deployment. They say, let's design the solution, let's build it, let's roll it out, let's see how the market reacts, let's iterate on it, let's think about security in version two. And what winds up happening when you do that is not only are you deploying vulnerable solutions, but you're actually signing up for a tremendously painful road to fix those issues later.

We analyzed our own numbers from our customers based on when they actually started engaging security, and it's 25 times more expensive to deal with a security vulnerability after deployment for something that was, say, a requirements- or design-level issue. So, 25 times the number of hours to fix it. That's not writing checks to consultants. That's not paying me 25 times. That's you paying your developers, and it's taking them 25 hours to do something that should have taken them one. That's enormously impactful to companies, and most companies don't realize that. So leadership is one. The stage at which you think about security is two.
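As a back-of-the-envelope illustration of that multiplier: only the 25x figure comes from Ted's numbers; the hourly rate and hour counts below are assumed purely for the arithmetic.

```python
# Hypothetical numbers for illustration; only the 25x multiplier
# is from ISE's customer data as Ted describes it.
DEV_RATE = 100        # dollars per developer hour (assumed)
FIX_AT_DESIGN = 1     # hours to fix a design-level flaw early (assumed)
MULTIPLIER = 25       # post-deployment cost multiplier (from the transcript)

early_cost = FIX_AT_DESIGN * DEV_RATE
late_cost = FIX_AT_DESIGN * MULTIPLIER * DEV_RATE
print(f"Fix at design time: ${early_cost}; after deployment: ${late_cost}")
# The gap compounds across every design-level flaw deferred to version two.
```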

And then the third, and I know that you were asking for technical areas within the technology and I'm not giving you technical answers, is this third piece, which is getting it done right. In fact, we talked about my book earlier; the whole reason I wrote this book, the subtitle is actually 'How to Do Application Security Right'. Even companies who try to do security right, and there's tons of them, and I'm sure many of your listeners fall in this category, where they're like, yeah, man, I pay for annual penetration testing, I do this and I do that, and they can list the initiatives that they do. Those companies usually are doing it wrong. Not because they're stupid, not because they're ignorant, not because they're not trying; in fact, they're doing all the right things. It's the execution that's failing. And most companies don't realize how the execution is failing.

That's why I wrote this book. Because if people are trying to do it right and they're still doing it wrong, it's like going to the gym; I used fitness as a metaphor earlier. It's like you go to the gym and say, hey, I've got to get stronger in my chest, so I'm going to do a bunch of pushups. But you're not doing pushups at all. You're doing situps. Someone's going to say, that is not going to help you get to your goal. Your goal is to have a stronger chest, and you're not even touching your chest. And people say, but I'm at the gym. Isn't this the right thing? And it's no, the execution is really, really what matters.

So those are the three things that I'd definitely recommend. As far as where the issues commonly arise, that depends largely on the different systems, their use cases, their threat models, etc.

Erik: So I know we want to do a couple of deep dives into actual cases. Is there one top of mind that you can walk us through end-to-end, from first contact with the client through detecting and resolving the security issue?

Ted: Why don't I share one from research, because I can talk about some of the details. We did a study, quite a lengthy study, focusing on healthcare, and we were looking at medical devices. What we were really interested in was how attackers could cause harm or fatality to patients. That was something we saw wasn't really studied very well in healthcare, because people were really studying things like privacy of information, as opposed to patient safety.

So the first thing we did was reach out to a bunch of hospitals, and we said, hey, here's what we want to do: we want to hack some of your medical devices. Needless to say, most of those calls were not returned. So it took about six months, I think, to get a few hospitals on board. We wound up getting about a dozen across the United States to say, yeah, we want to do that, because they care about patient safety, and because, frankly, we were going to be giving them a few hundred thousand dollars' worth of free work. We said, hey, just give us access; you won't pay for any of it; we just want to be able to publish it.

So then, once we had organizations we could work with, we started with, of course, threat modeling, like I was describing before. In particular, we were threat modeling around patient monitors, which are the devices that go by the bedside to report the vital signs for the patient. Once we had completed the threat modeling exercise, we started exploring the different attack surfaces.
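One output of a threat modeling exercise is an inventory of attack surfaces ranked by who can reach them, which is Ted's earlier 80/20 point in miniature. The surfaces and attacker-population estimates below are invented for illustration and are not from the actual patient-monitor study.

```python
# Hypothetical attack surfaces for a bedside patient monitor.
# "reachable_by" is a rough size of the attacker population that
# can touch each surface; all figures are assumed for illustration.
surfaces = [
    {"name": "physical ports (USB/serial)", "reachable_by": 25},
    {"name": "hospital Wi-Fi interface", "reachable_by": 5_000},
    {"name": "internet-facing management API", "reachable_by": 5_000_000_000},
]

# Prioritize the surfaces the largest attacker pool can reach.
ranked = sorted(surfaces, key=lambda s: s["reachable_by"], reverse=True)
for s in ranked:
    print(f'{s["name"]}: ~{s["reachable_by"]:,} potential attackers')
```

With limited budget, this ordering says to spend first on the remotely reachable surface, exactly as Ted argued earlier.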

And one of the things we found was that we could bypass authentication on this particular device. Authentication is essentially logging in, and there was a technique where we could get past it, so all of a sudden we could be interacting with the device without any credentials. That's pretty bad. But where it got catastrophic was when we found that you could perform what's called Remote Code Execution. Remote Code Execution essentially means that, without being at the device, I can send instructions to the device and make it do whatever I want it to do.

And we talked before about the 80/20 rule: where's the worst case scenario? The worst case scenario is someone being able to do something like this. From anywhere in the world, they can issue a command and the device will respond. So the combination of these two things meant that an attacker from anywhere in the world didn't need any credentials, and they could make the patient monitor do what they wanted it to do.
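The transcript doesn't reveal the device's actual flaw, but one common shape of an authentication bypass is a server-side check that trusts something the client controls. The sketch below is deliberately contrived to show that pattern and its fix; the function names, header, and token are all hypothetical.

```python
# Contrived authentication-bypass pattern: the broken check trusts a
# client-supplied header instead of verifying a server-issued credential.
VALID_TOKENS = {"s3cret-token"}   # hypothetical server-side token store

def is_authorized_broken(headers: dict) -> bool:
    # BUG: any client can simply send "X-Authenticated: true"
    return headers.get("X-Authenticated") == "true"

def is_authorized_fixed(headers: dict) -> bool:
    # Verify a credential the server issued, not a client assertion
    return headers.get("Authorization") in VALID_TOKENS

attacker = {"X-Authenticated": "true"}   # request with no real credential
print(is_authorized_broken(attacker))    # True: authentication bypassed
print(is_authorized_fixed(attacker))     # False: request rejected
```

Once a check like the broken one guards a command endpoint, the "no credentials needed, from anywhere" scenario Ted describes follows directly.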

So from a research perspective, that was interesting, because it's not supposed to happen that way. But from an impact perspective, it raised the question: does this matter? Okay, so we can mess with the patient monitor, but it's not attached to the patient, so why is this significant? So then we started expanding on the different attack scenarios: what could you do with that?

And there were two really precarious scenarios. One is triggering false alarms. This meant that we could make the patient monitor say that an otherwise stable patient is having a cardiac event. Now, a few things happen when all those alarms go off. I don't know if you've ever been in a hospital, but all these alarms will signal nurses to come from the nurse station to the patient's room. So one bad side to that is, of course, that it distracts nurses from patients who need care to this patient who does not need care. That's the best case scenario: we wasted their time and drew care away from someone who needed it.

Worse, though, would be, let's say, that there was some sort of rush, or they didn't follow certain safety protocols, the patient was maybe non-responsive, and they just administered the electric paddles. Now, there are safety mechanisms in place so that wouldn't be their first inclination. But in a busy healthcare setting, it's not out of the realm of possibility. And if you administer electric paddles to a patient who doesn't need it, that's going to really hurt or potentially kill that patient. So that was the first scenario, triggering false alarms.

As bad as that is, the opposite scenario is even worse, and that's disabling the alarms. That would be where a patient does need care, goes into that cardiac event, and it doesn't signal the nurses' station. So now you've got a patient suffering, and time is of the essence in something like that. The patient needs that care and doesn't get the intervention, and that's going to hurt or potentially kill that patient. Now, there are regular rounds that the nurses do. But if the timing of the rounds doesn't line up with when the patient needs care, the patient is out of luck.

And so the next step was to summarize the issues, report them back to the impacted organizations, both the medical device manufacturer and the hospital, and articulate to them how to remediate these issues. Then it's on them ultimately to fix it. And we've never actually gotten them to tell us whether they fixed the issues or not. So I'm going to assume they didn't, because they would be proud to say, hey, we fixed it. Do you want to come and double check that we did it right?

So in that case, that was research. And you can actually go download that study; it's called 'Hacking Hospitals.' It's on our website. I believe the URL is ise.io/hospitalhacker, something like that, or any of your audience can email me, and I'll send it to them. So you can read all the details about how we did that.

Erik: Yeah, we'll put that link in the show notes so people can track it there. Any other cases you'd like to share with us?

Ted: Well, maybe I'll leave you with one last IoT story. This was actually the origin story of how our company got its beginning, and the reason I'm telling it is that it's just cool. Back in 2005, my business partner and a few of his colleagues were in the Ph.D. program at Johns Hopkins. And they heard about this system called the immobilizer system, an anti-theft mechanism in automobiles. Essentially, the idea is that you've got your key, and if you ever wondered why your key has some bulk to it, more than just the metal piece you would insert, that's because there's a chip in that key. And that chip communicates with the onboard computer in the vehicle.

And the immobilizer system, essentially, is there to make sure it's the authentic key. You could present a key, and if it's not the authentic key, the immobilizer system will immobilize the vehicle; it will actually make it not work. Well, that system was considered to be unhackable. It was just widely stated that it couldn't be defeated. And Steve and his buddies said, challenge accepted.

So they set out to study this. It took a few weeks to reverse engineer the cryptographic algorithm, a few more weeks to build a weaponized software radio. And then a few weeks later, they were proving the concept by starting a Ford Escape without the authentic key, just with this weaponized software radio and a mechanical copy of the key. It shouldn't have worked. And that was a cool moment, and it certainly launched our company, because when they published the research, everyone wrote about it. That would be cool research even today, but it was happening many years ago, when people weren't publishing research like that.
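A keyspace on the order of 40 bits, like the one mentioned later in this story, is small enough to search exhaustively once the response function is known. The toy below shows the brute-force idea only: the "cipher" is a stand-in built from a hash, the keyspace is shrunk to 16 bits so it runs in well under a second, and none of it is the real transponder algorithm.

```python
import hashlib

def toy_cipher(key: int, challenge: int) -> int:
    """Stand-in for the transponder's challenge-response function.
    NOT the real algorithm; purely illustrative."""
    data = key.to_bytes(2, "big") + challenge.to_bytes(4, "big")
    return int.from_bytes(hashlib.sha256(data).digest()[:4], "big")

SECRET_KEY = 0xB3A2            # known only to the key fob and the car
challenge = 0xDEADBEEF         # car's random challenge, observed over the air
response = toy_cipher(SECRET_KEY, challenge)   # fob's reply, also observed

def brute_force(challenge: int, response: int, keybits: int = 16) -> int:
    """Try every key until one reproduces the observed response."""
    for k in range(2 ** keybits):
        if toy_cipher(k, challenge) == response:
            return k

recovered = brute_force(challenge, response)
print(hex(recovered))
```

With a recovered key, a software radio can answer any future challenge the car issues, which is essentially what made the mechanical copy plus radio sufficient to start the vehicle.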

And so companies came calling, and they said, hey, we have our own challenges, can you help us? The company has never looked back. But the funny final outcome to that story was that when we reported it to Ford, they didn't actually believe us. We sent them the write-up, all the technical summary, and we sent them a video of us literally doing it. So we're on this phone call with them, and they're saying, that's cool you guys did that, but there's a little more to it than that.

And the reason I'm telling this part of the story is that's a pretty common belief that people have about their systems, especially intelligent people who build sophisticated systems, and I would imagine many of your listeners fall into that category. They're intelligent, and they build cool things. And people like that can often become blinded to what they don't know. I don't mean to say that any members of your audience are necessarily blinded. But these people we were dealing with were very smart and were definitely blinded, and they kept saying, there's more to it than that.

And we're thinking, didn't we show you the video? Didn't you read this thing? And then we said, wait a minute. And we asked them about this trivial bit, a little piece of error-correcting code. It was like 40 bits. It was so small, such an insignificant part of the research that I don't think it had more than maybe one sentence in the study. We said, you're not talking about that, are you? And they said, hold please. And they put the phone on hold. What followed was what Steve refers to as his favorite five minutes of silence in his life. They obviously were on the other end saying, oh man, these guys hacked our car. And then they came back and said, alright, we're coming to Baltimore tomorrow.

Erik: Well, this market is probably going to keep you and a lot of other companies busy once we start seeing some self-driving vehicles properly on the road. The hackers and the white hats are going to have a lot of fun and stay very busy trying to keep us all safe. Ted, I really appreciate you taking the time to walk through this with us. Just one last question from my side: for folks in our audience who want to learn more about what you're doing, and maybe look at whether you can help them, what's the best way for them to reach out either to you or to your team?

Ted: Yeah, so I suggest three things. Number one, connect with me on LinkedIn. I'm very, very active; I'll respond to any of your DMs. So, Ted Harrington, you can find me on LinkedIn. That's easy. Number two, you can email me. My email address is super easy: it's just ted@ise.io, as in Independent Security Evaluators, ISE. And the third thing is to read my book. You can go to hackablebook.com and join the waitlist; it's coming out very, very soon. Pretty much everything that I advocate is in there. And of course, that website is another way you can contact me. I will respond any way that you contact me, so happy to help however I can.

Erik: Thank you, Ted.

Ted: Thank you so much for having me.

Erik: Thanks for tuning in to another edition of the Industrial IoT Spotlight. Don't forget to follow us on Twitter at IotoneHQ, and to check out our database of case studies on IoTONE.com. If you have unique insight or a project deployment story to share, we'd love to feature you on a future edition. Write us at erik.walenza@IoTone.com.
