Hardware Security Set to Grow Rapidly

As vulnerabilities become more prevalent and better understood, industry standards will need to keep pace.

Experts at the table: The hardware security ecosystem is young and relatively small, but it may see its first boom in the coming years. As companies begin to recognize how vulnerable their hardware is, industry standards are being defined, but those standards will have to leave engineers room to experiment. To chart the best path forward, Semiconductor Engineering sat down with a panel of experts at the Design Automation Conference in San Francisco: Andreas Kuehlmann, CEO of Cycuity; Serge Leef, head of secure microelectronics at Microsoft; Lee Harrison, director of Tessent automotive integrated circuit solutions at Siemens EDA; Pavani Jella, vice president of hardware security EDA solutions at Silicon Assurance (representing IEEE P3164); Warren Savage, researcher at the University of Maryland's Applied Research Laboratory for Intelligence and Security and currently principal investigator of its independent verification and validation (IV&V) team; Mark Tehranipoor, professor at the University of Florida; Farimah Farahmandi, professor at the University of Florida; Mike Borza, scientist at Synopsys; and Marc Witteman, CEO of Riscure. What follows are excerpts of that conversation.

SE: What is the current state of the hardware security ecosystem?

Harrison: Safety has become quite mature. There are plenty of tools and technologies to automate much of the safety side. Security, by contrast, is in its infancy. There are experts, but the work is still very hands-on, and those experts work only on the most critical things. It is not yet a commodity. It's still a very specialized niche.

Borza: There are additional dimensions that come with security that go beyond what safety covers.

Tehranipoor: The ecosystem is young, but it is literally dictated by the amount of money companies, application developers, and the government are willing to pay. When DFT [design for test] came along, it added about 5% overhead, so companies said, "No way." But then it happened, and now everyone uses DFT. When you think about manufacturing defects, the problem is quantifiable. But when it comes to security, we are dealing with human intelligence, which is not quantifiable and is incredibly difficult to model. Everything we are talking about here is difficult to model. There was a time when there was no market. It was $0, and then all those incidents happened. There were responses from companies and from DARPA in 2008, 2014, 2016, and 2018, when everyone came in and invested money. In fact, 2018 was a critical year for security, as we learned from the Bloomberg story. I received dozens of phone calls asking where the solution was. After talking with all of those customers, the question came down to one thing: What percentage of your total verification budget are you willing to give up for security? The consistent answer I got from each of them was not 30%, not 40%. It was between 5% and 10%, and for some, 15%. We took the average of all those answers and came up with 10% as the number most of those companies are willing to spend. That literally dictates what the ecosystem looks like. Of course, this is not fixed. Why? Because the next threat may push that figure from 15% to 20%. And the threat after that could push it to 30%.

Kuehlmann: It took 20 years.

SE: From what we’re hearing, that’s going to happen and it probably won’t take another 20 years.

Kuehlmann: That's a fundamental difference. I understand manufacturing and all that, but those distribution curves (manufacturing, functionality, and power) are part of a continuous distribution with small excursions. Cyberattacks are a step function, and we basically cannot design for them in an economic sense. Security is very difficult to quantify economically. You either need that black swan event, or some sort of top-down regulation that compels everyone to act. The biggest forcing function we have is regulations and standards. If you look at the automotive sector, the market essentially demands that every chip be ISO certified, so everyone has to address safety. Before that, no one cared. No one cares unless there is a real driver in the market. So the state of the ecosystem is changing, in some sectors more than others.

Savage: Mark, is your 5% to 15% additive, or part of the existing cost?

Tehranipoor: This is money the company is willing to take out of its existing verification budget and set aside for security. It is not additive. It may be just the license cost. Let me give you some real numbers. The verification market today is worth $1.2 billion. If you include emulation, that's about $2.2 billion. By our assessment, the size of the current market for security verification is approximately $200 million to $300 million. Companies are not willing to spend more. The standards, the regulations, another major incident, it will all add up over time. But I tell everybody, "Hey, don't forget that this number was zero at one point. At one point, in mid-2015, we only had $10 million to $20 million."

Farahmandi: That awareness already exists for security, even setting aside a catastrophic event that would motivate everyone to invest more. Because the possibility exists, and everyone can anticipate that those threats and attacks may occur, we are seeing more and more investment from large companies in security. That is most likely because their customers do not take security lightly, especially when it comes to autonomous vehicles. There are many standards the automotive industry has required. Cost is an issue, but we see many EDA solutions being developed. Caspia Technologies, IQ3, Riscure, we're seeing more and more companies investing in EDA solutions. In academia, we are seeing the development of vulnerability databases, because that resource is useful when you need to integrate security into the design. From the education and research point of view, we believe AI can be a great help in security verification, and AI-based EDA solutions will reduce the security burden.

Witteman: That's true. Security depends largely on standards and regulations. We see it all the time. Products made for unregulated markets are very weak from a security point of view, while products requiring security certification are at a higher level, and the certification process itself pushes them even further. But why do regulations exist? There are two reasons. One is loss, and the other is the fear of loss. A typical example is pay TV. We did a lot of business there because pay-TV companies were losing money to piracy ($20 billion every year), so they needed to invest in making their products more secure. The other market, where it's more about the fear of loss, is payments. Banks are very aware of the damage an incident does to their brand. They need to protect that brand, so they will invest in security certifications just to avoid security incidents. Today we see many more examples like these.

Leef: The elephant in the room is this. As someone who tried to sell security products between 2014 and 2017, my view is that selling security is like selling vitamins. You're touting a cure for an unquantified, abstract threat. It's entirely different from selling, say, cancer drugs, where there's an imperative and a timeline. How do you create demand in this space? It turns out that what you want to do is find the analog of patients with a genetic predisposition to cancer, who are already aware of it and possibly scared. They are the ones who can form the early markets.

Borza: We actually have some of that in the security area. Some people in certain market segments have responded proactively to these issues. There are others who are definitely behind the curve and are still burying their heads in the sand about their need to do anything. Only in retrospect, after a major attack, do people begin to understand what caused it. That is part of why they cannot mentally quantify the threat they take on by not addressing the problem at design time, which is several years before the attack occurs. Only after they have experienced a series of things like this do they realize that it is like buying insurance. You are insuring against something failing in the future. That's another of the challenges. If you are really smart about your security work and prevent successful attacks, what is the cost of the attack that didn't happen? Most people can't know that, unless there is a counterpart somewhere in the business that was successfully hacked while you were not, because you anticipated the attack and were able to prevent it.

Leef: I would also like to echo what Mark said about metrics. When I was thinking about starting the AISS program at DARPA, I was asked what the measures of success would be. I said, "Well, the situation is pretty terrible now, and it will be less terrible when we're done." DARPA responded, "That's not how we do it. The way DARPA does it is: here's the state of the art, here's the desired end state, here are the technical challenges, and here are the approaches to solving those challenges. So what are the numbers for the current state and the desired state?" We spent many hours trying to quantify security.

Tehranipoor: I can add a point to what Serge said earlier, because I was aware of his work from 2014 to 2017. I don't know where we are between vitamins and painkillers, but we have moved along that spectrum through some of the major attacks that have taken place. We are not yet a painkiller. The pain will come, for the reasons Andreas gave. Something has to happen for us to get more regulations. When was the last time design and verification teams did something unless somebody required it? If you look at DFT, DFR, DFx, somebody had to require it.

Leef: Don't overstate the regulatory point. When I started putting this together at DARPA, I asked, "Aren't there already regulations that ensure the security of chips used in defense applications?" Someone in the office looked through all kinds of regulations and found that yes, in black and white, it says defense contractors are expected to deliver something compliant with 2662. But I have not found any defense contractors who pay attention to it. There is a regulation, but no one enforces it, because if you say to Raytheon, "Hey, you have to comply with this," they say, "Oh, yeah? Then that plane we were talking about is going to cost you double."

Kuehlmann: The challenge is that regulations, at the highest level, will necessarily have to protect entire computing systems, including software, hardware, and so on. That is not yet resolved.

Tehranipoor: This is a critical issue. We saw the standard come out. It was developed by seven or eight people in industry. It landed on my desk, and I asked some of our researchers to examine it. Within three months, we broke it. We told all the companies involved and said, "Hey, we are going to write a paper about this. The thing is, we can develop a standard for manufacturing defects in terms of performance, but be careful with requirements, be careful with standards, be careful with that kind of thing. If it's just a requirement or a policy, nobody cares." That was the P1735 standard. We published a CCS paper, it caused quite a stir, and what happened? The working group came back, and this time they brought in one of our team members so he could actually fix the problem. It's a point of pride, because we prevented something serious that could have happened later, and we worked with companies to fix it. But what I'm saying is, be careful what you wish for, because security is a different animal from manufacturing defects and the other things we do. If you say we have a standard for manufacturing defects, no one will complain. As soon as you say we have a security standard, researchers become interested, because we need to try to break that standard.

SE: Regarding IEEE P3164, will it be the same?

Jella: The standard does not prescribe security. It provides a framework of things to think about. It essentially suggests two methodologies for threat modeling. If you're dealing with complex IP, how do you handle it?

SE: Should these be recommendations, or do we need standards?

Tehranipoor: You can't force attackers to adapt to what we say. We are the ones who have to adapt to what the attackers do.

Borza: It's critical that we don't try to dictate exactly what the solutions are. The nature of this is dynamic. There is also the question of the value of what is being protected versus the cost of that protection.

Tehranipoor: Like it or not, we live in a world where the motivation to break things is incredibly strong. 75% of cybersecurity researchers work on attacks, 25% on defense.

Jella: The framework is not prescriptive, but when we try to implement it, it becomes very abstract. We still need to find mathematical algorithms to quantify security issues, and that is the hardest part. Every sector will struggle with this for some time. If you want a metric, such as a power metric or a signal-integrity metric, there are many quantifiable things. It's hard to do that in the threat modeling space. It's abstract, because it's very dynamic. It is a human probing the attack surface, which makes it very complex and very difficult to predict.

Borza: The other thing about P3164 is that its goal is really to create a transparent way for a vendor or IP author to describe the security properties of their product, whatever those properties are. It is not a value judgment on whether they are good or bad, but simply a statement of what they are, for someone who would incorporate the IP into a product. So for an Intel, an AMD, or a Qualcomm, what do they get when they buy a particular piece of IP? One of the things that has been missing is a transparent way to talk about that. The specification includes language and discussions of threat models, as well as definitions of other kinds of knowledge objects. So there is a definition of assets, a definition of threats, and how they are documented. Again, which of those threats are mitigated and which are not? It could be entirely appropriate for someone to supply IP that does not mitigate a threat, even if the threat is identified, because that threat may not apply to the vendor's intended use case. Or perhaps the right place to mitigate the threat is during system integration rather than in the IP, either because it is too expensive or impractical to implement in the IP. But you can wrap the IP at integration time with something that mitigates the threat.

Savage: Mike, you've touched on my new security hobby horse. There are many reductionist philosophies when it comes to security. A secure leaf does not make a secure top. Security is usually a system problem, not a leaf-node problem.

Borza: The problem is that as an IP provider you are selling leaves, so you have this component that goes into something larger, something wrapped into a system that has a functional mission independent of security. The system itself must be secure to ensure that the functional mission is carried out.

Savage: That's a great point, because you can take vulnerability awareness to the next level down. You can do this for each subsystem, and ultimately you can work at the subsystem level instead of the system level.

Borza: It's kind of the antithesis of what we talked about before, where threat modeling was top-down. This is a case where those things come from the bottom up. They are designed to be integrated, and they will meet that top-down evaluation.

Witteman: We also see this with certification. Certification is often where things come together, so the system consists of an SDK from vendor A, a back-end system from vendor B, and a fingerprint sensor from vendor C. Each of those parts is tested separately, and the corresponding security guidance documentation tells you what to do. There are residual threats, but those are someone else's problem. The party that integrates it all, and that ultimately brings a payment solution to market, bears the financial risk and the final responsibility if there is a problem. So, at least in the certification world, it is not an unfamiliar philosophy to see things that way.

Borza: The idea with those parts is that you can provide documentation for an "out of context" component, as it's called, that is independent of the overall system integration. The idea is that those threats come into the system flagged as out of context and unmitigated. Sometimes, when you finalize the system, there are still outstanding threats that you are aware of, and you rationalize them by saying, "We don't think this risk applies to our particular application. No one can physically access this device, so we don't have to worry about side channels that you can only measure if you are physically close to it."

Jella: I just want to explain what the standard is doing today. There are two things the standard conveys. The first is helping system designers or integrators identify assets using the methodologies recommended within the standard. The second is, once we identify the assets, how do we communicate them to the rest of engineering? For example, security assurances, such as the security parameters passed to the design and the security objectives. What are the security assets? What are the security attack points? What will the vectors look like if an attack occurs at this particular point? I would call all of this security assurance. Today we have what is called an IP package that we are all familiar with. It comes with verification IP as part of it: test benches, IP, and so on. Security assurance can be delivered the same way, as a security package, and all of this will be part of it. We're trying to define an industry standard for expressing all those security assurances in a format that system integrators can use. It should be something that IP providers can adopt smoothly, and that EDA vendors can use to standardize the schema format. That is what we are working on. The abstract part I mentioned is asset definition. Defining the format in which we deliver the assurance is what we are executing on now. That is feasible at this point; however, the definition of assets remains somewhat abstract when you need to take it into the mathematical realm. So it depends on the EDA vendors, and how they take this framework and adopt it in their tools.
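The "security package" described here (assets, objectives, attack points, and mitigation status bundled alongside an IP deliverable) can be made concrete with a toy sketch. This is purely illustrative and not drawn from IEEE P3164; every class and field name below is a hypothetical stand-in for whatever schema the standard ultimately defines:

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    """One security asset declared by the IP author (hypothetical schema)."""
    name: str
    objective: str                              # e.g., confidentiality, integrity
    attack_points: list = field(default_factory=list)
    mitigated: bool = False                     # handled inside the IP?
    rationale: str = ""                         # why an unmitigated threat is acceptable

@dataclass
class SecurityAssurancePackage:
    """Security metadata shipped alongside the usual IP/verification package."""
    ip_name: str
    assets: list

    def unmitigated(self):
        # Threats the system integrator must handle at integration time.
        return [a for a in self.assets if not a.mitigated]

pkg = SecurityAssurancePackage(
    ip_name="example_crypto_ip",                # hypothetical IP name
    assets=[
        Asset("key_register", "confidentiality",
              attack_points=["scan chain", "power side channel"],
              mitigated=False,
              rationale="side-channel hardening deferred to system integration"),
        Asset("firmware_rom", "integrity",
              attack_points=["bus probing"], mitigated=True),
    ],
)
print([a.name for a in pkg.unmitigated()])      # -> ['key_register']
```

A system integrator could then query the unmitigated list to see which declared threats remain open at integration time, which is exactly the hand-off from IP author to integrator that the panel describes.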

Borza: You hit the nail on the head, Pavani, because it really is about having a language that can be used across teams to integrate those things. If you can do that, it's conceivable we can bring a lot more automation to the overall system security problem.
