The “department of no” stereotype would suggest that security teams and CISOs are closing the door on generative AI in their workflows.
Yes, the technology comes with risks, but many security professionals have already experimented with AI, and most don’t think it’s going to take their jobs. In fact, they are well aware of gen AI’s usefulness.
In fact, more than half of organizations will deploy gen AI for security teams by the end of this year, according to the new State of AI and Security survey report from the Cloud Security Alliance (CSA) and Google Cloud.
“When we hear about AI, we assume everyone is afraid,” said Caleb Sima, chair of the CSA AI Safety Initiative. “Every CISO says no to AI; it’s a huge security risk, it’s a huge problem.”
But in reality, “AI is transforming cybersecurity, offering both interesting opportunities and complex challenges.”
According to the report, two-thirds (67%) of security professionals have already tested AI, specifically for security tasks. And 55% of organizations will integrate AI into security operations this year; the top use cases are rule creation, attack simulation, compliance violation detection, network discovery, false positive reduction and anomaly classification. This push is coming largely from leadership, as reported by 82% of respondents.
Contrary to conventional wisdom, only 12% of security professionals said they thought AI would completely take over their role. Nearly a third (30%) said the technology would enhance their skill set, generally support their role (28%) or replace much of their job (24%). A large majority (63%) said they saw its potential for enhancing security measures.
“For some tasks, we’re very happy to have a machine take care of them,” said Anton Chuvakin, security advisor in the office of the CISO at Google Cloud.
Sima agreed, adding that “most people probably think of this as augmenting their jobs.”
Interestingly, however, C-suite executives reported far more familiarity with AI technologies than staff: 52% versus 11%. Similarly, 51% had a clear understanding of use cases, compared to just 14% of staff.
“Let’s face it, most employees don’t have time,” Sima said. Rather, they are consumed by day-to-day work, while their leaders are inundated with news about AI from other executives, podcasts, news sites, newspapers and a host of other materials.
“The disconnect between leaders and staff in understanding and implementing AI highlights the need for a strategic, unified approach to effectively integrate this technology,” he said.
The No. 1 use of AI in cybersecurity today is reporting, Sima said. Typically, a member of the security team manually gathers output from various tools and spends “not a small portion of their time” compiling it. But “AI can do that a lot faster and a lot better,” he said. AI can also be used for repetitive tasks, such as reviewing policies or automating playbooks.
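As a rough illustration of the reporting workflow Sima describes, here is a minimal Python sketch under stated assumptions: the findings, field names and the `llm_complete` client are all hypothetical stand-ins for whatever tools and model a team actually uses.

```python
# Hypothetical findings pulled from scanners or SIEM exports; in practice
# these would come from each tool's API or export files.
findings = [
    {"tool": "vuln-scanner", "severity": "high",
     "detail": "OpenSSL 1.0.2 still running on host web-01"},
    {"tool": "siem", "severity": "medium",
     "detail": "Repeated failed logins from 203.0.113.7"},
]

def build_report_prompt(findings: list[dict]) -> str:
    """Assemble raw tool output into a single prompt asking for a summary."""
    body = "\n".join(
        f"- [{f['tool']}/{f['severity']}] {f['detail']}" for f in findings
    )
    return (
        "Summarize the following security findings as a short weekly report.\n"
        "Group by severity and suggest one next step per finding.\n\n" + body
    )

def generate_report(findings: list[dict], llm_complete) -> str:
    """llm_complete is a stand-in for whatever LLM client the team uses."""
    return llm_complete(build_report_prompt(findings))
```

The point of the sketch is the shape of the task: the tedious part, collecting and formatting output from many tools, is exactly what a model can absorb, leaving the analyst to review the finished report.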
But it can also be used more proactively: detecting threats, supporting detection and response, finding and fixing vulnerabilities in code, and recommending remediation actions.
“Where I see a lot of traction right away is, ‘How do I prioritize these things?’” said Sima. There is a lot of data and there are a lot of alerts. “In the security industry, we’re very good at finding things, but not very good at figuring out which of those findings are the most important.”
It’s difficult to distinguish “what’s real, what’s not, what’s a priority,” he emphasized.
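One way to make that prioritization concrete is a simple scoring pass over incoming alerts. This is a minimal sketch, not Sima’s or the survey’s method; the fields and weights are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    name: str
    severity: int           # 1 (low) .. 5 (critical), from the detection tool
    asset_criticality: int  # 1 .. 5, how important the affected system is
    confidence: float       # 0..1, detector's confidence it's a true positive

def priority(alert: Alert) -> float:
    # Weight true-positive likelihood and blast radius over raw severity,
    # so a confident hit on a crown-jewel asset outranks a noisy "critical".
    return alert.confidence * (0.6 * alert.asset_criticality + 0.4 * alert.severity)

alerts = [
    Alert("Malware beacon from payroll server", 4, 5, 0.9),
    Alert("Port scan against disposable test VM", 5, 1, 0.3),
    Alert("Impossible-travel login, CFO account", 3, 5, 0.7),
]

# Rank highest-priority first, which is the triage order an analyst would work.
for a in sorted(alerts, key=priority, reverse=True):
    print(f"{priority(a):.2f}  {a.name}")
```

In this toy ranking, the low-confidence port scan lands last despite its “critical” severity, which is exactly the real-versus-noise distinction Sima is pointing at.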
AI, on the other hand, can examine an email as soon as it arrives and quickly determine whether or not it is phishing. The model can pull in data such as who the email is from, who it is addressed to and the reputation of the links it contains, all in a matter of moments, while also offering reasoning about the threat and the chain and history of communications. By contrast, that validation would take a human analyst at least five to 10 minutes, Sima said.
“Now they can, with a lot of confidence, say, ‘This is phishing’ or ‘This is not phishing,’” he said. “It’s pretty phenomenal. It’s happening today; it’s working today.”
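A minimal sketch of that triage flow, assuming a hypothetical `link_reputation` lookup (in practice a threat-intelligence service) and leaving the actual model call to the reader’s LLM client:

```python
from email import message_from_string
import re

URL_RE = re.compile(r"https?://[^\s\"'>]+")

def extract_signals(raw_email: str) -> dict:
    """Pull the signals an analyst would check: sender, recipient, links."""
    msg = message_from_string(raw_email)
    payload = msg.get_payload()
    body = payload if isinstance(payload, str) else ""
    return {
        "from": msg.get("From", ""),
        "to": msg.get("To", ""),
        "subject": msg.get("Subject", ""),
        "links": URL_RE.findall(body),
    }

def triage_prompt(signals: dict, link_reputation: dict) -> str:
    """Ask the model for a verdict plus its reasoning, as Sima describes."""
    return (
        "Decide whether this email is phishing. Explain your reasoning, "
        "citing the sender, recipient and link reputations.\n"
        f"From: {signals['from']}\nTo: {signals['to']}\n"
        f"Subject: {signals['subject']}\n"
        f"Links and reputations: {link_reputation}"
    )

raw = (
    "From: it-support@examp1e.com\nTo: cfo@example.com\n"
    "Subject: Reset your password\n\n"
    "Click https://examp1e.com/reset now."
)
signals = extract_signals(raw)
# Stubbed reputations; a real pipeline would query a reputation service here.
print(triage_prompt(signals, {u: "unknown" for u in signals["links"]}))
```

The speedup Sima cites comes from this gathering step: sender, recipient and link context are assembled in moments rather than over the five to 10 minutes a human spends collecting the same evidence.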
There is growing enthusiasm among leaders when it comes to applying AI in cybersecurity, Chuvakin said. They are looking to integrate AI to fill skills and knowledge gaps, enable faster threat detection and greater productivity, reduce errors and misconfigurations, and provide faster incident response, among other things.
But, he added, “we’re going to hit the trough of disillusionment about it.” He noted that we are “close to the peak of the hype cycle” because a lot of time and money has been spent on AI and expectations are high, yet use cases have been neither clear nor proven.
Now it’s a matter of finding and implementing realistic use cases that will prove to be “magical” through the end of the year.
Once there are real-world examples, “thinking about security will change dramatically around AI,” Chuvakin said.
But enthusiasm continues to be mixed with wariness: 31% of respondents to the Google Cloud-CSA survey saw AI as equally useful for defenders and attackers, and 25% said AI could be more beneficial to malicious actors.
“Attackers have the advantage of being able to use the technologies much faster,” Sima said.
Like many others, he compared AI’s evolution to that of the cloud: “What did the cloud do? The cloud allowed attackers to act at scale.”
Instead of going after one specific target, threat actors can now target everyone, and AI will intensify their efforts by allowing them to be more sophisticated and more targeted.
For example, a model can easily scan someone’s LinkedIn account to gather valuable data and create a thoroughly credible phishing email, Sima noted.
“It’s personalization at scale,” he said. “It puts this low-hanging fruit even more within reach.”