Artificial Intelligence: A relative reality

Schellman & Company Principal and Cybersecurity Practice Leader Doug Barbin was recently interviewed by SC Magazine for its SC Solutions Technology Research Report, sharing his thoughts on whether vendors' products embedded with AI deliver on their promises, or whether AI today is still a work in progress. Read a portion of the article below, or read it in its entirety on the SC Magazine website.


Written by Esther Shein

Introduction

We have all heard the expression, “Go big or go home.” Today’s conventional wisdom says if vendors are not considering implementing artificial intelligence (AI) into their security software, they might as well not bother innovating. The reality is there is no silver bullet that solves all of the CISO’s security needs.

Pundits today tell CISOs that pretty much everything from endpoint protection and perimeter monitoring to security information and event management (SIEM) and malware protection will include AI components, which are touted as the panacea for stopping attacks, protecting your network from advanced persistent threats, enhancing your threat intelligence and making all your security technology smarter, faster and more effective. That’s a big promise, but can AI deliver?

SC Media set out to differentiate what is marketing jargon from what CISOs actually can do today with AI-enhanced security products. To determine the veracity of the claims, we spoke to CISOs and other security leaders and analysts to find out which products embedded with AI deliver on their promises. 

Then we asked vendors whose products were selected in five categories to describe briefly how their tools actually use AI. Those categories include SIEM, Endpoint Protection, Network Performance Monitoring/Intrusion Detection, Antimalware, and Antiphishing Software. We also asked industry experts questions such as what should CISOs expect from AI offerings, how do they ensure they are getting what they expect, and what should they know before they buy? 

There is no doubt

Artificial Intelligence (AI) deployments are on the rise, but opinions remain mixed about what this term means in practice and how viable it is at present, especially when it comes to security. Research firm Deloitte broke out AI adoption into four distinct categories in its 2018 State of AI in the Enterprise report: machine learning (ML), deep learning, natural language processing, and computer vision. Fifty-nine percent of respondents reported using enterprise software with AI. Additionally, 44 percent said the benefits of AI are that it enhances current products.

The same report found that “executives are commonly concerned about the safety and reliability of AI systems as well.” 

Specifically, a little over half of the respondents said they are concerned about the cybersecurity vulnerabilities of AI, while 43 percent of respondents rated “making the wrong strategic decisions based on AI/cognitive recommendations” in their top three concerns. 

Thirty-nine percent cited the failure of an AI system in a mission-critical or life-or-death situation.

The promise of AI

Ultimately there appears to be no single definition of AI, as marketers and analysts tend to use AI and ML interchangeably and often with slightly different connotations.

“Placing strategic decisions or mission-critical actions entirely in the hands of an AI system would certainly entail special risks,’’ the report says. “Entrusting AI systems with such responsibilities remains rare today, however.”

Yet vendors continue to promote their AI capabilities. Some of the descriptions vendors use for their products include: “AI baked in,” “unlock the potential of your data with AI” and “zero false positives.”

Many tout the use of AI to address the dearth of cybersecurity professionals with the promise that AI can replace Level 1 and 2 tech support engineers — and perhaps it can.

“Both artificial intelligence and machine learning are being applied to cybersecurity use cases to address fundamentally the same problem: the cybersecurity workforce shortage,’’ notes an IDC Market Perspective Artificial Intelligence and Machine Learning in Cybersecurity: Creating Meaning with the Terms from February 2019. “One is looking to make security professionals more effective; the other is looking to improve efficiency.”

AI is also increasingly being viewed as another tool in the arsenal for combating the increasing sophistication of cybercriminals since current security controls are not doing enough to defend networks, as evidenced by the rise in breaches.

At the same time, though, it can be hard to know which products offer the promise of AI and ML and which are hype. “Unfortunately, the recent uptick in marketing hyperbole around the generic terms ‘machine learning’ and ‘artificial intelligence’ often gets in the way of understanding the benefit-risk trade-offs,” observes Gartner, in the 2018 report Lift the Veil on AI’s Never-Ending Promises of a Better Tomorrow for Endpoint Protection.

What you should know

Unlike a decade ago, when an organization bought a security information and event management (SIEM) application and its data stayed on-premises, today a large number of systems are cloud-based, and cybersecurity vendors need larger datasets to implement effective AI. The caveat "buyer beware" is a good reminder as organizations begin to deploy more cloud-based security offerings embedded with AI, says Paul Hill, a senior consultant at SystemExperts Corp., a Sudbury, Mass.-based security and compliance consultancy.

Customers should be asking their vendors how they are collecting data if they are planning to use it to feed machine learning algorithms, he advises.

Another question they need to ask is whether their data will be anonymized and filtered so that no sensitive data will be sent to a third party. “It’s not talked about much,’’ Hill observes. “And a lot of end-user companies … go buy a product and say, ‘hey, it has AI’ and turn it on, but does that customer understand that some data may be sent off to that vendor?”

In terms of the vendors, Hill wonders whether they are proactively explaining to customers how they are collecting their data and whether customers understand that their data might be used.

"I’m a little skeptical any time I hear ‘AI baked in,’ and if [vendors] can’t tell me what that means fairly quickly, I’ve moved on."

There is no denying that AI means different things to different people. “I’m a little skeptical any time I hear ‘AI baked in,’ and if [vendors] can’t tell me what that means fairly quickly, I’ve moved on,’’ says Doug Barbin, principal and cybersecurity practice leader of Schellman & Company, LLC, an independent security and privacy compliance assessor in Sacramento, Calif. “AI is a very, very broad term. It’s like saying ‘cloud,’’’ he adds. “Is it machine learning, is it natural language processing, is it robot process automation?”

How to ensure your AI product is performing as promised

Marketing teams tend to create FUD (fear, uncertainty, and doubt), says IDC Research Director of Worldwide Security Products Chris Kissel. There are two simple metrics an AI system should be able to report, says Kissel: the mean time to detect an incident and the mean time to respond to it.
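The two metrics Kissel names are straightforward to compute once incident timestamps are recorded. A minimal sketch, using hypothetical incident records (occurrence, detection, and resolution times are invented for illustration):

```python
from datetime import datetime, timedelta

def mean_time(deltas):
    """Average a list of timedeltas."""
    return sum(deltas, timedelta()) / len(deltas)

# Hypothetical incident records: (occurred, detected, resolved)
incidents = [
    (datetime(2019, 5, 1, 9, 0), datetime(2019, 5, 1, 9, 45), datetime(2019, 5, 1, 12, 0)),
    (datetime(2019, 5, 3, 14, 0), datetime(2019, 5, 3, 14, 15), datetime(2019, 5, 3, 15, 0)),
]

# Mean time to detect: gap between occurrence and detection.
mttd = mean_time([d - o for o, d, _ in incidents])
# Mean time to respond: gap between detection and resolution.
mttr = mean_time([r - d for _, d, r in incidents])
print(mttd, mttr)  # 0:30:00 1:30:00
```

Any tool claiming AI-driven detection should let a buyer pull these numbers out of its own data and compare them across a pilot period.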

Conducting a proof of concept pilot can help set expectations appropriately, observers say.

“The most important thing someone can do is when you go to buy a tool, get three to four vendors and tell them specifically what your terms are,’’ he says. Terms might include: “We want to be able to find an anomaly in this part of the network.” Often, a vendor will tell a security operations center (SOC) team what they should be looking at in their environment, “so a buyer has to own the process,” Kissel says.

“I always recommend people pilot these things first for six to eight weeks,” agrees Mark Horvath, senior research director at Gartner. “It is not enough to just look at the product interface and say, ‘It’s sophisticated, I’m going to buy it.’”

In some cases, piloting the product might mean having to invest in staff or additional infrastructure to get the most out of it, he adds.

Users say the proof of concepts are key. “We test it,’’ says Gary Miller, VP, information security at global outsourcing provider TaskUs, headquartered in Santa Monica, Calif. “In a demo, I would definitely want to sync up Active Directory to the tool to continuously inform it of current credentials, and in testing that in a proof of concept, we’d provide [the vendor] with fake credentials and have them put it out there and see how it searches.”

And Miller adds, “I’d also look to them to prove to me how it works.”

The promise of AI is that it enables automation and more effective decision making, he says, but admits it is a “major challenge” to boil out the false promises from the reality. “A lot of vendors want to explain this advanced algorithm they have that they can’t allow you to see — but trust them, it’s very effective,’’ he notes.

Kissel maintains that there is no “artificial intelligence truly in cybersecurity” right now, citing the Turing Test, which holds that true AI must be equivalent to, or indistinguishable from, human performance.

“It’s early days, the stuff is evolving and evolves rapidly,’’ he says. “The incident response piece has been very visible, and I think, the most mature in AI in having an influence and impact.”

Taylor Lehman, former CISO of Wellforce and Tufts Medical Center and now CISO at Athena Health, says the best proof of concept he did was the time he picked out 13 antivirus platforms “and downloaded every virus known to man, and threw them at each solution and watched them” to see which one caught which virus. “Then you look at what it didn’t block,’’ he says. “The proof is in the pudding.”

There are other considerations, of course, such as how easy a product is to deploy and update, he adds.

By getting deep into a security platform, organizations can understand not only whether an AI-embedded feature set improves the security controls they have in place but also whether there’s an added cost for it, notes Miller.

AI for intrusion detection

There are machine learning techniques that are particularly good at anomaly detection, but the challenge is knowing the underlying assumptions you’re making about the data you have, says Fernando Montenegro, a senior industry analyst at 451 Research.

For example, if your security team is monitoring the network and suddenly sees a spike, there could be a number of legitimate reasons why, he says. These might include a new initiative within the business or a change in how often you decide to monitor, or it could be a new employee going through different databases to learn their job. “There’s a whole slew of things that can be anomalous that aren’t necessarily malicious,” Montenegro says. “So there is a place for machine learning in [identifying] insider threats and anomaly detection, but it’s not the silver bullet people think it is. You’ll get a heck of a lot of false positives or false negatives depending on how you tune your system.”

AI for Phishing

Anomaly/fraud detection and insider threats are good use cases for AI, says Horvath. 
“Machine learning is a great way of reducing the number of false positives you have to deal with,” he says. “You can’t get it down to zero, but you can reduce it.” Surveys indicate that traditional virus attacks are down but phishing is way up, which raises the question: How effective can AI be at spotting phishing attacks?

AI-based antiphishing software looks for patterns either in the email or within the malware, says Horvath. “One of the nice things about AI is it can examine polymorphic malware,” he says, by recognizing very specific parts of the virus that it has learned to identify, little bits and pieces humans would not see. Rather than matching signatures, AI looks at even smaller aspects of the code that it recognizes as malware but that would be invisible to a human and would not show up in a signature scan, he notes.
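One simple way to illustrate Horvath's "little bits and pieces" idea is byte n-gram overlap: a polymorphic variant that rewrites most of itself may still reuse short code fragments from a known sample. A toy sketch, with invented byte strings standing in for real samples (real products use far richer features than raw n-grams):

```python
def ngrams(data: bytes, n: int = 4) -> set:
    """All n-byte substrings of a sample."""
    return {data[i:i + n] for i in range(len(data) - n + 1)}

def fragment_overlap(sample: bytes, known: bytes, n: int = 4) -> float:
    """Fraction of the known sample's n-grams that reappear in the new sample."""
    k = ngrams(known, n)
    return len(k & ngrams(sample, n)) / len(k) if k else 0.0

# Hypothetical samples: the variant pads and renames, but reuses a code fragment.
known_malware = b"\x90\x90\xeb\x1fGETFLAG\xcd\x80payload"
variant = b"JUNKJUNK\x90\x90\xeb\x1fGETFLAG\xcd\x80other"
print(round(fragment_overlap(variant, known_malware), 2))
```

A full signature match on the variant would fail outright, while the fragment score stays high; the trade-off, as the article notes, is that attackers keep rewriting fragments too, so the cycle continues.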

The problem is that as effective as AI is in reducing the current generation of malware, “the people writing the malware are also writing new malware to get at this, so it’s a continuous cycle,” he says.

Research is being done using natural language processing to determine if a sample of text was written by a person or generated by a machine, says Barbin, but he isn’t sure whether that should be called AI or language heuristics.

Echoing Horvath, Barbin says “the same AI tools you can throw at that problem could also be leveraged by attackers to improve the phishing attempts.’’

This is especially true when an attacker is able to gather samples of emails to and from particular individuals, which helps them replicate language effectively, he says.

As always, it comes down to the human element as phishing attempts become more sophisticated, Barbin says.

“It is truly tricky stuff to figure out what is a bot-driven email and what is legitimate,’’ agrees Eric Ogren, a senior security analyst at 451 Research.

TaskUs’ Miller says he believes “phishing is one of those wins for AI that’s very visible to me.” 

AI for Endpoint Protection

Hospitals have thousands of endpoints, and anything that can improve anomaly detection is valuable, especially in a life-or-death situation. So far, most security categories with an AI play are in the infancy stage, while endpoints are in the “toddler years,” according to Lehman, who has been using AI to look at behavior on endpoints as well as on the network.

AI on the endpoint is “the furthest along and most proven in the sense that it works and meets the objectives it sets out for itself,’’ he says.

The next step for Wellforce is to look at an individual’s identity, the identity of the computer being used, and how the system behaves, he explains. That might mean three people on the network all exhibit similar behaviors and then a fourth person appears whose actions are atypical for that type of employee. “The goal of machine learning is to figure out the patterns of data and the volume of data,” determining what is acceptable behavior when more people share data and what is anomalous, Lehman explains.

When Wellforce began using AI-based endpoint protection for monitoring, “we almost immediately got two full-time people back,’’ he adds. Previously, “It was like there was a ghost in the system and once we stuck a product in that was prevention-focused … we stopped it and started seeing a decreased volume of users calling in.” With AI, about 15 percent more of Wellforce’s devices received a higher level of security controls than they previously had, he says.

For all the skepticism Schellman’s Barbin has about broad claims proclaiming the benefits of ML and AI in security, he also finds the technology to be very useful for endpoint protection because of the ability to apply clustering techniques around samples. 
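The clustering idea Barbin describes can be sketched in a few lines: represent each sample as a feature vector and assign it to the nearest known-family centroid. The features and numbers below are invented for illustration; production tools learn centroids from millions of labeled samples rather than hard-coding two.

```python
from math import dist

# Hypothetical per-family centroids over made-up features:
# (section entropy, fraction of suspicious imports, normalized section count)
families = {
    "ransomware": (7.8, 0.1, 0.4),
    "adware":     (5.2, 0.6, 0.8),
}

def nearest_family(sample, centroids):
    """Assign a sample to the closest known-family centroid (Euclidean distance)."""
    return min(centroids, key=lambda name: dist(sample, centroids[name]))

print(nearest_family((7.5, 0.15, 0.5), families))  # ransomware
```

Even this toy version shows why clustering helps on the endpoint: a never-before-seen sample can be grouped with a known family by its measurable properties, without a signature for that exact binary.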

“There is always a trade-off between false positives and false negatives. So there’s always going to be an incident.”

“Where I think it doesn’t help is when people start assuming [AI] can do more than it can do,’’ he says, noting that “There is always a trade-off between false positives and false negatives. So there’s always going to be an incident.”

For example, a SOC operator might get an alert from an endpoint saying it has malware. “If it’s true, great, if false, you as an organization have to spend resources to go in there and check out the false positive” and either ask the user to re-image their laptop or confiscate it while you investigate, he says.

“What this means is there’s a cost to the organization,’’ says Barbin. “Not only are you paying for the time for the security team to handle this incident, but you’re also dealing with lost productivity for the employee.”
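Barbin's cost argument can be turned into a rough back-of-the-envelope model. All the numbers below are hypothetical placeholders; the point is only that false-positive rate multiplies directly into analyst time and employee downtime.

```python
def false_positive_cost(n_alerts, fp_rate, triage_hours, hourly_rate, lost_productivity):
    """Rough monthly cost of chasing false positives:
    analyst triage time plus per-incident lost employee productivity."""
    false_positives = n_alerts * fp_rate
    return false_positives * (triage_hours * hourly_rate + lost_productivity)

# Hypothetical: 1,000 alerts a month, 30% false positives, 2 hours of triage
# at $75/hr, plus $150 of lost productivity per re-imaged or confiscated laptop.
print(false_positive_cost(1000, 0.30, 2, 75, 150))  # 90000.0
```

Running the same model at a vendor's claimed false-positive rate versus the rate observed in a pilot is one concrete way for a buyer to price the gap between marketing and reality.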

On the flip side, that SOC operator might catch an attacker doing something else that they were not looking for. AI works as part of a layered defense, he says.

AI can deal with immense volumes of data, which is just not feasible for a human to look at, Barbin says. There is just too much data coming in and too many malware samples. “Humans can augment the process,” he says, “but you need AI to do some triage on the data.”

AI for SIEM and more

Most SIEMs now come with analytics to detect problems, assemble threat timelines and consolidate alerts, says Ogren. “It is hugely effective.” 

He says he has seen a “resurgence in network security based totally on AI.” “I used to call this ‘network traffic analytics’ but believe NVDR (network visibility, detection and response) speaks better to its value. The network sees everything, providing operations teams insight into all devices using the network,” using AI to detect threats and then to help correlate information to accelerate remediation, he says.

It is still the proverbial early days for NVDR, Ogren adds, “but if you think about it, perimeter threat-facing filters whiff on many advanced threats so the only way to catch active threats is to analyze traffic for signs of threat behaviors. It’s hugely effective.”

The only drawback, he says, is remediation still requires IT people to fix a host somewhere, but security needs the visibility that can only come from NVDR to make good prioritized decisions.

Continue reading on the SC Magazine website >>

About the Author

Douglas Barbin

Doug Barbin is a Principal at Schellman & Company, LLC. Doug leads all service delivery for the western US and also oversees the firm-wide growth and execution of security assessment services, including PCI, FedRAMP, and penetration testing. He has over 19 years of experience. A strong advocate for cloud computing assurance, Doug spends much of his time working with cloud computing companies and has participated in various cloud working groups with the Cloud Security Alliance and the PCI Security Standards Council, among others.
