(Article originally published in SC Magazine by author Evan Schuman, with commentary from Schellman & Co.'s Doug Barbin.)
At the risk of alienating a high-demand workforce that can jump to a new company for seemingly minor perks such as company-paid cafeterias or flex time with little oversight, CISOs today find themselves with a challenge. To protect their corporations against data breaches from internal and external sources, CISOs have a tool that is effective at identifying breaches but that some employees might find a bit too intrusive: analytics. The move to analytics-based security — be it behavioral, threat intelligence, big data, or one of a myriad of other analytics technologies — could be interpreted as Big Brother watching over employees.
The potential damage that an insider attack can inflict on a business is massive, a reality that has prompted some enterprises to use analytics, keystroke capture, and digital video to track insiders. But is that protection worth the risk of alienating employees and contractors? And are analytics even an effective means of neutralizing insider threats?
When exploring insider threats, it is critical to distinguish between a potential and an actual threat. The potential threat from insiders is significant, given that these are people who already hold legitimate credentials to a company’s systems and who, one way or another, exceed their authority and do something unauthorized, such as sabotaging servers or stealing company data to sell to a competitor.
But the true threat from insiders is a matter of debate, with some experts saying the actual insider attacks seen are few compared with today’s external attacks. Then there is the question of how one defines an insider threat in the first place. Forrester Research, for example, defines an insider threat as any breach that is caused or facilitated by an insider, whether an “accidental insider or malicious insider,” says Forrester Principal Analyst Joseph Blankenship. Forrester considers accidental insider attacks to be ones where the insider had no malicious intent — perhaps an employee accidentally left a port open that an attacker leveraged to gain access, or saved a file to an insecure thumb drive in order to work at home rather than remain in the office.
Using Forrester’s all-encompassing definition, Blankenship reports that insiders were responsible for 24 percent of all data breaches last year. But when limiting the definition to just malicious insiders — the definition commonly assumed in IT and security circles — that percentage drops to closer to 11 percent, he says. That suggests that 89 percent of all attackers were external.
“Some of the vendor marketing may be overblowing the insider threat,” Blankenship says.
IDC uses an insider threat definition similar to Forrester’s, also including unintentional insider acts that facilitate external attacks. “The number goes down pretty dramatically if you start to remove things that are unintentional,” says Sean Pike, program vice president for security products at IDC. “Malicious is always a pretty small number, but they are very impactful because they have so much access.”
Another important component of an insider threat analytics strategy is whether to keep it secret. The “keep it secret” argument focuses on preventing any backlash from employees or contractors who object to being monitored so closely. The “disclose it” argument speaks to deterrence, suggesting that the main reason for launching such analytics is less to catch insider evildoers than to discourage anyone from trying.
Indeed, the deterrence argument is made quite handily by some companies that say they are monitoring employees when they really are not. Danny Rogers, CEO of the Dark Web intelligence company Terbium Labs, used to work with a casino that populated its money-counting rooms with fake cameras sporting little red lights. He calls it “security theater.” That way, the casino got almost all of the deterrence of true monitoring with almost none of the cost.
That works until an incident occurs and the company has to fess up publicly that it has no footage. But even then, would employees assume that all cameras are still fake? The cat-and-mouse game of loss prevention psychology will get a full workout.
Setting aside the psychodrama of “Are they or are they not tracking us,” the better question to ask is “Should they or shouldn’t they be tracking us?” Rogers positions himself in the “they shouldn’t” camp, for several reasons.
“When it comes to limits of mining employee data for signs of insider threats, I worry these efforts have already moved too quickly into the realm of ‘pre-crime,’ in which false positives result in employees’ benign activities being interpreted as threatening with employees being wrongfully terminated as a result,” Rogers says.
"Often, one’s most productive and creative employees regularly engage in seemingly abnormal behavior as part of their work."
“The truth is that it’s very difficult to define ‘normal’ behavior for an employee,” he continues. “Often, one’s most productive and creative employees regularly engage in seemingly abnormal behavior as part of their work. In fact, onerous employee surveillance can have a chilling effect on innovation within a company. Generally, I think too much reliance on limited or biased AI (artificial intelligence), whether in looking for anomalous behavior of employees, software, or networks, is resulting in everything from alert fatigue to the increased risk of wrongful termination litigation. You have to have trust in the people in your organization.”
Rogers’ concerns break down into two basic points. First, surveillance is a bad idea because of where it might lead. Second, it most likely will not work anyway.
As for why it likely would not work, Rogers’ argument is that deviation analytics, which is typically what machine learning does, needs to know what to look for. “You can’t really define abnormal until you define normal. Can you actually define a narrow ring of normal for any given user?” Rogers asks. “It may sound nice in a marketing sense, but I don’t think you can define a narrow enough definition of normal for this to work.”
Rogers’ argument is that for the analytics to work well, it needs to be fed a large number of samples of what network activity looks like when there are no insider attacks, and ideally also samples of what it looks like when there are. But that is a challenge in logic: a company can presumably never have complete confidence that no insider attacks were happening during any sample period.
David Pearson, principal threat researcher at Awake Security, a former adjunct professor at the Rochester Institute of Technology and a member of the technical staff at Sandia National Laboratories, agrees with Rogers’ concern about the quality of the initial dataset. “How great would it be to be the attacker who got in before your fancy baseline was established as the norm?” Pearson asks rhetorically.
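To make the baseline problem concrete, here is a minimal sketch — not drawn from the article or any vendor’s product — of the kind of deviation analytics Rogers and Pearson are describing: a simple z-score check that flags behavior far from a user’s learned “normal.” The users, login hours, and threshold are hypothetical; the point is that whatever happened during the baseline window, including an attacker already at work, becomes the definition of normal.

```python
from statistics import mean, stdev

# Hypothetical baseline window: login hours observed per user while "normal" was being learned.
# If an insider is already active during this window, their behavior becomes part of normal.
baseline_login_hours = {
    "alice": [9, 9, 10, 8, 9, 10, 9],
    "bob":   [2, 3, 9, 10, 2, 3, 9],   # odd-hour logins already baked into the baseline
}

def is_anomalous(user: str, login_hour: int, threshold: float = 3.0) -> bool:
    """Flag a login hour more than `threshold` standard deviations from the user's baseline mean."""
    history = baseline_login_hours[user]
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return login_hour != mu
    return abs(login_hour - mu) / sigma > threshold

# A 3 a.m. login stands out for alice but not for bob, whose baseline already contains it.
print(is_anomalous("alice", 3))  # True
print(is_anomalous("bob", 3))    # False
```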
IDC’s Pike vehemently disagrees with both Pearson and Rogers. “It’s a little silly as an argument” that a company would never have a perfect snapshot in time of noncriminal activity, Pike says.
“You’ve got to start somewhere and you may very well need to start at a place where there is rampant fraud happening. As the system goes on, those behavioral patterns will change,” Pike says. “You might spot a pattern (of fraud) and go back and say ‘Now I see it.’”
Pike’s position is that companies must start by looking for the easy things, “the low-hanging fruit” such as employees logging in at odd times, starting to come in early or leaving late when that was never their pattern, their browsing activity, where they are logging in from, and which files they are trying to access. “It’s not so incredibly intrusive,” Pike says. “It’s sort of nonthreatening to employees.”
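As a purely illustrative sketch — not a description of Pike’s or IDC’s tooling — such “low-hanging fruit” checks can be as simple as a handful of rules over login and file-access records. The event fields, office hours, expected countries, and sensitive paths below are assumptions invented for the example.

```python
from datetime import datetime

# Hypothetical policy values, invented for this example.
OFFICE_HOURS = range(7, 20)               # 7:00-19:59 assumed to be the normal working window
EXPECTED_COUNTRIES = {"US"}               # assumed usual login locations
SENSITIVE_PATHS = ("/finance/", "/hr/")   # assumed sensitive file areas

def flag_event(event: dict) -> list:
    """Return the simple rules this login/file-access event trips, if any."""
    reasons = []
    hour = datetime.fromisoformat(event["timestamp"]).hour
    if hour not in OFFICE_HOURS:
        reasons.append("login at odd hours")
    if event.get("country") not in EXPECTED_COUNTRIES:
        reasons.append("login from unexpected location")
    if any(event.get("file_path", "").startswith(p) for p in SENSITIVE_PATHS):
        reasons.append("access to sensitive files")
    return reasons

event = {"timestamp": "2018-06-12T02:47:00", "country": "RO", "file_path": "/finance/payroll.xlsx"}
print(flag_event(event))
# ['login at odd hours', 'login from unexpected location', 'access to sensitive files']
```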
But is aggressive tracking an effective tactic overall for thwarting insider attacks? Pike maintains that it generally is. For one thing, a lot of monitoring is already required: “There are regulatory obligations to do some sorts of surveillance,” such as recording phone calls with customers, Pike says. Indeed, some surveillance “has been lifesaving,” such as when the system detects that an employee is acting suicidal.
As for employee pushback and potential resentment to extensive surveillance, Pike does not think that should be a significant concern. First, he does not believe that the surveillance should be announced. “I like my surveillance with a side of secrecy,” Pike says. “It really all depends on what you do with that information. The bad actors will probe ‘What can I get away with here?’ It’s only when you act on anything, that’s where you start alienating folk.”
For example, Pike says, if a manager cracked down on an employee for coming in late based on network analytics, and cited the analytics as the reason, that could cause problems. It is better, Pike says, to file away such information and wait to observe corroborating evidence personally. “Even though you have the information, you don’t have to act on every single piece,” Pike says.
"Web monitoring software has been dancing this line for some time. Our firm, with almost entirely field-based professionals, uses a security proxy that protects our professionals from harmful networks and themselves. By default, it can see everything, even perform TLS (transport layer security) inspection of encrypted traffic." -Doug Barbin, principal and cybersecurity practice leader at Schellman & Co.
Read full article at SC Magazine >>