How Artificial Intelligence And Analytics Deal With Insider Threats

A disgruntled employee secretly storing away sensitive company information; a careless executive clicking on a malicious email attachment; a malicious user gaining access to company resources through stolen credentials.

These are insider threats, the number one contributing factor to security incidents within enterprises. Despite what you see in hacker movies such as Blackhat, the biggest threat to organisations' networks is the people working for them.

Organisations spend time and money training their staff on cyber hygiene and on how to avoid falling for social engineering scams such as phishing and baiting. Yet, as the trends show, insider threats are growing in number and becoming harder to detect.

This is partly because the sheer volume of data that companies now handle makes it harder to spot malicious activity: ill-intentioned insiders, and outsiders armed with stolen credentials, can conceal their actions in the deluge of data that flows through the network.

The growing availability and use of cloud services across enterprises is also making it more difficult to define and secure the boundaries of corporate networks. IT professionals and network analysts struggle to control the flow of digital assets as they move across different cloud containers, which are often not fully under the organisation's control.

Dealing with insider threats is difficult because it hinges on detecting anomalous user behavior rather than finding and patching software vulnerabilities. Unfortunately, drawing the line between safe and malicious behavior is not an exact science, and traditional approaches such as static rules and alerts either churn out too many false positives and impede organisational workflows, or leave too many holes for hackers to exploit.

After all, how can you tell whether there's malicious intent behind a bulk data transfer made by an admin user? Conversely, how can you ascertain that a simple file transfer isn't part of a larger scheme to slowly trickle data out of the company?
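To make that concrete, here is a minimal sketch of the kind of static rule described above. The 100 MB threshold, event fields and function name are illustrative assumptions, not any vendor's actual logic:

```python
# A static, one-size-fits-all rule: flag any outbound transfer over a
# fixed size. The threshold and event fields are illustrative assumptions.
STATIC_THRESHOLD_MB = 100

def static_rule_alert(event):
    """Flag large transfers regardless of who made them or why.

    A backup admin moving gigabytes nightly trips this rule every time
    (false positive), while an attacker exfiltrating data in 99 MB
    chunks slips under it (false negative).
    """
    return event["bytes_transferred"] / 1e6 > STATIC_THRESHOLD_MB

print(static_rule_alert({"bytes_transferred": 2_000_000_000}))  # True: routine backup flagged
print(static_rule_alert({"bytes_transferred": 99_000_000}))     # False: slow exfiltration missed
```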

Fortunately, these shortcomings can be remedied with the help of artificial intelligence and analytics: the use of mathematics, data science and pattern recognition to glean insights, predict outcomes, and make more efficient decisions.

This is a trend that has proven its worth in other domains of cybersecurity, and it can also make the fight against insider threats more effective by reducing false positives and finding the needle in the haystack.

Gurucul is one of the cybersecurity firms that have made inroads in this field. The company uses machine learning and User Behavior Analytics (UBA) to define dynamic baselines for normal user behavior based on different factors, which provides greater visibility into user identities and account activity.

Risk Analytics, as Gurucul's security tool is called, gathers data from a host of sources, including on-premise and cloud applications, and feeds it into machine learning algorithms that create a digital profile defining each user's baseline behavior and habits. The system then uses that baseline to risk-score future actions by their deviation from it and to determine which events need further investigation.
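Gurucul has not published the internals of Risk Analytics, but the baseline-and-deviation idea can be illustrated with something as simple as a per-user z-score. In the sketch below, the activity metric, the risk formula and the three-sigma review threshold are all assumptions made for illustration:

```python
import statistics

# Illustrative per-user baselining; the metric, risk formula and
# review threshold are assumptions, not Gurucul's actual model.
class UserBaseline:
    def __init__(self, history):
        # history: this user's observed daily transfer volumes (MB)
        self.mean = statistics.mean(history)
        self.stdev = statistics.stdev(history) or 1.0  # guard against zero variance

    def risk_score(self, todays_volume_mb):
        """Score an action by its deviation from the user's own norm."""
        z = (todays_volume_mb - self.mean) / self.stdev
        return max(0.0, z)  # only unusually *high* activity raises risk

baseline = UserBaseline([120, 95, 140, 110, 130])  # a user's normal week
if baseline.risk_score(2_000) > 3.0:               # three-sigma rule of thumb
    print("flag for investigation")
```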

Gurucul has also introduced the concept of "dynamic peer groups," which uses machine learning and analytics to group users based on their identities, privileges and the activities they usually perform. The system uses this information to further enhance its threat detection precision by comparing each user's behavior to the defined norm of their peer group and finding outliers.

For instance, if an administrator is using his access to system resources to download huge amounts of company information, his behavior will stand out from that of his peers and he will be marked for investigation.
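A rough approximation of peer-group outlier detection might use a declared role as a stand-in for Gurucul's learned peer groups (a real system would cluster on observed activity as well). The data and the outlier rule below are invented for illustration:

```python
from collections import defaultdict
import statistics

# Hypothetical activity log: (user, role, MB downloaded today).
events = [
    ("alice", "admin", 150), ("bob", "admin", 180),
    ("carol", "admin", 9_500),                # the outlier
    ("dan", "sales", 40), ("eve", "sales", 55),
]

# Group users into peers by role.
by_role = defaultdict(list)
for user, role, mb in events:
    by_role[role].append((user, mb))

# Flag anyone far above their peer group's median (illustrative rule).
for role, members in by_role.items():
    median = statistics.median(mb for _, mb in members)
    for user, mb in members:
        if mb > 10 * median:
            print(f"{user} deviates from {role} peers: {mb} MB vs median {median} MB")
```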

Interset, a Canada-based security firm, also uses artificial intelligence and behavioral analytics to correlate unstructured and scattered bits of data from users, applications and endpoints in order to discover insights that would otherwise slip past human analysts and static security models.

The company's security platform uses machine learning to set a threshold for each user that defines the range of activities the user usually performs. These dynamic rules replace static global rules such as "how many megabytes an attachment can be," allowing more flexibility in the work process while maintaining the security of the system.
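Interset's actual parameters aren't public, but a per-user dynamic threshold can be sketched as a sliding window over each user's recent activity. The window size and multiplier below are illustrative assumptions:

```python
from collections import deque

# Sketch of a per-user dynamic threshold replacing a global
# "attachments over N MB" rule. Window size and multiplier are
# illustrative, not Interset's published parameters.
class DynamicThreshold:
    def __init__(self, window=50, multiplier=4.0):
        self.recent = deque(maxlen=window)  # sliding window of recent sizes
        self.multiplier = multiplier

    def is_anomalous(self, size_mb):
        if len(self.recent) < 10:           # not enough history yet
            self.recent.append(size_mb)
            return False
        threshold = self.multiplier * (sum(self.recent) / len(self.recent))
        self.recent.append(size_mb)
        return size_mb > threshold

# A designer who routinely mails 50 MB mockups builds a high threshold;
# an accountant sending 1 MB spreadsheets gets a much lower one.
dt = DynamicThreshold()
for size in [48, 52, 50, 47, 55, 49, 51, 53, 50, 46]:
    dt.is_anomalous(size)                   # learning this user's norm
print(dt.is_anomalous(400))                 # True: far beyond this user's norm
```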

Interset also uses machine learning to correlate events and provide a higher-level view of the system. Instead of pointing to a set of security events that need investigation, the system can point to specific endpoints and accounts that might be compromised, or users who might be involved in illicit activities.
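One simple way to picture this correlation step, with invented events and weights, is to roll event-level risk scores up to the entities they touch, so that the riskiest accounts and endpoints surface first:

```python
from collections import Counter

# Sketch of rolling event-level scores up to entities (users, hosts),
# so analysts see "which account looks compromised" rather than a raw
# alert list. Event tuples and weights are illustrative assumptions.
events = [
    ("bob", "laptop-7", "odd_login_hour", 2.0),
    ("bob", "laptop-7", "bulk_download", 5.0),
    ("bob", "fileserver", "privilege_change", 4.0),
    ("eve", "laptop-3", "odd_login_hour", 2.0),
]

entity_risk = Counter()
for user, host, _event_type, score in events:
    entity_risk[user] += score
    entity_risk[host] += score

# Highest-risk entities surface first for investigation.
for entity, score in entity_risk.most_common(3):
    print(entity, score)
```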

Insider threats will likely remain the key culprit behind cyberattacks for the time being. Hopefully, as these examples show, artificial intelligence and analytics can help deal with this endemic problem while also easing organisational operations in a work landscape that is increasingly dependent on online services.

Ben Dickson is a software engineer and the founder of TechTalks.
