Machine Learning


The Perils and Promises of the Imitation Game

Eventually, artificial intelligence (AI) will move closer to what Dr. Alan Turing envisioned in 1950. But today, we are still far from passing the Turing Test, especially when thinking about AI as a commodity. When most people talk about AI, what they are really referring to is a subset called machine learning (ML).

Traditionally, computer programs make decisions based on logic defined by a programmer. Machine learning can assist with these decisions by examining past data. In other words, machine learning uses previously observed data to make a prediction in response to a given question.
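The contrast can be made concrete with a minimal sketch. The snippet below, which assumes scikit-learn and uses purely hypothetical feature names and values, places a hand-coded rule next to a model that learns the same kind of decision from previously observed data.

```python
# A minimal sketch contrasting programmer-defined logic with a learned decision.
# Feature names, values, and thresholds are hypothetical, for illustration only.
from sklearn.tree import DecisionTreeClassifier

# Traditional approach: the programmer encodes the decision logic directly.
def is_suspicious_login(failed_attempts: int, login_hour: int) -> bool:
    return failed_attempts > 5 or login_hour < 5  # fixed rule chosen by a human

# Machine learning approach: the decision is learned from past observations.
past_observations = [   # each row: [failed_attempts, login_hour]
    [0, 9], [1, 14], [2, 11],   # previously observed benign logins
    [8, 3], [12, 2], [7, 4],    # previously observed malicious logins
]
labels = [0, 0, 0, 1, 1, 1]     # 0 = benign, 1 = malicious

model = DecisionTreeClassifier().fit(past_observations, labels)
print(model.predict([[6, 3]]))  # prediction for a new, unseen login
```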

As ML continues to increase in sophistication, we will see a direct impact and tangible benefits for organizations. For instance, image and speech recognition will be commonplace and built into many products by default, especially for language translation within applications. A use-case scenario many may be familiar with is Google's Translatotron, introduced in 2019 to translate speech in one language directly into another.

ML will also greatly improve security operations and the detection of malicious activity. Analyzing network traffic in conjunction with user behavior can reduce the number of false-positive alerts that security teams must investigate. By using ML to gain insights into user behavior, both from a technology perspective and from a security perspective, we can begin to understand more about how people use applications, how they interact with them, and where improvements can be made. As subsets of AI continue to mature, we will be able to spot outliers in user and system behavior within a computer network. We will also be able to identify potential misconfigurations, misuse, and even breaches.
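As a rough illustration of that outlier-spotting idea, here is a minimal sketch using an unsupervised model from scikit-learn. The feature names, values, and the choice of IsolationForest are assumptions made for this example; a real deployment would derive features from authentication logs, network flow records, and similar telemetry.

```python
# A minimal sketch of flagging outliers in user behavior with an unsupervised model.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [logins_per_day, bytes_uploaded_mb, distinct_hosts_contacted]
baseline_behavior = np.array([
    [4, 120, 3],
    [5, 150, 4],
    [3,  90, 2],
    [6, 200, 5],
    [4, 110, 3],
])

detector = IsolationForest(contamination=0.1, random_state=0).fit(baseline_behavior)

# Score new activity: -1 marks a likely outlier worth an analyst's attention,
# 1 marks behavior consistent with the observed baseline.
new_activity = np.array([[5, 140, 4], [40, 9000, 60]])
print(detector.predict(new_activity))
```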


All of this will be a huge benefit for organizations, but there are many risks to reaching this level of visibility. The biggest threat is becoming reliant on ML. Having an experienced security analyst review logs and alerts is time consuming, but it will beat any machine in analysis. Machine learning, and thus artificial intelligence, may be able to identify malicious behavior, but unlike a program, analysts have context about how an organization is structured, who people are, what their roles are, and the overall impact of that behavior.

If we approach this technology as a process that runs in parallel with day-to-day activities, then we will see a much larger return on investment. Using this technology to prioritize alerts from disparate systems can reduce the "alert fatigue" that security analysts face daily. This reduction in fatigue means analysts can be proactive and focus on building up defenses instead of fighting fires.
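One way to picture that prioritization is the hedged sketch below: a simple model, trained on hypothetical past triage outcomes, ranks today's alerts so the ones most likely to be real incidents surface first. The alert names, features, and model choice are illustrative assumptions, not a prescribed design.

```python
# A minimal sketch of ranking alerts from disparate systems by a learned priority score.
from sklearn.linear_model import LogisticRegression

# Features per alert: [source_confidence 0-1, asset_criticality 0-1, prior_hits_on_host]
past_alerts = [
    [0.2, 0.1, 0], [0.3, 0.2, 1], [0.4, 0.3, 0],   # triaged as false positives
    [0.9, 0.8, 4], [0.8, 0.9, 3], [0.7, 0.7, 5],   # triaged as true positives
]
triage_outcomes = [0, 0, 0, 1, 1, 1]

ranker = LogisticRegression().fit(past_alerts, triage_outcomes)

incoming = {
    "EDR: PowerShell spawned by office app": [0.8, 0.9, 2],
    "IDS: port scan from guest wifi":        [0.4, 0.2, 0],
    "DLP: large upload to personal cloud":   [0.7, 0.8, 1],
}

# Sort the queue by predicted probability of being a real incident.
ranked = sorted(incoming,
                key=lambda name: ranker.predict_proba([incoming[name]])[0][1],
                reverse=True)
for name in ranked:
    print(name)
```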

For organizations that invest in AI and ML, there are opportunities to reshape the ways in which they work. Gathering and analyzing user, system, and network behavior will give new visibility into productivity bottlenecks and pain points, and improve detection of potential malicious activity from external and internal threats. This type of data can give organizations real insight into employee struggles, improve business processes, and increase automation.


Companies that believe in open source sharing of data, and that provide reassurance through transparency, will win the battle of AI. For artificial intelligence and machine learning to become a commodity in everyday products and services, data is needed from a diverse group of sources. There will always be skeptics, but if the data is accessible, shows real-world impact, and is used in a respectful manner, then more people will inherently trust that organization and will likely give it more data.

On the other hand, companies that hoard data and do not share it with the rest of the community will enjoy marketing buzz but will ultimately fail to gain the trust of both their users and the public. Organizations that have a large corpus of data will succeed in some respects but ultimately will find that their data is missing key demographics needed to be successful. As a security professional, I would question how reliable their technologies are, as well as how they use and secure the data they do have. Transparency is an absolute must when you want to win the imitation game.

Josh Rickard is a security research engineer at Swimlane focused on automating everyday processes in business and security. He is an expert in PowerShell and Python, and has presented at multiple conferences including DerbyCon, ShowMeCon, BlackHat Arsenal, CircleCityCon, Hacker Halted, and numerous BSides. In 2019, Josh was awarded an SC Media Reboot Leadership Award in the Influencer category and is featured in the Tribe of Hackers: Blue Team book. You can find information about open-source projects that Josh creates on GitHub at https://github.com/MSAdministrator.
