Security Awaits Digital Transformation
Many organizations worldwide underwent a digital transformation this year, whether they planned to or not. More than a quarter of employees in the United Kingdom and the United States were working exclusively from home over the summer in response to the COVID-19 pandemic, according to data from the UK Office for National Statistics and a Gallup poll. Among all U.S. workers, the average number of telecommuting days has more than doubled this year.
This shift forced organizations to rethink their digital tools and strategies to better enable collaboration, communication, and efficiency with remote or hybrid workforces. In a recent TechRepublic Premium survey, 60 percent of business technology professionals reported that COVID-19 forced them to alter their preexisting digital transformation plans—65 percent said they were prioritizing technology to better equip remote employees, and 56 percent said they were focusing on tools that can facilitate digital training.
In comparison, a 2018 TechRepublic survey found that most respondents were focused on eliminating paper through digitization and 54 percent had implemented online training. While those initiatives continued in 2020, tools that enable operational continuity received more attention, but not necessarily more budget.
More than half of survey respondents said that funding was the biggest digital transformation challenge for their organization, and with economies stalled due to lingering effects of COVID-19, those budget woes are likely to continue. While 47 percent of survey respondents said they expect more digital transformation spending in 2021 than in 2020, 31 percent said they are uncertain about how much funding might become available.
From a security perspective, digital transformation—and the myriad processes and applications it entails—offers both possibilities and challenges. However, not all security functions are mature enough to transform their procedures effectively.
The ASIS Foundation is conducting a series of research projects into aspects of digital transformation. Earlier in 2020, its report on blockchain was released, explaining that while the technology has the potential to transform many processes, it is not fully leveraged yet. In early 2021, two additional reports on artificial intelligence (AI) and social media monitoring will be released. Security Management checked in with two researchers studying how social media data can uncover security intelligence for some early findings.
Security intelligence can be found in unexpected places, according to Chelsea Binns, assistant professor at John Jay College of Criminal Justice, and Robin Kempf, assistant professor at the University of Colorado, Colorado Springs. For example, the complaints and frustrations aired daily on social media by consumers and employees can be analyzed to help security personnel scan for threats.
Previously, Binns says, it was very difficult to complain to a company. Consumers would need to send a letter, and the company’s response could be private—or nonexistent. Now with social media, complaints are public, loud, and frequent, and consumers expect immediate action.
Binns and Kempf are studying social media posts related to a ride-sharing service and a vacation rental company, and they found that customers went to social media to complain about privacy violations or allegedly unfair policies—cautioning others against patronizing the company if the complaints were not quickly and publicly resolved.
A range of tools enables companies to quickly sift through large amounts of publicly available social media data to flag concerns and recognize patterns. For example, Binns says reviewing data for the vacation rental company uncovered a concerning trend: many users were reporting that their accounts had been hacked. But instead of reporting the compromises through the company’s internal channels, they were posting about them on Twitter.
“Unless you’re looking at a volume of tweets—100,000 conversations—you don’t see the full picture,” Binns adds.
Binns and Kempf say that organizations that are pivoting more broadly to AI-driven or automated analysis can leverage a customer service tool like social media monitoring for security purposes. Organizations can add key search terms or analytics to watch for trends that may require intervention, such as user education or training, additional communication about how to report account compromises, or security incident response.
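The keyword-and-analytics approach the researchers describe can be sketched in a few lines. This is a minimal illustration, assuming posts have already been collected (for example, via a platform API); the watched terms, function names, and sample posts are hypothetical, not drawn from the study.

```python
# Sketch of keyword-based triage of social media posts for security review.
# SECURITY_TERMS and the sample posts are illustrative assumptions.
from collections import Counter

SECURITY_TERMS = {"hacked", "compromised", "stolen", "unauthorized", "phishing"}

def tokenize(post):
    """Lowercase the post and strip basic punctuation from each word."""
    return {w.strip(".,!?").lower() for w in post.split()}

def flag_posts(posts):
    """Return the posts that mention any watched security term."""
    return [p for p in posts if tokenize(p) & SECURITY_TERMS]

def trend_counts(flagged):
    """Count which terms occur, to surface emerging patterns over volume."""
    counts = Counter()
    for post in flagged:
        for term in tokenize(post) & SECURITY_TERMS:
            counts[term] += 1
    return counts

posts = [
    "My account was hacked and I cannot log in",
    "Great stay, would book again",
    "Someone made an unauthorized charge on my card",
]
flagged = flag_posts(posts)
print(len(flagged))            # posts needing a security analyst's review
print(trend_counts(flagged))   # which complaint types are trending
```

At production scale—the "100,000 conversations" Binns mentions—the same pattern would run against a streaming feed, with counts aggregated over time windows so spikes in a term like "hacked" become visible.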
But AI itself is not a straightforward term in the security field. According to Michael Coole, chief investigator for the School of Science at Edith Cowan University and researcher on a second upcoming ASIS Foundation project, “AI has become an imprecise umbrella term, where many people assume the technology is engaging in the type of information processing and analysis that the human mind does.
“However, this is not the case,” he continues. “Computers do not understand broader context; AI uses computational techniques to enable machines to perform tasks that would normally require some level of human intelligence…. The output of these techniques appears to resemble human intelligence, but they do not replicate human brain processes; there is no awareness or metacognition. Consequently, there are substantial differences between how humans and computers process information, solve problems and act. But analogies between humans and computers amplify misunderstanding.” The project, conducted by coauthors Coole, Associate Professors Peng Lam and Martin Masek, and Lecturer Jennifer Medbury, will examine how AI could affect security applications and roles, and what barriers to entry lie in the way of its adoption.
There are three main categories of AI, according to Coole.
Narrow AI. Includes computational techniques to solve very specific tasks.
Broad AI. Combines Narrow AI techniques to deliver specific business processes, such as a self-driving car.
General AI. Possesses theory of mind, as well as being self-aware and able to understand humans’ beliefs, thoughts, emotions, and expectations. Such advances have yet to be achieved.
Currently, Coole says, most AI applications for security combine Narrow AI techniques—evaluating credential requirements and biometric analysis, for example—to achieve a Broad AI application, such as granting or denying access based on those multiple computational techniques.
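The combination Coole describes—multiple Narrow AI outputs feeding one access decision—can be sketched as follows. The function names, the 0.9 biometric threshold, and the badge data are hypothetical illustrations, not details from the ASIS Foundation project.

```python
# Sketch: two Narrow AI outputs (credential evaluation and a biometric
# match score) combined into a single Broad AI access-control decision.
# All names, thresholds, and data here are assumed for illustration.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    badge_id: str
    biometric_score: float  # similarity score from a matcher model, 0.0-1.0

AUTHORIZED_BADGES = {"B-1001", "B-1002"}   # output of the credential system
BIOMETRIC_THRESHOLD = 0.9                  # tuned against the matcher's error rates

def credential_check(req):
    """Narrow technique 1: is the presented badge valid and authorized?"""
    return req.badge_id in AUTHORIZED_BADGES

def biometric_check(req):
    """Narrow technique 2: does the biometric match strongly enough?"""
    return req.biometric_score >= BIOMETRIC_THRESHOLD

def grant_access(req):
    """Broad application: access is granted only if every narrow check passes."""
    return credential_check(req) and biometric_check(req)

print(grant_access(AccessRequest("B-1001", 0.95)))  # True
print(grant_access(AccessRequest("B-1001", 0.42)))  # False: weak biometric match
print(grant_access(AccessRequest("B-9999", 0.99)))  # False: unknown badge
```

Each individual check is a narrow computational task; the Broad AI application is simply their orchestration into one grant-or-deny outcome, with no broader contextual understanding involved.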
Adoption of AI in organizational security functions has been inhibited by several factors, including a lack of sufficiently large, high-quality data sets to work from, as well as insufficient system integration for interoperability and usability.
“There are many opportunities though in the current levels of AI capabilities to enhance security. At this stage, the project is indicating these opportunities sit with higher levels of advanced integration of individual technological outcomes,” Coole says. “For many years, integrators have been seen as salespeople, or very defined project personnel. The future of advances in AI include the facilitation of advanced automation through our ability to integrate various individual hardware and software program outputs using AI computational techniques to achieve not only observe, detect, control, or respond outcomes as individual security objectives, but to seamlessly integrate these technologies, to observe, detect, and where necessary act, much like advanced military weapons systems.”
So far, AI enables human analysts to add context, uncover trends, and make informed decisions, the researchers say. But standalone AI systems remain outside the realm of possibility.