
Sunshine Through the Clouds: AI and the Security Professional

The ASIS International Security and Applied Sciences Community (SASC) is spending significant time looking at the future of artificial intelligence (AI) applications—helping to find sunlight through the clouds. Here are just a few of the reasons why ASIS members should join us in this pursuit.

Current market estimates from PricewaterhouseCoopers suggest that by 2030, AI could boost the global economy by as much as 14 percent—roughly $15 trillion in value. Programs that use AI are found in almost every vertical, including security, and governments around the globe are scrambling to determine how their countries can promote AI use.

Security professionals need to understand AI and its implications for security operations. The SASC is actively supporting this effort. Working with the ASIS IT Security Community (ITSC), we developed a new glossary to provide the ASIS membership—and the larger security audience—with a common lexicon of terms related to AI.

For example, we define AI as: “An artificial system that can make complex decisions or plans based on environmental inputs. It simulates basic intelligence, but its ability to make decisions is based on explicit programming. It lacks the ability to learn or synthesize new concepts.”

This definition and the others provided in the glossary were the product of extended discussions among members of the ITSC and SASC committees. Members of those committees researched scientific and technical publications both inside and outside the industry to develop non-technical explanations of these terms—essential to a common language. Drafts of the glossary, which included source information for the definitions offered, were reviewed in an iterative process by committee members. The document offers a starting point for planned, continued review and discussion by the wider ASIS membership.

Establishing this taxonomy of common language around AI gives security professionals a shared understanding, and it is just the first step for the ITSC and SASC. This work is ongoing and open to communitywide participation.

Beyond creating a common language, the SASC is working to craft information products that help practitioners understand the theoretical and technical underpinnings of AI. This work draws on the expertise of ASIS members inside and outside the SASC, as well as industry leaders.

The SASC is currently working on a comprehensive look at the development of AI and its application in the security industry. This work will serve as a primer to help orient security practitioners to current and future uses of AI. It will examine developing AI technologies, the security uses of those technologies, and the legal and policy implications of those applications.

AI products are already in use in fraud detection and prevention. For example, in the healthcare industry, where fraudulent activity is extensive, AI solutions are being adopted to combat the threat, and a range of U.S. and international firms are offering them. These AI platforms analyze billions of transactions, looking for patterns and markers indicative of fraudulent activity.

There is growing interest in using AI for background assessments and screening. AI applications have been developed to analyze the social media activity of potential applicants, looking for patterns of conduct and even assessing traits like intelligence.

Technologies like license plate recognition have become commonplace, and the use of facial recognition applications is on the rise. But these uses just scratch the surface of AI’s possibilities.

The use of AI in crime prevention and crime prediction is a growing possibility, as is its use in cybersecurity protection. In the wake of the COVID-19 pandemic, and amidst new thinking about public health, AI raises interesting possibilities, but ones fraught with privacy concerns.

While the growth potential of AI is promising, the SASC is also focused on informing ASIS members of the challenges that AI may pose. Efforts are increasing in the United States and abroad to regulate and control AI use. Provisions in the European Union’s General Data Protection Regulation already require notifications when AI is used in decision-making involving individuals’ data.

Similarly, in the United States, states like Illinois have moved to restrict the use of AI in some employment processes. The growth of state and local initiatives to limit law enforcement use of technologies like license plate recognition and facial recognition—widely discussed in the press—serves as a further example of government-imposed limits on technology use. Some restrictions on facial and license plate recognition technologies are in place in the states of Oregon, California, Washington, Maine, and New Hampshire, as well as in cities like San Francisco, Oakland, and Boston.

Most of the proposed regulation is nascent, but as the use of AI grows and becomes more commonplace, the regulatory environment will likely expand, making compliance a key consideration for practitioners.

As the SASC works to create knowledge products to shine more sunlight on the subject of AI, there is a great opportunity for dialogue and information sharing. Interested individuals—which should be all ASIS members—are invited to join us in these conversations in the SASC Community.

Don Zoufal, CPP, is safety & security executive for CrowZ Nest Consulting, Inc., and former chair of the ASIS International Security and Applied Sciences Community. For more information on the community and to get involved in its work, visit its page on ASIS Connects.
