Where We Stand with AI
Society’s understanding of artificial intelligence (AI) is evolving. Where once we looked to pop-culture depictions of human-like machines, we now understand that AI is best suited to automating tasks and identifying patterns in input data. This clearer understanding has led to greater success in the field because when organizations know which problems AI-based solutions can and cannot solve, they are better able to apply those solutions in meaningful ways.
Within the physical security sector, AI is playing an increasingly important role. One area where we're seeing a positive impact is AI supporting security operators with routine tasks. This is critical because, in a recent Ponemon Institute survey, more than 70 percent of IT security practitioners said the growing workload faced by security operations center staff was causing burnout. Organizations are making real gains by training computers to handle part of an operator’s job, freeing operators to focus on the tasks only they can perform. Since most operators are overworked, any time savings achieved with AI-based solutions can have a significant impact.
Using AI to prevent the spread of COVID-19
Organizations are also using AI to meet more immediate challenges. In 2020, many began using AI in business cases related to the COVID-19 pandemic. While some awaited new applications to enter the market, many organizations decided instead to use their existing security infrastructure, including video analytics, in new ways.
Managing occupancy levels is an important part of the strategy against COVID-19. To maintain proper physical distancing, organizations are using existing AI-based video analytics to count, monitor, and automatically limit the number of people within an environment. This ensures compliance with evolving government guidelines on the maximum number of people per square foot and automatically restricts the number of people entering corporate and commercial spaces.
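The occupancy-limiting logic described above can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation; the class and method names are hypothetical, and in practice the entry and exit events would come from video analytics rather than direct calls.

```python
class OccupancyMonitor:
    """Tracks people entering and leaving a space against a configured limit.

    A hypothetical sketch: real systems would receive entry/exit events
    from people-counting video analytics.
    """

    def __init__(self, limit: int):
        self.limit = limit   # maximum occupancy allowed by current guidelines
        self.count = 0       # current number of people in the space

    def person_entered(self) -> bool:
        """Record an entry; return False if the space is already at capacity."""
        if self.count >= self.limit:
            return False  # e.g. hold the entrance or alert staff
        self.count += 1
        return True

    def person_exited(self) -> None:
        """Record an exit, never letting the count go negative."""
        self.count = max(0, self.count - 1)

    @property
    def at_capacity(self) -> bool:
        return self.count >= self.limit
```

As guidelines change, only the configured limit needs to be updated; the counting logic stays the same.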
Meeting the problem of bias in AI
Much attention has recently been paid to bias in AI, that is, an inclination or prejudice toward or against someone or something. In some cases, the bias is caused by training on limited datasets.
For example, the New Zealand Government created a system that citizens could use to update their passports. They simply had to fill out the required form and upload a photo. When Asian residents uploaded their photos, many were told to submit another because the system identified their eyes as being closed. The problem arose because the dataset used to train the AI contained mostly non-Asian faces.
There are two ways to mitigate bias. The first is to produce better datasets. The second is for users to understand a dataset's potential biases.
In a security setting, bias could impact the way an operator responds to an alarm. When an alarm is raised, an operator must decide to acknowledge and respond to the alarm or to bypass it. The decision process an operator engages in to make that determination could be biased, and if that bias is not acknowledged or addressed it could result in an automated process that is also biased.
Users must be well informed about any datasets that they are purchasing to train AI models to avoid biased outcomes. End users should ask where the data to train the model was collected from, and if that data sufficiently represents the real world. For example, with data sets used for facial recognition technology, end users should ask if the faces used to train the model are evenly representative of individuals around the world. When end users understand the limitations of the data, they can take action to mitigate possible problems.
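One simple, concrete way to start interrogating a dataset is to measure how its samples are distributed across a demographic attribute and flag groups that fall below a chosen share. The sketch below is illustrative only; the function name and the 10 percent threshold are assumptions, and real representativeness audits are considerably more involved.

```python
from collections import Counter


def underrepresented_groups(labels, threshold=0.10):
    """Return the groups whose share of the dataset falls below `threshold`.

    `labels` holds one demographic attribute value per training sample.
    The threshold is an illustrative assumption; an appropriate value
    depends on the population the model is meant to serve.
    """
    counts = Counter(labels)
    total = sum(counts.values())
    return sorted(group for group, n in counts.items() if n / total < threshold)
```

A check like this will not prove a dataset is representative, but it quickly surfaces the kind of imbalance that caused the passport-photo failure described above.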
Creating Context: One Future of AI
Now, organizations are starting to look at how to solve their business problems with the data being uncovered by their AI tools. The challenge is how to use that data to further improve efficiency and increase business intelligence.
Creating context is one way to achieve this. Using AI-derived rules, organizations could “reach out” to different systems to bring in more information and build the context required, allowing them to make better-informed decisions.
For example, a building alarm may require operators to check temperature sensors to see whether the alarm is fire-related. An AI model could learn those associations and recommend the data sources relevant to the problem at hand (the building alarm) based on what the operator commonly accesses (temperature sensors), minimizing the time spent gathering information before making a response decision.
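The association-learning idea above can be sketched as a simple co-access counter: record which data sources operators consult for each alarm type, then recommend the most frequently consulted sources the next time that alarm fires. This is a hypothetical illustration (all names are assumptions), not a description of any shipping product; a production system would likely use a richer model than raw frequency counts.

```python
from collections import Counter, defaultdict


class SourceRecommender:
    """Learns which data sources operators consult for each alarm type."""

    def __init__(self):
        # alarm type -> counter of data sources consulted for it
        self._history = defaultdict(Counter)

    def record(self, alarm_type: str, source: str) -> None:
        """Log that an operator consulted `source` while handling `alarm_type`."""
        self._history[alarm_type][source] += 1

    def recommend(self, alarm_type: str, k: int = 3) -> list:
        """Return the k sources most often consulted for this alarm type."""
        return [source for source, _ in self._history[alarm_type].most_common(k)]
```

If operators handling building alarms usually open the temperature sensors first, the recommender learns to surface those readings immediately, which is exactly the time savings the article describes.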
The next step in AI development will be another exciting one.
Sean Lawlor is lead data scientist, Genetec, Inc.