Artificial intelligence (AI) pioneer and entrepreneur Rana el Kaliouby speaking at GSX Global Security Exchange 2024.

Photo courtesy of Oscar & Associates

El Kaliouby: Humans Can Leverage AI to Improve the World

As a Harvard Business School executive fellow and co-founder of Blue Tulip Ventures, Rana el Kaliouby has seen the threats artificial intelligence (AI) can pose to our society: bias, disinformation, threats to sustainability, resource drain, and even dangerous emotional attachments.

But when considering the question of whether this technology should even exist, her answer, without hesitation, is, “Absolutely yes.”

Human-Centric AI

In a GSX keynote address on Tuesday, 24 September 2024, el Kaliouby shared how AI technology can be leveraged in positive ways.

For instance, AI can detect dangerous conditions, such as drowsy driving or a child left locked in a car, and alert vehicle operators; it can also help children with autism recognize others’ emotional states.

From her standpoint as a venture capitalist with roots in innovation, el Kaliouby noted that this technology presents significant opportunities.

“Our investment themes are basically good AI is good business,” she said. “Yes, AI offers a massive economic opportunity, but it also offers amazing solutions to humanity’s biggest challenges.”

The solutions she looks to for investment opportunities are those built around responsible and ethical AI, because every company will need to determine how it will govern AI and how to implement responsible, safe AI solutions.

El Kaliouby also highlighted other facets of our lives where AI can be used as a tool for improvement. Health and wellness (e.g., advancements in sensors and health data analysis), productivity and learning (e.g., leveraging AI agents to streamline workflows or even operate on behalf of the user), and sustainability (e.g., creating more sustainable food systems and supply chains and mitigating climate events) are the larger positive themes that el Kaliouby pointed to.

“Every industry is being transformed and revolutionized and reimagined with AI,” she said. El Kaliouby nodded to Jensen Huang, CEO of Nvidia, in agreeing that AI won’t replace the human workforce; instead, people who can harness AI will replace those who shun the technology.

Security Risks

El Kaliouby noted that there are significant challenges from AI, which can even become threats without some form of redirection.

She touched on how today’s limited data sets can produce biased algorithms, which can then be used to generate images that perpetuate stereotypes or discrimination.

Another problem stems from “AI hallucinations,” in which AI presents material, such as scientific articles or journals, that appears legitimate but is in fact an AI fabrication. These hallucinations can then become disinformation.

AI has other bias issues as well, including biases of gender, thought, and perspective. “A lot of AI today is very U.S.-centric, and China is also a big player in the AI race,” el Kaliouby said. She added that she hopes to see more AI products representing other nations and cultures, which can be created by relying on local workforces.

Lastly, while generative AI can offer data-driven solutions to promote sustainable practices, the technology powering it is a different story.

“For example, Nvidia’s high-end chips, if you’re using them to train a foundation model, they will consume the same amount of energy as the entire country of Costa Rica,” el Kaliouby said. “That’s not sustainable, and that’s just on the training side.”

However, the AI industry is projected to add $2.6 trillion to $4.4 trillion to the global economy, and el Kaliouby said she believes it will be the biggest lever for changing humanity. Its current challenges, especially around how AI is built and deployed, present a business opportunity that’s “ripe for disruption,” she added.

Reimagining Security and AI

AI presents an opportunity for the security industry to shift from a reactive to a proactive posture, el Kaliouby said in her remarks.

AI could be used to support forecasting security incidents and provide real-time threat analysis, leveraging predictive analytics and pattern recognition to identify threats to organizations sooner.

“It all starts with the data,” el Kaliouby said, adding that building such algorithms and ventures is a significant undertaking.

On a wider scale, el Kaliouby said it’s time to rethink our social contract with AI to create a new one built on trust. This should include considering the autonomy that AI in its various forms should have.

For instance, she asked GSX attendees to consider treating AI like an intelligent intern: one that can be delegated tasks while still operating under the oversight of a human supervisor.

That social contract built on trust also extends to the data used to build AI tools, el Kaliouby added. Security practitioners should pay close attention to data collection, use, sampling, diversity, and storage, along with how AI is allowed to access and analyze that data.

“I really think companies need to have a methodology and an approach to institutionalize this across the board,” el Kaliouby said, stressing the need to root out algorithmic bias.

Sara Mosqueda is not a bot or an AI creation—instead, she’s associate editor for Security Management. You can connect with her via LinkedIn or send her an email at [email protected]. If the stars align, you might even run into her in real life at GSX 2024.

For more on security applications of AI, read our Security Technology Artificial Intelligence issue.
