
AI-Powered SaaS: How to Get the Most Out of It, Safely

Chief information security officers (CISOs) across industries are facing increasing scrutiny from boards regarding the cybersecurity risks associated with Artificial Intelligence (AI)-powered Software as a Service (SaaS) applications.

It seems almost impossible these days to find a SaaS application that doesn’t use AI. From highly popular applications such as GitHub, Salesforce, and Notion to the most esoteric niche tools, wherever a CISO turns, they’ll find AI operating on company data.

This poses significant challenges and risks: Which data is the AI collecting? How? How long does it store it? Will others be able to leverage an organization’s years of hard work, intellectual property (IP), and know-how simply by asking an AI application about it? What happens when a human working for the generative AI provider is allowed to review the collected data to improve the AI’s learning model? Can that unknown human be trusted? These are just some of the questions and risks modern security teams face.

Mitigating these risks is paramount, and some organizations resort to blocking any app labeled “GenAI” outright. However, AI-powered SaaS applications have become such indispensable tools, boosting efficiency and productivity while delivering personalized user experiences, that completely blocking them is nearly impossible, especially when they are so easily accessible.

Our recent research has uncovered troubling issues regarding “shadow AI” and other AI-related SaaS security concerns. Shadow AI is a new term describing AI capabilities embedded within popular SaaS applications that security teams, and often the users themselves, are unaware of. It is closely tied to the way users adopt SaaS and to the long-standing shadow IT problem: when diligent employees need a quick solution to get their jobs done faster or better, they are unlikely to pause and wait for security or IT to approve a free tool they found online. Nowadays, that tool is highly likely to have AI embedded in it.

Popular SaaS applications often have a free version and can be accessed with just a username and password, bypassing security measures such as multifactor authentication or single sign-on. Users can then share company data with them to solve an ad hoc business need, such as uploading a sensitive presentation just to get a better design. What happens to that shared data is a real reason for concern.

What stands out from our research is that 83.2 percent of companies are using pure AI applications, and nearly everyone (99.7 percent) is leveraging applications with integrated AI capabilities. Without automated discovery solutions, employees, security teams, and CISOs may not even realize their SaaS stack contains apps with AI built into them.
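
To make “automated discovery” concrete, here is a minimal sketch of the underlying idea: cross-referencing an organization’s app inventory against a catalog of vendors known to embed AI. This is an illustration only, not Wing Security’s implementation; all vendor names, data structures, and the flag_shadow_ai helper are hypothetical.

```python
# Minimal sketch of automated AI-SaaS discovery.
# All vendor names and inventory data are fictional placeholders.

from dataclasses import dataclass


@dataclass
class DiscoveredApp:
    name: str           # application name as seen in the inventory
    users: int          # number of employees using it
    sso_enforced: bool  # whether access goes through single sign-on


# Hypothetical catalog: vendor -> whether its terms allow model training
# on customer data.
KNOWN_AI_VENDORS = {
    "draftly": True,
    "slidegenius": True,
    "pixelboard": False,
}


def flag_shadow_ai(inventory: list[DiscoveredApp]) -> list[str]:
    """Return human-readable findings for inventoried apps with embedded AI."""
    findings = []
    for app in inventory:
        trains = KNOWN_AI_VENDORS.get(app.name.lower())
        if trains is None:
            continue  # not a known AI-embedded vendor
        risk = "may train models on your data" if trains else "AI-embedded"
        auth = "SSO enforced" if app.sso_enforced else "username/password only"
        findings.append(f"{app.name}: {app.users} users, {risk}, {auth}")
    return findings


if __name__ == "__main__":
    inventory = [
        DiscoveredApp("Draftly", users=143, sso_enforced=False),
        DiscoveredApp("Pixelboard", users=87, sso_enforced=True),
    ]
    for finding in flag_shadow_ai(inventory):
        print("FINDING:", finding)
```

A real discovery solution would build the inventory from identity-provider logs, browser activity, or email scans rather than a static list, but the matching logic follows the same pattern.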

As users freely interact with these AI-infused apps, they may inadvertently expose proprietary IP through AI training models, heightening already serious data privacy and security concerns. In fact, the same research revealed that 70 percent of popular SaaS apps can train AI models directly on customer data and IP.

While this makes avoiding AI usage unrealistic, the security risks around it can be prevented or mitigated with the right tools and procedures in place.

Challenges of Safeguarding Sensitive Data and IP

As workforces continue to adopt AI and integrate it into their organizational workflows, safeguarding sensitive data and IP becomes more critical. This, however, poses a significant challenge for security teams due to the scale and complexity of modern SaaS environments. With thousands of SaaS applications already containing embedded AI capabilities, manually assessing data practices, managing access controls, and identifying potential exposure risks may overwhelm security professionals and strain their finite resources.

With AI model training, vast amounts of information are processed so AI can identify patterns, trends, and insights. Through machine learning algorithms, AI models learn from data and adapt over time, refining their performance and accuracy and delivering better service to end users. However, there is a downside: by granting these models permission to learn from your business knowledge, you are essentially providing AI applications with the means to commoditize your organization's competitive edge.

Compounding this issue is the constantly evolving nature of SaaS providers’ Terms and Conditions (Ts&Cs). Employees often consent to Ts&Cs changes without fully understanding the potential consequences, putting their organization’s IP at an even greater risk of data leaks or misuse.

Moreover, Ts&Cs are frequently updated, and employees may be unaware of the new permissions these updates grant. For example, employees may accept various “write” permissions that allow the application to perform actions on their behalf, such as sending automated emails or managing their calendars.
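
In practice, such grants usually take the form of OAuth scopes, which can be audited periodically. The sketch below flags grants that include write-level scopes; the grants list and app names are hypothetical, while the scope strings are real Google Workspace scopes used purely as examples.

```python
# Sketch: flag third-party OAuth grants that include write-level scopes.
# The grants list and app names are hypothetical; the scope strings are
# real Google Workspace scopes, shown here only as examples.

WRITE_SCOPES = {
    "https://www.googleapis.com/auth/gmail.send",  # send email as the user
    "https://www.googleapis.com/auth/calendar",    # full calendar read/write
}

grants = [
    {"app": "ai-slide-designer", "user": "alice@example.com",
     "scopes": ["https://www.googleapis.com/auth/drive.readonly"]},
    {"app": "ai-email-assistant", "user": "bob@example.com",
     "scopes": ["https://www.googleapis.com/auth/gmail.send",
                "https://www.googleapis.com/auth/calendar"]},
]

for grant in grants:
    risky = sorted(set(grant["scopes"]) & WRITE_SCOPES)
    if risky:
        # In a real workflow this would open a ticket or notify the user.
        print(f"REVIEW: {grant['app']} can act as {grant['user']}: {risky}")
```

Re-running such an audit whenever a vendor’s Ts&Cs change helps catch newly requested permissions before they accumulate unnoticed.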

Another critical point is data retention: data collected by AI services may be stored for anything from short durations to extended periods. This raises concerns because stored data allows AI learning models to continually train on it. And given the recent surge in cyberattacks targeting the SaaS landscape, where breaches result in compromised information, data stored by AI services is susceptible to such incidents, too. This poses key questions for CISOs and security teams about where, when, and how their data is being used and stored.

Another AI-SaaS security consideration has to do with how certain AI applications leverage human validation to ensure the accuracy and reliability of the data they gather. This approach, often referred to as human-in-the-loop or human-assisted AI, uses human expertise to support the algorithmic decision-making process. While this results in higher accuracy for the AI model, it also means a human working for the GenAI provider can observe and access other companies’ potentially sensitive data and know-how.

SSPM’s Approach to Mitigating Emerging Threats

SaaS Security Posture Management (SSPM) solutions aim to ensure an organization’s SaaS stack is always secure while minimizing restrictions on SaaS usage itself. These solutions focus on uncovering shadow SaaS, fixing misconfigurations, finding and remediating security issues, and ensuring continuous monitoring of SaaS usage. Modern SSPM streamlines discovery, detection, and remediation on an ongoing basis while providing oversight of authorized AI models. SSPM’s approach to mitigating emerging threats, both in the general SaaS security domain and within AI-related SaaS, focuses on three main pillars.

Firstly, it emphasizes comprehensive AI discovery, providing tools for organizations to gain visibility into AI usage and associated risks, including shadow AI and AI integrated within SaaS applications.

Secondly, SSPM promotes end-user collaboration by streamlining remediation processes and involving employees in security practices.

Lastly, it addresses critical aspects of AI usage in SaaS environments, such as data storage, AI learning capabilities, and human oversight—enabling organizations to implement security measures to tackle these challenges.

Automation Alleviates Manual and Cumbersome Processes

Automation emerges as a crucial SSPM capability for managing AI adoption responsibly and securely amid these challenges. Security teams encounter significant difficulties in tasks such as uncovering shadow AI risks, identifying impersonator applications across expansive SaaS environments, and keeping up with updated Ts&Cs, primarily because discovering such capabilities is today a manual, cumbersome, and time-consuming process. By integrating automation into security processes, security teams can shift their focus from repetitive manual work to higher-value strategic initiatives.
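
As a small illustration of what such automation can look like, the sketch below watches a vendor’s terms page and raises an alert when its content changes, so a reviewer can check what new permissions an update introduces. The URL and state file are hypothetical placeholders; a production system would also diff and classify the change rather than just detect it.

```python
# Sketch: detect Ts&Cs changes by hashing the terms page content.
# TERMS_URL and STATE_FILE are hypothetical placeholders.

import hashlib
import pathlib

import requests  # third-party HTTP client: pip install requests

TERMS_URL = "https://vendor.example.com/terms"
STATE_FILE = pathlib.Path("terms_sha256.txt")


def terms_changed() -> bool:
    """Return True if the terms page changed since the last recorded check."""
    body = requests.get(TERMS_URL, timeout=10).text
    digest = hashlib.sha256(body.encode("utf-8")).hexdigest()
    previous = STATE_FILE.read_text().strip() if STATE_FILE.exists() else None
    STATE_FILE.write_text(digest)  # record the latest state for the next run
    return previous is not None and previous != digest


if __name__ == "__main__":
    if terms_changed():
        print("ALERT: Ts&Cs changed; review new permissions before users accept.")
```

Scheduled as a recurring job across every vendor in the inventory, even a simple check like this removes one of the most tedious manual reviews from the security team’s plate.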

The goal for organizations today is to fully capitalize on AI’s potential while prioritizing data privacy and control. This requires implementing robust automation and leveraging advanced solutions to gain the visibility and control needed to securely oversee and mitigate the dangers of risky or negligent AI SaaS usage.

By embracing continuous discovery and control capabilities, organizations can enjoy all that AI has to offer while securing themselves against AI SaaS threats.

Galit Lubetzky Sharon is the CEO of Wing Security. A retired colonel of the 8200 Unit, she has vast hands-on experience designing, developing, and deploying the IDF’s most vital cyber defense and offense platforms. She is a recipient of the prestigious Israel Defense Awards and has led large military organizations.

