Because large language models (LLMs) ingest vast amounts of data and continually generate new data, GenAI applications can exacerbate security and privacy risks. Organizations will need to take a multifaceted approach to building a responsible AI system. Additionally, stakeholders will need to work together to comprehensively assess the impact of GenAI on enterprise cybersecurity and find solutions to address the issues.
Mitigating Shadow AI Risks by Implementing the Right Policies
20.03.2024
With the rise of artificial intelligence, security is becoming increasingly important in every organization, and a new problem has emerged: "shadow AI" applications that must be checked for compliance with security policies, writes Kamal Srinivasan, senior vice president of product and program management at Parallels (part of Alludo), on the Network Computing portal.
The shadow IT problem of recent years has evolved into the shadow AI problem. The growing popularity of large language models (LLMs) and multimodal models has led product teams within organizations to use these models to build use cases that improve productivity. Numerous tools and cloud services have emerged that make it easier for marketing, sales, engineering, legal, HR, and other teams to create and deploy generative AI (GenAI) applications. However, despite the rapid adoption of GenAI, security teams have yet to work out its implications and the corresponding policies. Meanwhile, the product teams building these applications are not waiting for security teams to catch up, creating potential security issues.
IT and security teams are grappling with unauthorized applications that can lead to network intrusions, data leaks, and disruptions. At the same time, organizations must avoid an overly rigid approach that could stifle innovation and block breakthrough product development. Enforcing policies that simply prevent users from experimenting with GenAI applications will hurt productivity and lead to further silos.