Dr T.W. Kwan FBCS explores the challenges associated with shadow AI and identifies strategies to help business leaders prepare for an increase in shadow usage.

The technological evolution within businesses has always presented unique challenges. In the past, organisations grappled with managing end-user computing (EUC) or shadow IT, where employees used software and systems outside the IT department's purview. As artificial intelligence (AI) continues to reshape the business landscape, its growing presence in the workplace has given rise to a new challenge: shadow AI. This term refers to the use of AI tools and applications without explicit organisational oversight — a trend that, while showcasing adaptability and innovation, brings with it significant risks and governance challenges.

The need for shadow AI management

The integration of AI tools in business operations is accelerating rapidly. From simple chatbots to complex data analytics systems, these tools streamline processes and enhance decision making. However, this rapid adoption often lacks oversight, creating potential risks and governance challenges. Shadow AI is widely considered to pose even greater risks than shadow IT, and Forrester Research has warned that business leaders must prepare for a surge in ‘shadow usage’ as their workforce increasingly turns to individual AI tools to enhance productivity.

Real-world incidents underscore the urgency of addressing shadow AI. Major firms such as Apple, Amazon and JPMorgan, for instance, have restricted the internal use of ChatGPT over concerns about data privacy, security breaches and the potential leaking of sensitive information. This reaction highlights the tension between the innovative potential of AI and the governance challenges it presents. Gartner has likewise found that professionals increasingly use AI tools in the workplace without informing their employers. This trend can lead to gaps in accountability, data integrity issues and even legal consequences for businesses, particularly when proprietary or consumer information is involved.

Risks associated with shadow AI

Shadow AI, with its inherent complexity and unpredictability, can lead to several critical risks for organisations. The most prominent among these include:

  • Data privacy concerns: shadow AI can lead to significant breaches of data privacy. AI tools may process vast amounts of data, potentially including sensitive personal or organisational information. When these tools are used without proper governance, they risk unauthorised access to, or misuse of, this data, potentially leading to privacy violations
  • Regulatory compliance challenges: AI tools that are not designed with industry-specific regulations in mind can result in non-compliance, exposing organisations to legal issues or penalties. This is particularly concerning in industries such as finance and healthcare, which must adhere to strict regulatory standards
  • Security vulnerabilities: AI tools can introduce vulnerabilities into an organisation’s IT infrastructure. If these tools are not properly vetted and secured, cybercriminals can exploit them, leading to data breaches or other forms of cyber attack
  • Intellectual property problems: AI tools can also pose risks to intellectual property. For example, if employees use an AI tool to develop new products, the lack of formal oversight can lead to disputes over the ownership of those innovations. Additionally, if the AI tools are external, proprietary information may be exposed to third parties

Strategies for governance

Addressing the risks of shadow AI requires a comprehensive approach, combining policy development, employee education and technological oversight.

Effective AI governance starts with establishing clear policies that define acceptable use, data handling, privacy, compliance and security standards for AI tools. These policies should cover all aspects of AI use, from data input to output, and be capable of evolving with advancements in AI technology and changes in regulatory landscapes. It is crucial to regularly review and update these policies in collaboration with staff to ensure they remain relevant and effective.

Senior management and executives play a crucial role in AI governance. Their involvement ensures adequate resources are dedicated to it, including implementing technology solutions and providing employee training. Above all, visible management support is essential for enforcing AI policies effectively.

Collaboration between IT and other departments is vital for understanding their needs and finding suitable solutions. This involves educating employees about the risks of unsanctioned AI use and ensuring everyone understands the organisation's policies on AI. Regular workshops can train employees on approved IT solutions and their benefits, making them aware of the resources available to them. It is equally important to create an environment where employees feel comfortable discussing any AI tools they would like to use, along with their benefits and associated risks. Providing a centralised platform where employees can request new tools allows IT to evaluate and approve them where appropriate, ensuring a streamlined process for incorporating beneficial technologies.

Beyond governance policies, deploying technology solutions that can monitor, manage and report on the use of AI within the organisation can also be beneficial. This includes implementing security measures and controls to protect against vulnerabilities introduced by unauthorised tools. Maintaining blacklists of websites and SaaS tools that employees should not use (or, conversely, whitelists of approved tools) proves effective both in preventing the use of unauthorised tools and in alerting the organisation to attempted usage. Continuous monitoring is also essential to detect indications of shadow AI usage, as employee behaviours and workflows adapt constantly in tandem with AI tools. Effective observability solutions can offer valuable insight into the early signs of shadow AI use, helping to maintain control over the technological environment.
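
To make the blacklist approach concrete, the short sketch below shows how a web proxy or gateway hook might flag requests to unapproved AI services. It is a minimal sketch only: the domain names, function names and the alerting mechanism are illustrative assumptions, not a specific product's API or any organisation's actual policy.

    # A minimal sketch, assuming a proxy or gateway hook where outbound
    # requests can be inspected. Domains and alerting are hypothetical.
    from urllib.parse import urlparse

    # Hypothetical blacklist of AI SaaS domains not approved for use
    BLOCKED_AI_DOMAINS = {
        "chat.openai.com",
        "claude.ai",
        "gemini.google.com",
    }

    def is_blocked(url: str) -> bool:
        """Return True if the URL's host matches, or is a subdomain of, a blocked domain."""
        host = (urlparse(url).hostname or "").lower()
        return any(host == d or host.endswith("." + d) for d in BLOCKED_AI_DOMAINS)

    def check_request(user: str, url: str) -> None:
        """Flag a request to an unapproved AI tool for follow-up."""
        if is_blocked(url):
            # In practice this would raise an alert in a SIEM rather than print
            print(f"ALERT: {user} attempted to reach an unapproved AI tool: {url}")

    check_request("a.user", "https://chat.openai.com/c/123")   # flagged
    check_request("a.user", "https://intranet.example.com/")   # allowed

In practice such checks usually live in a secure web gateway or CASB rather than bespoke code, but the principle is the same: compare requested hosts against a policy list and alert on matches.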

Regular risk assessments and audits

Regular risk assessments of AI tools are necessary to identify potential vulnerabilities. This means assessing the tools currently in use for privacy, compliance and security risks, and conducting regular audits to confirm that AI tools are used in accordance with established policies. It is also worth periodically scanning the technological environment for any unsanctioned AI tools that may have crept in. Together, these activities mitigate risk and support the secure and compliant use of AI technologies within the organisation.
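
As an illustration of what such an audit might look for, the sketch below scans a proxy log for requests to known AI tools that are not on an approved list. The log format, column names and both tool lists are assumptions made for the example, not a standard format or a recommended catalogue.

    # A minimal audit sketch, assuming proxy logs exported as CSV with
    # 'user' and 'host' columns. Tool lists are illustrative assumptions.
    import csv
    from collections import Counter

    KNOWN_AI_TOOLS = {"chat.openai.com", "claude.ai", "gemini.google.com", "copilot.example.com"}
    APPROVED_AI_TOOLS = {"copilot.example.com"}  # hypothetical sanctioned tool

    def audit_proxy_log(path: str) -> Counter:
        """Count requests per (user, host) to known AI tools that are not approved."""
        findings: Counter = Counter()
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                host = row["host"].lower()
                if host in KNOWN_AI_TOOLS and host not in APPROVED_AI_TOOLS:
                    findings[(row["user"], host)] += 1
        return findings

    for (user, host), count in audit_proxy_log("proxy_log.csv").most_common():
        print(f"{user} -> {host}: {count} requests")  # candidates for follow-up

A report like this is a starting point for conversation rather than enforcement: repeated hits against the same tool may indicate a legitimate need that the approved toolset does not yet meet.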

Conclusion

Shadow AI, while a testament to the adaptability and innovative spirit of the workforce, presents significant risks that organisations must proactively manage. Organisations can navigate these challenges by establishing robust governance frameworks, involving senior management, regularly assessing risks, educating employees and leveraging appropriate technology. As AI continues to permeate the workplace, a proactive and adaptive governance approach is crucial for harnessing its potential safely and responsibly.
