Shadow AI in Local Government: The Importance of AI Policies and Training
- Angela Novelli
- Jun 6
- 3 min read

More and more local government agencies are incorporating Artificial Intelligence into their operations, increasing efficiency and productivity and even improving accessibility for citizens who use government services. However, as these organizations gain access to more AI tools, it is critical to establish proper usage policies and employee training. Without guidelines in place, agencies risk unsanctioned AI use, also known as shadow AI.
What is Shadow AI in Local Government?
Several years ago, the term “shadow IT” emerged to describe employees’ unauthorized use of technology to evade restrictions put in place by IT departments. Shadow AI is the same concept applied specifically to Artificial Intelligence: the ungoverned use of AI technologies without the knowledge of the IT department.
Studies suggest that about 50% of employees are shadow AI users. Contributing factors include the ease of access to AI tools and government agencies’ own encouragement to adopt AI for greater efficiency and productivity. Without proper training and governance in place, some public sector employees may begin exploring AI tools on their own, introducing several risks and concerns.
What are the Risks of Shadow AI in Local Government?
Unchecked AI adoption carries transparency, security, and decision-making risks, including potential breach costs and legal exposure stemming from unsound policies or data leaks. Without oversight to curb shadow AI, agencies face the following risks:
Threats to Data Privacy and Security: Shadow AI can expose sensitive data belonging to both the organization and the public it serves. This can lead to significant legal issues from noncompliance with data protection laws and a decline in public trust.
AI Compliance Risks: Unauthorized use of AI could unintentionally violate state and even federal AI regulations.
Concerns Regarding Bias and Ethics: Proper guidelines for AI technologies must be in place to mitigate potential bias. Because shadow AI is unsanctioned, it increases the risk of reinforcing those biases and creates a lack of accountability. Ethical issues like these seriously erode public trust.
Potential Misinformation and Over-Reliance: Shadow AI increases the risk of misinformation being released, which can have serious consequences. Employees using shadow AI may also become too reliant on AI tools; without oversight, that over-reliance can lead to flawed decision-making.
How Can Government Agencies Confront the Risks of Shadow AI?
Potential solutions for eliminating shadow AI focus mostly on training and policy-making. Simply telling employees not to use AI tools without permission may not be enough; agencies need strong governance that establishes specific guidelines, boundaries, and penalties for non-compliance.
One of the most important solutions is employee training, which should be a top priority with any AI implementation, especially within local government. Public sector employees should be made aware of all the potential risks associated with AI, particularly unauthorized use. Educate employees about best practices and responsible AI use, incorporating lessons on ethics.
One example of a city moving forward with AI ethics training comes from Indiana. The city of Indianapolis and Marion County, Ind., have teamed up with the nonprofit InnovateUS to provide AI training for city and county employees in an effort to advance responsible and ethical AI use within government. The partnership followed a survey revealing that most of these employees were unclear on their own agencies’ AI policies, highlighting the need for proper training. It will provide free introductory AI training and a data classification course for employees to qualify for an M365 Copilot license.
The emergence of shadow AI, characterized by its unseen usage outside of established organizational oversight, requires an urgent and proactive response. Government agencies should develop and implement comprehensive and effective AI governance frameworks in order to address the risks posed by shadow AI. Contact one of our AI experts to learn more about how Sedna can assist with innovative AI solutions.