
Exploring State Government AI Policies and Proposals: Delaware and Ohio

  • Writer: Angela Novelli
  • Oct 3
  • 4 min read

AI use across industries, states, and regions of the world is evolving and being regulated in different ways. AI policies may look different for different organizations or types of businesses, but some factors, such as privacy and transparency, should always be included. 


In this article, we will examine AI policies for state governments, this time focusing on Delaware and Ohio. In addition, we will delve into a new proposal from Ohio that could expand what is prohibited when it comes to the use of AI. As advanced technology like AI continues to grow and shape the world around us, it is more important than ever to focus on ensuring it is safe and responsible to use in our everyday lives. 



Delaware’s AI Policy


The AI usage policy for Delaware government agencies details specific guardrails for the use of AI, especially GenAI. With online GenAI tools such as ChatGPT, government employees must be vigilant in protecting their own, their agency’s, and the public’s sensitive information. Let’s take a closer look at some of the highlights of this policy and how they factor into the responsible use of AI. 


  • Using AI on State devices must be for State purposes: This guideline emphasizes that AI tools used at government agencies must be used for work alone. It matters not just as a way to promote productivity and efficiency in the workplace, but also as a form of security, given the sensitive nature of the work and information handled within these agencies. Spelling this out tells state employees exactly what is allowed, leaving no confusion and creating more accountability. 


  • Always review outputs from AI tools: This policy encourages responsible use of GenAI, verifying all outputs for inaccuracies, hallucinations, bias, or other issues. Reviewing GenAI outputs is a crucial part of AI use for any organization or individual, not just state agencies, but the stakes are higher here. When dealing with public services, all information presented should be free of bias against any group of people and should always be accurate and up to date. 


  • Identifying AI-generated content: Any content generated with GenAI, including text, audio, video, or images, must be identified as such in communication channels. This is an important aspect of transparency and of ensuring public trust. If content is created using this technology, telling viewers up front prevents mistaken assumptions that could lead to issues down the line. 



Ohio’s AI Policy


The AI policy for the state of Ohio governs the responsible development, implementation, and use of AI by state agencies, ensuring that the public’s data and information are well-protected. Here are a few of the many important factors listed in this policy: 


  • Reviewing AI outputs: Similar to the Delaware policy we looked at previously, this policy requires reviewing any outputs produced by AI. That means checking for inaccuracies and ensuring that no information could compromise privacy or security. The policy specifically requires a human verification process for decisions made by AI that have a legal, financial, human resources, legislative, organizational, or regulatory impact, which is a very important addition. AI should never be allowed to make the final decision when it comes to sensitive information. 


  • Focusing on ethical considerations: State agencies utilizing AI must provide training that focuses on the ethical considerations of the technology and its outputs. Training and education are crucial when adopting any new technology, and it is especially important to raise awareness of the ethical issues that could arise and should be prevented. For example, AI should not be used to impersonate anyone without permission, and any biased outcomes must be corrected.


  • Conducting a risk assessment: A risk assessment should be conducted for any proposed AI solution. This means examining the possible risks associated with the technology prior to implementation, including exploitation by malicious actors or inadvertent misuse by authorized users. Such a review of the challenges that could arise helps determine whether it is worth moving forward and what security measures can and should be put in place before doing so.



Ohio Proposal for AI Images


A regulatory proposal for AI imagery has emerged in Ohio, one that affects transparency and accountability in the use of AI. AI images can be fairly easy to create, which can prove dangerous when it comes to deepfakes and other impersonations made without permission. The new bill would require all AI images to be labeled as AI-generated, and would make it a felony to use generated images in a way that is harmful or obscene. It would also make it illegal to use a generated voice, image, or likeness in an attempt to defraud or financially pressure anyone, among other harmful acts. 


This bill could bring big regulatory changes to AI, shifting the landscape for organizations in the state of Ohio in different ways. Its intent, to protect the public and ensure AI is used responsibly and safely, reflects a crucial aspect of AI regulation. 



AI policy is all about governing AI use and ensuring all users focus on being responsible with advanced technology. Sedna Consulting Group has a team of AI experts with experience in developing AI policy for different organizations. Contact us at info@sednacg.com to learn more.


“With great power comes great responsibility, and that responsibility comes in the form of security and privacy. … We’re very used to balancing speed and cool new technology with safety.”

– Suzie Compton, VP Product Management at Salesforce








