AI 101: Is AI good or bad? Exploring AI’s risks and concerns
- Angela Novelli
- Apr 16
- 4 min read

This week in our “AI 101” blog series, we will explore a big question that many people have been pondering since the emergence of ChatGPT in 2022: Is AI good or bad?
It comes as no surprise that these advanced technologies, which are being incorporated into so many facets of our lives, raise concerns. Only about 32% of people in the U.S. say they trust artificial intelligence, compared with roughly 49% of people worldwide.
When it comes to AI, concerns about its safety and capabilities are not misplaced. There are real risks involved with such powerful technologies, especially when they are used irresponsibly or unethically. Let’s look at some of AI’s risks and concerns that you should know before using it for personal and professional work.
Bias in AI algorithms
This is one of the biggest concerns people have about AI. After all, the data that AI algorithms are trained on is created by humans, and it is not uncommon for human bias to make its way into these systems. It is usually not even intentional, but it happens nonetheless, and it can produce harmful outcomes for certain populations. There have been cases of gender discrimination, where men’s applications were favored over women’s, and of racial discrimination in healthcare, where diagnostic systems returned lower-accuracy results for historically underserved populations.
Bias is a real risk when it comes to AI; however, it does not have to be. With the right strategies for responsible and ethical use, organizations can work to mitigate bias in their systems. This is where human oversight matters, to pinpoint issues as they arise, and where representation of diverse populations is crucial when training AI algorithms.
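One simple form of that oversight is auditing a model’s decisions by group. The snippet below is a minimal sketch only: the column names, the sample data, and the 80% threshold (a common rule of thumb sometimes called the “four-fifths rule”) are illustrative assumptions, not part of any particular system.

```python
import pandas as pd

def audit_selection_rates(df: pd.DataFrame) -> pd.Series:
    """Return the approval rate for each demographic group in the data."""
    return df.groupby("group")["approved"].mean()

def flag_disparity(rates: pd.Series, threshold: float = 0.8) -> bool:
    """Flag if any group's rate falls below 80% of the highest group's rate."""
    return (rates.min() / rates.max()) < threshold

# Illustrative decision log; in practice this would come from the model's output.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0],
})

rates = audit_selection_rates(decisions)
print(rates)
if flag_disparity(rates):
    print("Possible disparity - review the model and its training data.")
```

A check like this does not fix bias on its own, but it gives human reviewers a concrete signal to investigate before the system causes harm.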
Becoming too reliant on AI
There is a risk of humans becoming too reliant on AI for much of their work. With tools like ChatGPT able to produce images, text, video, and more within seconds, it is becoming commonplace to let them. However, this can erode critical thinking skills and creativity.
For example, many software developers now use AI to generate code and solve problems. While this might boost efficiency in the short term, it can have negative long-term consequences for both individuals and organizations. Leaning on AI continuously can weaken developers’ coding and critical thinking skills, since they exercise their own judgment less. It also means trusting that the AI’s output is correct, even when developers have not verified it and cannot explain the logic behind it.
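One practical habit that counters this is treating AI-generated code like any other untrusted contribution: read it, and write a quick test before relying on it. The sketch below assumes a hypothetical AI-suggested helper (the function and its bug are invented for illustration) and shows how a few lines of plain unittest can expose an unhandled case.

```python
import unittest

# Hypothetical AI-suggested helper: average value of a customer's orders.
def average_order(orders: list[float]) -> float:
    return sum(orders) / len(orders)  # fails on an empty list

class TestAverageOrder(unittest.TestCase):
    def test_typical_case(self):
        self.assertAlmostEqual(average_order([10.0, 20.0]), 15.0)

    def test_empty_orders(self):
        # A human reviewer decides the intended behaviour; here we expect 0.0.
        # Running this test exposes that the suggested code never handled it.
        self.assertEqual(average_order([]), 0.0)

if __name__ == "__main__":
    unittest.main()
```

The point is not the specific example but the habit: verification keeps the developer’s own reasoning in the loop instead of outsourcing it entirely.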
Some tech experts worry that AI will dull our deep thinking and empathy skills as people use it for research and relationship-building. Some even form attachments to AI personas, drawn to relationships that feel more controllable, however artificial. This is a significant risk, but becoming aware of it is the first step toward mitigating it. Taking steps to ensure AI is used in ways that enhance skills rather than replace them will go a long way toward avoiding this risk.
Cybersecurity threats and attacks
Cybercriminals can manipulate AI tools and use them to steal information, money, and even identities. These tools have been used to generate cloned voices, convincing phishing emails, and fake identities that have compromised large amounts of valuable data. Without proper security, AI systems are themselves vulnerable to cyberattacks, so regular updates and preparation for threats are necessary.
AI tools are being used to generate fake passports, licenses, and other counterfeit documents convincing enough to bypass verification processes on major platforms. Researchers have also gotten AI to produce malware capable of breaching systems like Google’s Password Manager by engaging it in role-playing scenarios. Some organizations have told their employees not to use tools like ChatGPT over data confidentiality concerns. Above all, remember that AI should be used responsibly: always treat sensitive information with caution and put safeguards in place against cyber threats.
Harming the environment
Just like any form of technology, AI requires resources to power it. However, AI uses more resources than most technologies, with large data centers requiring enormous amounts of water for cooling. Every 10 to 50 prompts to a tool like ChatGPT uses roughly 500 milliliters of water, the equivalent of a standard water bottle. That adds up quickly given the number of people prompting these tools each day. And that is not to mention that training a single natural language processing model can emit around 600,000 pounds of carbon dioxide into the atmosphere.
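To see how those per-prompt figures add up, here is a rough back-of-the-envelope calculation. Only the 500 milliliters per 10 to 50 prompts comes from the estimate above; the daily prompt volume is a purely hypothetical number chosen for illustration.

```python
# Back-of-the-envelope scaling of the ~500 mL per 10-50 prompts estimate.
ML_PER_BATCH = 500                    # millilitres of cooling water per batch
PROMPTS_PER_BATCH = 25                # midpoint of the 10-50 prompt range
ASSUMED_DAILY_PROMPTS = 100_000_000   # hypothetical daily prompt volume

litres_per_day = ASSUMED_DAILY_PROMPTS / PROMPTS_PER_BATCH * ML_PER_BATCH / 1000
print(f"Roughly {litres_per_day:,.0f} litres of water per day "
      f"at {ASSUMED_DAILY_PROMPTS:,} prompts per day")
# -> Roughly 2,000,000 litres per day under these assumptions.
```

The exact figures are uncertain, but the shape of the calculation is the point: small per-prompt costs become very large at global scale.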
Some data centers are exploring renewable energy for AI systems, which can significantly reduce their carbon footprint. Simplifying model architectures and using energy-efficient AI models are other, more sustainable strategies. It all comes back to using AI responsibly, in ways that are good for the planet.
Back to the original question: is AI good or bad? There is no straightforward answer, as it all depends on how it is used and with what intentions. One thing we do know is that it is here to stay. AI holds significant potential and offers many benefits for society, but understanding its risks is the best way to mitigate them. Educating others and applying critical thinking will help us implement AI safely and responsibly.
Sources: