
Trust and AI

Posted On 03/14/2024

AI

AI is an increasingly discussed technology, with many people and companies exploring how to leverage its capabilities. However, few truly understand how AI works, what risks it carries, and how bad actors can misuse it.

In a world where so many people consume news and information online without taking the time to fact-check, the rise of AI deepfakes poses an ever-increasing risk. Since elected lawmakers around the world are the target of many of these fakes, there is growing bipartisan support in the United States for more regulation of deepfakes and the use of AI. Across the country, states have introduced and passed bills that aim to regulate the use of AI in campaigns. These bills range from requiring candidates to tag any AI-generated media to banning the use of deepfakes during certain windows before an election. Technology and social media companies are also involving themselves in the regulation of AI content; however, they have not yet produced a comprehensive plan to deal with its risks. For example, Meta’s independent oversight board recently criticized the company’s policy on AI-generated content, calling it “incoherent and too narrow.” Since social media is the primary venue for election-related deepfakes, lawmakers are urging these companies to create formal policies.

NPR recently published an article discussing the risks AI poses to election cycles and what regulators are doing to manage them. While increased regulation is an important step in addressing the risks of AI, the public also needs greater awareness that these deepfakes exist and a more critical approach to media consumption. Helping people recognize and ignore them, as has been done with email phishing attempts, will reduce the potential harm associated with AI deepfakes.

People around the world have begun losing trust in AI, and disinformation and AI-generated content are major concerns on the public’s mind. The ability to create such realistic content with generative AI technologies makes the spread of disinformation increasingly easy and convincing. To help the general public trust these technologies, lawmakers and companies using AI must develop comprehensive plans to address these areas of concern. As a company that is exploring and investing in these technologies, as well as advising our clients on how to leverage them, Simatree must take these risks into account as well.

