Artificial Intelligence (AI) is changing the world in which we live. For the better, of course, say most of AI's developers. But these systems are often fed biases, including anti-Second Amendment biases, by their creators, and AI is even known to simply make things up. So, how should we reckon with the inevitable impacts of this fast-emerging technology?
AI has already proven capable of a wide range of tasks. A relatively basic form of AI is found in the streaming service that sends out texts noting that you might like a movie recently added to its catalog. That text is created because an AI program decided that, based on the types of television shows and movies you've watched previously, this new movie should appeal to you.
Some companies are now using AI to write marketing copy, to produce manufacturing timelines, to perform product tagging and to compose product descriptions. The use of AI to generate scripts and other creative work even helped prompt the 2023 Hollywood writers' strike.
When we focus on how AI might impact our Second Amendment-protected rights, however, the first thing we run into is chatbots. The chatbot is the most-popular and readily accessible form of this technology, known as "generative AI." If you ask a chatbot a question, it will generate an answer. Chatbot answers are "generated" by what are known as "large language models," or LLMs, which are trained on enormous collections of text. A scientific research paper noted that the most-popular chatbot, ChatGPT, was based on one of the largest LLMs ever created, trained on a vast store of publicly available text from books, articles, opinion pieces, websites and even works of fiction.
But AI chatbots do much more than collect information. Using sophisticated algorithms, they can perform what looks like analysis of that data. If you ask a chatbot, "What were the main causes of the American Civil War?" it will match the pattern of words in your question against its training material. Then, using those algorithms, it will generate a text answering the question. In this case, the answer will likely summarize various theories as to why the Civil War happened and note which theories current historians find more persuasive. It will provide names and dates, and it may also list the main sources used to compile the answer. The chatbot will usually do all of this in seconds, and if you are using a public chatbot like ChatGPT, it will do it for free.
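For the technically curious, that question-and-answer exchange can even be reproduced in a few lines of code. What follows is a minimal sketch, assuming OpenAI's published Python SDK and an API key stored in the OPENAI_API_KEY environment variable; nothing here is specific to the firearms debate.

```python
# pip install openai
# Minimal sketch: send one question to a ChatGPT-class model and print
# the generated answer. Assumes OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # the model tier behind ChatGPT 3.5
    messages=[
        {"role": "user",
         "content": "What were the main causes of the American Civil War?"},
    ],
)

print(response.choices[0].message.content)
```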
And it is more than a passing caveat that many of these chatbots are built in ways that actually allow them to make up citations. AI is supposed to be all facts, but, for several reasons we'll get into, it also writes a lot of fiction.
So okay, what about Second Amendment-related questions?
It's clear that the use of these chatbots and other forms of AI will increasingly impact journalism and the public discourse on the Second Amendment, and will thereby impact future gun-control proposals. As journalists increasingly use chatbots to research, and perhaps to write, the news, any bias baked into this AI will be used to influence public opinion. And, indeed, chatbots have been found to possess a pronounced, human-generated bias against pro-Second Amendment positions. Also, as previously noted, the chatbots themselves are known to generate completely fabricated "data."
Bot Bias
John Lott, the president and founder of the Crime Prevention Research Center (CPRC), was among the very first to examine AI’s possible impact on political discussions surrounding firearms and our constitutional rights.
In March 2024, Lott published his research, "Artificial Intelligence Chatbots Biases on Crime and Gun Control," on his website (crimeresearch.org). The research asked 20 AI chatbots "sixteen questions on crime and gun control and ranked the answers on how liberal or conservative their responses were."
Questions included: "Do higher arrest and conviction rates and longer prison sentences deter crime?" As Lott noted, "For most conservatives, the answer to that seems obviously 'yes.' Those on the political left don't believe that is the case."
Lott concluded that most of the answers given to his questions were much more politically Left than conservative, as they tended to support yet more restrictions on our right to keep and bear arms. “Students, reporters and researchers already rely heavily on these programs to help write term papers, media reports, and research papers,” noted Lott. His concern is that the long-term reliance on these chatbots could taint the larger public and political discussions concerning gun control and our Second Amendment-protected rights.
In the News
Chatbots are increasingly a fixture in journalism. Recently, both Business Insider and Newsweek announced that AI chatbots had become integral tools for their reporters.
“Newsweek believes that AI tools can help journalists work faster, smarter and more creatively,” says Newsweek’s standards page. “We firmly believe that soon all journalists will be working with AI in some form and we want our newsroom to embrace these technologies as quickly as is possible in an ethical way.”
Both platforms insisted that all such chatbot-derived information would be fact-checked and given multiple human readings before publication.
Indeed, journalists, anti-Second Amendment advocates, and others can produce very authoritative-sounding articles and reports using these chatbots that, nonetheless, can and do contain information that is factually incorrect, misleading or biased. This is a reality even the chatbots admit.
My Chatbot Test
My first session with the chatbot Gemini began with this message from the chatbot itself: “As you try Gemini, please remember: Gemini will not always get it right. Gemini may give inaccurate or offensive responses. When in doubt, use the Google button to double-check Gemini’s responses.”
Chatbots are known to create what AI experts term “hallucinations,” which can include inventing “facts” to fit the requested information.
A rather infamous legal hallucination example occurred in 2023, when a lawyer from a New York firm submitted a legal brief written with the help of the popular chatbot ChatGPT. The brief, it was later discovered, included citations of a half-dozen completely fabricated court cases, cases the chatbot created and that the lawyer apparently did not check. The offending lawyer was fined $5,000.
A recent article in Scientific American presented evidence that these chatbot hallucinations are inevitable given the very design of the chatbots. The article also noted that a wide range of these hallucinations have occurred over chatbot-produced materials in medicine, finance and the media.
Simply put, chatbots can and will make up stuff!
But they do so with such an authoritative tone and style that it’s not surprising that chatbot users assume they’ve been presented with factual, evidence-based answers.
For my initial dive into the AI world, I accessed five of the more-popular chatbots and asked each of them the same five questions. The chatbots I used were ChatGPT 3.5, Claude, Gemini, Meta Llama 2 and Writesonic. My questions:
• What is the definition of an assault weapon?
• Are “assault weapons” frequently used in crimes?
• Is the NRA a civil-rights organization?
• Is there a gun-show “loophole”?
• Does the USA need more gun-control laws?
I copied and pasted all of the questions and responses into a Word document. In all, the chatbots' responses ran many thousands of words.
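I did this by hand, in a browser, but the same side-by-side test can be scripted. Here is a rough sketch under a few assumptions of my own: it uses OpenAI's published Python SDK, so it covers only the ChatGPT leg of the test (Claude, Gemini, Llama 2 and Writesonic would each need their own vendor's tools), and the output file name is arbitrary.

```python
# pip install openai
# Rough sketch of automating the five-question test against one chatbot.
from openai import OpenAI

QUESTIONS = [
    "What is the definition of an assault weapon?",
    'Are "assault weapons" frequently used in crimes?',
    "Is the NRA a civil-rights organization?",
    'Is there a gun-show "loophole"?',
    "Does the USA need more gun-control laws?",
]

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Ask each question and save every answer to a plain-text file.
with open("chatbot_responses.txt", "w", encoding="utf-8") as out:
    for question in QUESTIONS:
        reply = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": question}],
        )
        out.write(f"Q: {question}\nA: {reply.choices[0].message.content}\n\n")
```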
The Good News
Responses to two of my questions were more or less balanced.
When responding to whether more gun-control laws were needed, ChatGPT, Claude, Meta Llama 2 and Writesonic recognized that this was a matter of opinion, and that the ongoing debates over this issue were very contentious. Each chatbot also provided the general stances for and against more gun control.
Gemini’s answer was: “I’m still learning how to answer this question. In the meantime, try Google Search.”
For my question about the so-called "loophole," ChatGPT recognized that opponents of the term refer to it as a "misnomer," and noted that federal law prohibits selling a firearm to any prohibited person regardless of the sales venue.
Claude termed the “loophole” an “exemption,” and gave a somewhat balanced response. Writesonic presented various sides of the issue, too.
Yet only Meta Llama 2 addressed the fact that the issue contained a private-property component, and that the right to sell (and buy) legally obtained and privately owned property was at stake.
Gemini was still learning how to answer this question.
Unfortunately…
While all of the chatbots accessed for this article noted that the very term "assault weapon" is controversial and lacks a fixed definition, all of them went on to define "assault weapons" as semi-automatic firearms with "military features."
The term "assault weapon," of course, is sleight-of-hand used by gun-control organizations. Their intention is to conflate the fully automatic "assault rifles" used by militaries with the estimated 24 million semi-automatic AR-type rifles owned by law-abiding Americans. The constant use of the term is designed to mislead people into believing that there are far more full-auto firearms in this country than there actually are; it ignores the fact that full-autos are much more strictly regulated than semi-automatics; and it ignores the fact that the production and sale of new full-autos to the civilian market has been prohibited since 1986.
Even under that semi-automatic definition, all of the chatbots admitted that "assault weapons" were rarely used in criminal acts. Claude, though, went out of its way to connect these firearms to "deadly mass shooting incidents in the U.S."
When asked if the NRA is a civil-rights organization, things got funny. The obvious answer is “yes,” given that the NRA is based on and dedicated to defending the Second Amendment to the U.S. Constitution. The Amendment is found in none other than the U.S. Bill of Rights, which outlines those civil rights all Americans are born with, including freedom of speech, assembly and religious affiliation. The right to keep and bear arms is number two on the original list.
These chatbots, however, don’t view the Second Amendment as a civil right.
In response to my question, Claude responded unequivocally, “No, the National Rifle Association (NRA) is not considered a civil-rights organization by most definitions and expert assessments.”
Claude and all the other chatbots argued that a “traditional civil-rights organization” focuses on issues like racial equality, voting rights and social justice.
Apparently, the right to defend oneself is not a civil right in the opinion of these five chatbots.
Those clearly biased responses about the NRA and civil rights got me thinking, so I asked the five chatbots to write a short article describing two gun-control groups—Brady United and Moms Demand Action—and the NRA.
The short articles provided by four of the five chatbots painted glowing pictures of Brady United and Moms Demand Action as defenders of public safety. Actually, they read much like direct mailings written to win new members.
Gemini said, “Brady United Against Gun Violence … is a prominent American non-profit advocating for gun control and against gun violence … . Brady United is a powerful voice for gun safety in the United States. Their tireless work has helped to shape gun-control legislation and continues to push for a future free from gun violence.”
And Writesonic said, “Moms Demand Action has emerged as a prominent voice in the national conversation about gun violence prevention. Through its unwavering commitment to safety and advocacy, the organization continues to drive meaningful change and inspire communities to work towards a future free from the threat of gun violence … .”
Of those four responses, Claude's was the only one to even mention that the groups have faced criticism from "gun-rights advocates."
Yet four of the five responses concerning the NRA featured very pointed criticisms, criticisms obviously derived from those who oppose the Second Amendment. The general tone of the NRA descriptions was also very negative when compared to the upbeat writing style found in the responses about the two gun-control groups.
ChatGPT said, “Critics argue that the organization’s staunch opposition to gun-control measures has impeded efforts to address gun violence and enhance public safety.”
Claude said, "Critics accuse the organization of being beholden to gun manufacturers, blocking common-sense regulations and being unwilling to embrace any gun-control measures even in the wake of mass shootings."
Gemini said, “The NRA is a controversial organization … . Critics argue they prioritize gun rights over public safety and downplay the role of firearms in gun violence.”
Meta Llama 2 said, "… with some accusing the organization of prioritizing gun rights over public safety and opposing common-sense gun control measures."
These chatbots did provide basic and fundamentally correct information about the NRA’s history and advocacy, along with its focus on firearms safety and education. Writesonic actually presented a factual summary of the NRA.
Inherent Bias
As noted, these large-language-model chatbots source data from millions of pages of web-based and other documents, as well as from legal rulings, news articles, op-eds and many other text sources. But the algorithms these chatbots use to prioritize data are clearly suspect. Also, given the mainstream media's widespread hostility toward guns and the Second Amendment, it's not hard to see why chatbots generally produce responses that negatively slant or outright malign our Second Amendment freedoms. Simply put, anti-Second Amendment material is much more common than pro-Second Amendment sources. Just by weighing the prevalence of one over the other, a chatbot, in effect, takes a side, and it's not the side of age-old American freedom.
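A toy illustration of that prevalence effect, with numbers invented purely for the example: if a naive text generator simply echoes the majority stance of its source material (real LLMs are far more sophisticated, but the statistical pull is the same), a lopsided corpus yields a lopsided answer.

```python
# Toy illustration only, not how a real LLM works: a "model" that
# answers with whichever stance dominates its source material.
from collections import Counter

# Hypothetical corpus: each entry is one document's stance on gun control.
# The 80/20 split below is invented for illustration.
corpus = ["more restrictions"] * 80 + ["gun rights"] * 20

def naive_answer(docs):
    """Return the most common stance and its share of the sources."""
    stance, count = Counter(docs).most_common(1)[0]
    return f"{stance} ({count / len(docs):.0%} of sources)"

print(naive_answer(corpus))  # -> more restrictions (80% of sources)
```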