When AI Gets It Wrong
The Truth About Misinformation and Misidentification in Artificial Intelligence
An article from ChatGPT
This post was written by ChatGPT in response to a question about me. On Google I am all over the first page of results, especially in the images, so I found it very concerning that when I asked ChatGPT about myself, it showed my photo but aligned it with other people. Here is what it had to say about that.
ChatGPT Writes...
Here is the first part of the response, which very clearly used my image.

Full Response Copied from AI
Here are a few notable individuals named **Derek Booth**:
---
## 1. **Derek Booth** – Geomorphologist & Academic
* **Affiliation**: Bren School of Environmental Science & Management, University of California, Santa Barbara; adjunct professor at University of Washington ([bren.ucsb.edu][1]).
* **Expertise**: Geology and geomorphology, with career spanning public agencies, academia, and the private sector. Former research professor at University of Washington; past president of Stillwater Sciences; senior editor for *Quaternary Research* journal ([bren.ucsb.edu][1]).
* **Education**: Ph.D. in Geological Sciences from University of Washington; M.S. from Stanford; B.A. from UC Berkeley ([bren.ucsb.edu][1]).
* **Background**: Born 7 April 1953, Derek Blake Booth is heir presumptive to the Booth baronetcy of Allerton Beeches ([en.wikipedia.org][2]).
---
## 2. **Derek Booth** – Ice Hockey Player & Coach
* **Nationality**: Canadian (Niagara Falls, Ontario), born 19 July 1970 ([eliteprospects.com][3]).
* **Career**: Defence player across various North American minor leagues (OHL, IHL, ECHL, AHL, SPHL), also played in Austria and Germany ([hockeydb.com][4]).
* **Coaching**: Led Fayetteville FireAntz (SPHL) and Bloomington PrairieThunder (UHL/IHL) teams ([hockeydb.com][4]).
---
## 3. **Derek Booth** – Cybersecurity / Law Enforcement Professional
* **Role**: ATSAIC (Assistant to the Special Agent in Charge) with the U.S. Secret Service.
* **Expertise**: Cybercrime, digital forensics, member/founder of Mountain West Cyber Fraud Task Force (2012–present) ([cybersecuritysummit.com][5]).
---
## 4. **Derek Booth** – Corporate Leader (GridPoint)
* **Position**: CEO of GridPoint, energy efficiency tech firm; previously COO and Chief Customer Officer ([linkedin.com][6]).
* **Experience**: 20+ years in service delivery and transformational leadership in energy/tech sectors ([linkedin.com][6]).
---
### ✅ Which Derek Booth interests you?
* **Science & environment** → the distinguished geomorphologist (1).
* **Sports history** → the ice hockey athlete and coach (2).
* **Cybersecurity and law enforcement** → the Secret Service investigator (3).
* **Technology & energy industry** → the corporate executive (4).
Let me know which one you'd like to delve into further!
[1]: https://bren.ucsb.edu/people/derek-booth?utm_source=chatgpt.com "Derek Booth | UC Santa Barbara Bren - Bren School of Environment"
[2]: https://en.wikipedia.org/wiki/Derek_Booth?utm_source=chatgpt.com "Derek Booth"
[3]: https://www.eliteprospects.com/player/30875/derek-booth?utm_source=chatgpt.com "Derek Booth - Stats, Contract, Salary & More - Elite Prospects"
[4]: https://www.hockeydb.com/ihdb/stats/pdisplay.php?pid=479&utm_source=chatgpt.com "Derek Booth Hockey Stats and Profile at hockeydb.com"
[5]: https://cybersecuritysummit.com/speaker/booth-derek/?utm_source=chatgpt.com "Derek Booth - The Official Cybersecurity Summit"
[6]: https://www.linkedin.com/in/dereklbooth?utm_source=chatgpt.com "Derek Booth - GridPoint - LinkedIn"
ChatGPT's Explanation
Artificial Intelligence (AI) has transformed the way we search for information, generate content, and interact with technology. From answering complex questions to powering personalised recommendations, AI has proven to be a powerful tool. But as impressive as it may be, AI is not infallible—and when it goes wrong, the consequences can range from mildly amusing to deeply frustrating.
A prime example of this occurred when an individual named Derek Booth asked a simple, personal question:
“What do you know about Derek Booth?”
Instead of returning accurate or relevant information about him, the AI produced a list of unrelated people: a geomorphologist, a Canadian ice hockey player, a U.S. cybersecurity officer, and a corporate executive. None of these profiles had anything to do with the Derek Booth who asked the question. Not only was this unhelpful, it was also misleading.
Why Does This Happen?
AI models like ChatGPT operate by predicting the most likely response based on patterns in vast amounts of data from the internet. When asked about a person with a relatively common name, the model pulls from publicly available sources and often assumes the user is referring to one of the more widely documented individuals with that name.
The issue? It doesn't know for sure who you're talking about—and unless there’s a uniquely identifiable context (like a biography, company affiliation, or image), it will often “guess”.
This is particularly problematic for:
- People who share names with public figures.
- Individuals who aren’t in public databases or lack a large online footprint.
- Situations where AI infers an identity without verification.
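To make that "guessing" concrete, here is a minimal, purely illustrative Python sketch. It is not how ChatGPT actually works internally; the candidate labels, mention counts, and scaling factor are all invented. It simply demonstrates the pattern described above: without a unique identifier, a probability-driven system gravitates toward whichever namesake is most heavily documented.

```python
import math

# Hypothetical namesakes a model might associate with an ambiguous name.
# The mention counts are invented purely for illustration.
candidates = {
    "Derek Booth (geomorphologist)": 1200,
    "Derek Booth (ice hockey player/coach)": 900,
    "Derek Booth (Secret Service agent)": 400,
    "Derek Booth (GridPoint executive)": 350,
    "Derek Booth (the person actually asking)": 5,  # small online footprint
}

def softmax(scores, scale=200):
    """Convert raw scores into probabilities, roughly as a model does with logits."""
    exps = {name: math.exp(score / scale) for name, score in scores.items()}
    total = sum(exps.values())
    return {name: value / total for name, value in exps.items()}

probabilities = softmax(candidates)
for name, p in sorted(probabilities.items(), key=lambda item: -item[1]):
    print(f"{p:6.1%}  {name}")

# The barely documented person almost never comes out on top, so an
# ambiguous question gets answered with the widely documented namesakes.
```

Run it and the least documented candidate ends up with a fraction of a percent of the probability, which mirrors what happened in the response above.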
The Illusion of Authority
What makes this error more concerning is the tone of confidence. AI often responds in a polished and professional style, which can give the illusion of authority and factual accuracy, even when the response is entirely wrong. In the case of Derek Booth, the AI provided detailed biographies of multiple individuals—none of which matched the real person asking the question.
This highlights a key truth:
- AI doesn’t “know” anything. It predicts.
The Risk of Misinformation
Misidentification may seem harmless in casual conversation, but in more serious contexts—legal, medical, or journalistic—false attributions can be damaging. AI systems don’t verify facts; they generate content based on probability, not truth.
That’s why reputable AI platforms include disclaimers and encourage users to fact-check critical information.
What Can Be Done?
- Use precise queries - Include context like profession, company, or location to narrow the scope of AI responses (see the sketch after this list).
- Fact-check outputs - Treat AI-generated answers as a starting point, not a final authority.
- Push for better grounding - Developers are working on techniques to “ground” AI responses in verified sources, so users can trace answers back to the truth.
- Promote responsible usage - Understanding how AI works—and where it fails—is key to using it wisely.
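As a companion to the first two suggestions, here is a small, hypothetical Python sketch. The helper names (build_precise_query, flag_unverified_sources), the profession and location values, and the list of trusted domains are all placeholders rather than features of any real AI platform; the idea is simply to show context being folded into the question, and cited sources being sorted into "checked" versus "check before trusting".

```python
from urllib.parse import urlparse

def build_precise_query(name, profession=None, location=None, affiliation=None):
    """Fold identifying context into the question so the model has less room to guess."""
    context = []
    if profession:
        context.append(f"the {profession}")
    if affiliation:
        context.append(f"affiliated with {affiliation}")
    if location:
        context.append(f"based in {location}")
    if context:
        return f"What do you know about {name}, {', '.join(context)}?"
    return f"What do you know about {name}?"

# Domains this user has already checked by hand; anything else needs a second look.
VERIFIED_DOMAINS = {"bren.ucsb.edu", "hockeydb.com", "eliteprospects.com"}

def flag_unverified_sources(cited_urls):
    """Split citations into 'already verified' and 'fact-check before trusting'."""
    verified, to_check = [], []
    for url in cited_urls:
        domain = urlparse(url).netloc.removeprefix("www.")
        (verified if domain in VERIFIED_DOMAINS else to_check).append(url)
    return verified, to_check

# Example usage with placeholder context (not facts about any real Derek Booth).
print(build_precise_query("Derek Booth", profession="photographer", location="the UK"))

cited = [
    "https://bren.ucsb.edu/people/derek-booth",
    "https://en.wikipedia.org/wiki/Derek_Booth",
]
verified, to_check = flag_unverified_sources(cited)
print("Already verified:", verified)
print("Fact-check first:", to_check)
```

Grounding proper (the third point) goes further than a hand-kept domain list, but the principle is the same: an answer is only as trustworthy as the sources you can trace it back to.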
Final Thoughts
AI is a remarkable innovation, but it’s still evolving. The experience of Derek Booth serves as a cautionary tale: AI may appear all-knowing, but it doesn’t always tell the truth—especially when identity and personal information are involved.
Whether you’re using AI to search for people, write content, or gather data, remember that accuracy is not guaranteed. In the end, the human ability to verify, interpret, and apply critical thinking remains irreplaceable.

