Headline: ChatGPT and Hallucinations

Body: 

Licking frogs???

I have never tried it, but I understand that there are colorful frogs down in South America that secrete a substance which, if ingested, will change your reality significantly. This is due to a very powerful hallucinogen that they produce. Under its influence, people have reported that several different senses get combined in an intimate way (they can “taste” colors, for example), along with a variety of other effects. The upshot is that participants will often swear to a set of “facts” that differs substantially from what people not on that kind of trip would report. In short, they are hallucinating.

ChatGPT also has a tendency to hallucinate when facts don’t line up just right. Sometimes this can look funny, as when a lawyer had ChatGPT write a legal brief and failed to proofread the output. From the outside, it looked just like any other brief arguing a legal position. But when the judge went looking for the cases it cited, those cases did not exist. So we can laugh, but that lawyer was fined by the court, and his client received sub-par service. Worse yet, what if this had been a surgeon depending upon ChatGPT to help with patient care? Somebody might be harmed or even killed by that kind of negligence. So it seems warranted to take a closer look at this avenue of artificial intelligence.

What is ChatGPT?

ChatGPT is pretty incredible. It is a freely accessible online service: you write a prompt, and the computer will instantly manufacture an article, poem, song, or other written product responsive to that prompt. What makes it really interesting is that the program appears to be really good, and the outputs can be fantastic. But it’s important to remember that ChatGPT is not “intelligent.” It is a Large Language Model, and it can usefully be compared to auto-complete on steroids. The results are built up one likely word at a time, and sometimes the mistakes build up too. As long as words are semantically related, they might appear together, and this can lead to impossible or wildly improbable linkages. These mistaken linkages are called hallucinations.
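To make the “auto-complete on steroids” idea concrete, here is a minimal toy sketch in Python. It is not how ChatGPT actually works (real LLMs are neural networks trained on enormous corpora); the tiny corpus, the bigram counting, and the prompt are all invented for illustration. The point is only that chaining “words that tend to follow other words” produces fluent text with no regard for whether the result is true.

```python
# Toy "auto-complete on steroids" sketch. Real LLMs use neural networks over
# huge corpora; this invented bigram example only illustrates the core idea:
# pick each next word from words that tend to follow the current one.
import random
from collections import defaultdict

corpus = (
    "the court cited the case . "
    "the court dismissed the case . "
    "the lawyer cited the precedent . "
    "the lawyer filed the brief ."
).split()

# Count which word follows which in the (made-up) training text.
following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

def generate(start, length=8, seed=0):
    """Chain likely-looking words together, true or not."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
# The output reads like a fluent sentence about courts, lawyers, and cases,
# whether or not any such case ever existed -- a hallucination in miniature.
```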

Are these hallucinations controlled?

Yes and no. The individual outputs need to be checked by individual human users. But on a periodic basis, the development team will look at which words actually appear together and update the model based upon that real-world data. It should be noted that this process will not help much in the case of a truly unique line of research.

Some researchers go even further, opining that Large Language Models will always have a problem with hallucinations because they lack access to “common sense.” In a human environment, this term encompasses information about history and culture (never explicitly explained to the model) and knowledge about how the world works, like how different weights balance. Still other theoreticians suggest that the “hallucinations” are valid similarities that humans can’t see, or won’t allow themselves to see.

So, can we safely utilize this immensely powerful tool?

Yes, we can use it. We just have to understand its limits and understand that, regardless of how the writing was produced, any downstream effects of that writing remain the responsibility of a human being. Perhaps it could be useful for coming up with a handful of nuggets of wonderful information. Perhaps the structure could be copied. But either way, the substantive information presented must be carefully fact-checked by whoever wishes to use this resource.

In the meantime, scholars and coders are working on better and better AI systems. In fact, a few scholars suggest that an AI system should not present a “fact” unless it passes a check by two different Large Language Models. I have a problem with this because, if both LLMs have “learned” from the same material, they are likely to hallucinate in very similar ways, and the “check” might not be a check at all.
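Here is a rough sketch of what that “two models must agree” check could look like. The functions ask_model_a and ask_model_b are hypothetical placeholders, not calls to any real API; you would swap in whichever models you actually use. The caveat from the paragraph above is noted in the code itself.

```python
# Sketch of the "two LLMs must agree" idea described above. ask_model_a and
# ask_model_b are hypothetical placeholders, not a real library or API.
def ask_model_a(claim: str) -> bool:
    """Placeholder: would ask model A whether it vouches for the claim."""
    raise NotImplementedError

def ask_model_b(claim: str) -> bool:
    """Placeholder: would ask model B whether it vouches for the claim."""
    raise NotImplementedError

def accept_fact(claim: str) -> bool:
    """Only keep a 'fact' if both models independently vouch for it.

    Caveat from the post: if both models learned from the same material,
    their hallucinations can be correlated, so agreement is not proof.
    """
    return ask_model_a(claim) and ask_model_b(claim)
```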

The Verdict

One might very honestly ask why I am covering this in my blog about cryptocurrency. Well, it’s MY blog, dagnabit!!! But, more seriously, I am interested in cryptocurrencies because I am trying to obtain the benefits of new technologies without taking on undue amounts of risk. That broader rubric suggested to me that there might be profit in going over the ChatGPT product. Don’t be fooled; there are others like it now, and more coming in the future. To have a hand in that future, though, we must first get our arms firmly around the potential dangers.

And, to answer your unasked questions:

1. This post was NOT written by an LLM.
2. There ARE some seriously funny things that have come out of this software. My favorite so far is a short story, in the style of Hemingway, about an office copy machine. It is epic!!

REFERENCES

Hallucinations Could Blunt ChatGPT’s Success – IEEE Spectrum

Hallucination (artificial intelligence) – Wikipedia

ChatGPT’s hallucination just got OpenAI sued. Here’s what happened – ZDNET

Lawyer Used ChatGPT In Court—And Cited Fake Cases. A Judge Is Considering Sanctions – Forbes

Editor’s Note: Please note that the information contained herein is meant only for general education and should not be construed as tax advice. Personal attributes could make a material difference in the advice given, so, before taking action, please consult your tax advisor or CPA.

 
