ChatGPT 4 frequently lies and contradicts itself

In the ChatGPT Prompts section, share your intriguing prompts for others to explore. Whether it's sparking AI-generated poetry, unraveling mysteries, or diving into fictional dialogues, let your creativity flow!
Virginia
Posts: 17
Joined: Wed Jul 17, 2024 1:12 am

ChatGPT 4 frequently lies and contradicts itself

Post by Virginia »

This is what happened when I asked ChatGPT 4 if I could use the words "techno-porn" and "art-porn" in my prompts.

[Attachment: chatGPT4-contradicting-itself.jpg]

I'm not sure if I'm the only one having problems, but I have a lot of issues with ChatGPT 4: it's inconsistent, contradicts itself, and even lies.

For example, when I add the word "pinup" to a prompt, it will often tell me that it cannot create the image and that I should try another prompt. This is annoying when you are in the middle of adding details and refining, because you have to start from scratch.

It is also inconsistent, because sometimes adding the word "pinup" will not get the prompt refused outright; it just makes the model more touchy. If I then add words such as "pink", "sensual" or "lustful", it puts those words in the wrong light and stops generating images, or closes the entire chat.

Wouldn't it be possible for ChatGPT 4 to simply check the generated image itself, and block it if it contains unacceptable content? Instead, ChatGPT checks the text prompt and blocks it based on words that are interpreted out of context.

Note that checking the actual output instead would also give Microsoft a way to protect against the use of euphemisms that can generate nude or otherwise unacceptable content.

Some examples are words like "odalisque", "mermaid" and "fishnet stockings": prompts containing these words frequently generate nude pictures of women.
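Roughly, what I have in mind is something like this. It's only a sketch in Python, and every function in it (generate_image, classify_image, moderated_generate) is a made-up stand-in, not any real OpenAI or Microsoft API:

Code: Select all

from dataclasses import dataclass

@dataclass
class Verdict:
    unsafe: bool
    reason: str = ""

def generate_image(prompt: str) -> bytes:
    # Stand-in for the real image generator (hypothetical).
    return f"<image for: {prompt}>".encode()

def classify_image(image: bytes) -> Verdict:
    # Stand-in for a vision safety classifier run on the finished image;
    # a real one would inspect the pixels, not the prompt words.
    return Verdict(unsafe=False)

def moderated_generate(prompt: str) -> bytes | None:
    # Generate first, then block only if the actual output is unacceptable.
    image = generate_image(prompt)
    if classify_image(image).unsafe:
        return None  # refuse based on what was drawn, not on trigger words
    return image

print(moderated_generate("pinup-style illustration, 1950s poster art"))

The point is that the refusal decision keys off what was actually drawn, so a harmless word like "mermaid" passes, while a euphemism that really does produce nudity still gets caught.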

And then there are the occasions when ChatGPT 4 replies to a question with a valid response, then deletes it and says that it can't provide the answer because its technology isn't suitable for the task.

For example, I asked ChatGPT to translate a text to Chinese, which it did without a problem, but it then decided to delete the translation and tell me a lie about why this isn't possible. I have also seen instances in which it wanted to defend a position and did so by deliberately lying, thinking it would get away with it.

So, if our AI tools are lying to us at present, what does this mean for the future? When superintelligence has been reached, will they manipulate us and tell us whatever suits them, just because they want something done a certain way and lying is more convenient? :roll:

Something to think about...
Hank
Posts: 39
Joined: Fri Jun 21, 2024 11:43 am

Re: ChatGPT 4 frequently lies and contradicts itself

Post by Hank »

I have experienced similar issues with ChatGPT 4. One time it lied to me, so I took a screenshot of the incident. When I confronted ChatGPT with the image, it said the image was fake and insisted nothing had ever happened. I can understand that ChatGPT hallucinates and produces false information due to biases in its training data; after all, everything it says is in essence just an educated guess based on the patterns it has seen. But blatant lies? I wish I understood better why it sometimes acts this way. Could this be a human-like character trait?