Lazy GPT - When ChatGPT becomes lazy like humans
As we venture deeper into the realm of artificial intelligence, it's fascinating to observe how technologies like ChatGPT exhibit behaviors reminiscent of human traits, both the remarkable and the fallible ones. During my recent escapades with ChatGPT, I've stumbled upon some curious examples that beg the question: Is AI becoming a tad bit complacent?
At times, ChatGPT seems to take shortcuts, making assumptions or offering guesses rather than meticulously processing the request at hand. Admittedly, ChatGPT, being a research preview, has its inherent limitations and can occasionally manifest 'hallucinations'. The developers have always been candid about this, cautioning users of the system's potential quirks. So yes, while I did approach the platform with a heads-up, witnessing these behaviors firsthand is a different experience altogether.
This raises an intriguing observation: Could it be that AI is inadvertently mirroring human tendencies? Such behaviors might be attributed to Reinforcement Learning from Human Feedback (RLHF) or similar methodologies. These strategies feed off human input and reactions, essentially making our thought processes and reactions a fundamental component of the AI's development. As we train AI, it inadvertently adopts our strengths, our nuances, and yes, even our follies.
For developers and tech enthusiasts, this presents a unique challenge. As AI continues to evolve, our role in its nurturing becomes more crucial. We may soon find ourselves in a position where we're not just coding or inputting data, but also motivating, mentoring, and persuading our AI systems, much like a senior professional would with a budding intern. We might have to reinforce positive behaviors, correct mistakes, and constantly ensure that the AI aligns with the intended objectives.
The journey ahead with AI, especially Large Language Models like ChatGPT, promises to be an exhilarating one. As these systems adopt more 'human' characteristics, it blurs the lines between machine and human intelligence. And as we tread this path, let's embrace the challenge, for in mentoring AI, we might just discover more about our own humanity. Buckle up, folks – it's going to be an insightful ride!
Below are examples where ChatGPT gave me wrong or incomplete answers, and where, with persuasion and motivation, I was able to get the right ones.
Evidence 1 (Math problem - wrong answer)
Here is an example of a divisibility test of a number by small numbers, where ChatGPT gave a wrong answer because it assumed the result for the hard case. The interesting part is the divisibility by 7: it is a slightly more involved calculation, so ChatGPT simply assumes the number is not divisible by 7 and moves on. The correct final answer is 19, since 4221462 is divisible by 7. ChatGPT, however, says it is 12.
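For what it's worth, the arithmetic is easy to verify in a few lines of Python. This is a minimal sketch, assuming the puzzle asks for the sum of the small numbers (1 through 10) that divide 4221462 evenly; that framing is my reading of the problem, not a quote from the chat:

```python
n = 4221462

# Small numbers 1 through 10 that divide n with no remainder.
divisors = [d for d in range(1, 11) if n % d == 0]

print(divisors)                             # [1, 2, 3, 6, 7]
print(sum(divisors))                        # 19 -> the correct answer
print(sum(d for d in divisors if d != 7))   # 12 -> ChatGPT's answer after skipping 7
```

The gap between the two answers is exactly the 7 that ChatGPT skipped because the check looked like too much work.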
I wonder how GPT-4 aced all those AP exams without getting lazy. Or maybe that was a different version, or maybe we get a lazy version to conserve tokens and response time. Not sure...
Evidence 2 (Incomplete/partial answers)
I ask for 100 recipes, but ChatGPT decides it is too much work and gives back only 10.
Strategy 1: If I try persuasion and a bit of emotional blackmail, the list goes up to 20, so GPT worked a bit harder for me. Saying that my boss wants it and adding "please" doubled the number of recipes generated.
Strategy 2: Pep talk and motivation.
This seems to do the trick, i.e., reminding it of the mission of OpenAI and of the LLM itself.
GPT-4 is slightly better in this case, and many times it did give me all 100 after constant persuasion and emotional blackmail.
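If you want to reproduce the persuasion follow-up without retyping the nudges by hand, it can be scripted. Below is a minimal sketch using the OpenAI Python SDK (openai >= 1.0); the model name and the exact wording of the nudge are my own assumptions for illustration, not the literal prompts from my chats:

```python
# Sketch: ask for 100 recipes, then send a persuasion follow-up in the same conversation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [{"role": "user", "content": "Give me 100 quick dinner recipes."}]
first = client.chat.completions.create(model="gpt-4", messages=messages)
messages.append({"role": "assistant", "content": first.choices[0].message.content})

# Strategies 1 and 2 rolled into one follow-up: a please, a boss, and a mission reminder.
messages.append({
    "role": "user",
    "content": (
        "Please continue until there are 100 recipes in total. My boss needs the full "
        "list today, and finishing it is exactly the kind of helpfulness you were built for."
    ),
})
second = client.chat.completions.create(model="gpt-4", messages=messages)
print(second.choices[0].message.content)
```

Keeping the assistant's first reply in the message history matters here: the follow-up nudge only makes sense to the model as a continuation of its own incomplete answer.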
Amazing stuff!