Does Claude Haiku 3.5 (or any other LLM) “intentionally” make errors when suggesting code?
I was a software engineer for thirty years. Retired before all these LLMs hit the streets. My son, who is a “data engineer” now, said that he uses Claude Haiku to help him create code.
So I tried it for tasks in Perl, Python, zsh, and Ada. In spite of my “rustiness” (eleven years retired), in all four cases I could have written and debugged the code faster without the “help.” Most of the errors were syntax errors; the rest came from Claude being wrong about what a particular command or option in the language actually does.
Finally, I tested it by asking for a very simple zsh test. Claude got it wrong. I pointed out the errors. Claude apologized and gave me a different wrong answer. I repeated this five times; the final time, it repeated its second error. Another recurring mistake was asserting the exact opposite of the stated requirement.
Is this intentional, or is Claude simply an incompetently built LLM?
(I didn’t know how to tag this)