Researchers have found that large language models (LLMs) tend to parrot buggy code when tasked with completing flawed snippets.
That is to say, when shown a snippet of shoddy code and asked to fill in the blanks, AI models are just as likely to repeat the mistake as to fix it.
Something similar holds when generating code from scratch: an LLM's first attempt is often buggy, but if you run the code, capture the resulting error message, and feed it back to the model, it will frequently produce a fix. This feedback loop is a genuinely useful workflow, though it doesn't always converge on correct code.
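The run-and-refeed loop described above can be sketched as follows. This is a minimal illustration, not a real integration: `ask_llm` is a hypothetical stand-in for an actual model API call, hard-coded to return a buggy `mean` first and a fixed version once the prompt contains the error traceback.

```python
import traceback

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call (swap in a real client).
    Returns a buggy mean() first, and a guarded version once the prompt
    includes the ZeroDivisionError traceback from a failed run."""
    if "ZeroDivisionError" in prompt:
        return "def mean(xs):\n    return sum(xs) / len(xs) if xs else 0.0"
    return "def mean(xs):\n    return sum(xs) / len(xs)"  # fails on []

def generate_and_repair(task: str, max_rounds: int = 3) -> str:
    """Ask for code, run it against an edge case, and feed any error back."""
    prompt = task
    code = ask_llm(prompt)
    for _ in range(max_rounds):
        try:
            ns = {}
            exec(code, ns)      # load the generated function
            ns["mean"]([])      # exercise the edge case that exposes the bug
            return code         # no exception: accept this version
        except Exception:
            err = traceback.format_exc()
            prompt = f"{task}\n\nRunning this code raised:\n{err}\nPlease fix it."
            code = ask_llm(prompt)
    return code

repaired = generate_and_repair("Write mean(xs) that handles empty lists.")
print("if xs else" in repaired)  # the second attempt guards the empty case
```

In practice the test harness and the prompt template matter as much as the loop itself, and as noted above, some bugs survive several rounds of this treatment.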