
Large language models are making things up: can this be improved?

Ilya Sutskever and Yann LeCun's opinions

LLMs, or large language models, have been observed to hallucinate — to generate fluent but factually incorrect output. In the article "Hallucinations Could Blunt ChatGPT's Success," we explore the contrasting viewpoints of two prominent figures in the field: Ilya Sutskever and Yann LeCun.

Ilya Sutskever, chief scientist at OpenAI, believes that hallucinations will gradually fade away over time. This can be achieved through reinforcement learning from human feedback (RLHF). Sutskever sees a path toward reducing and eventually eliminating the issue.
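To make the RLHF idea concrete, here is a minimal, purely illustrative sketch: a learned reward model scores candidate completions according to human preferences, and higher-scoring completions are favored. The scoring rule and all data below are hypothetical stand-ins, not OpenAI's actual method.

```python
# Toy sketch of the RLHF intuition: a reward model (here a hand-written
# stand-in for a model trained on human preference labels) ranks candidate
# completions, and the highest-scoring one is preferred.
# Everything here is hypothetical and for illustration only.

def toy_reward_model(response: str) -> float:
    """Stand-in for a learned reward model: rewards responses that
    admit uncertainty instead of confidently fabricating specifics."""
    score = 0.0
    if "not sure" in response or "I don't know" in response:
        score += 1.0  # human raters tend to prefer honest uncertainty
    if "definitely" in response:
        score -= 1.0  # overconfident fabrication gets penalized
    return score

def best_of_n(candidates: list[str]) -> str:
    """Best-of-n selection: return the candidate the reward model ranks highest."""
    return max(candidates, key=toy_reward_model)

candidates = [
    "The capital of Atlantis is definitely Poseidonia.",      # hallucination
    "I'm not sure; Atlantis is a legend, not a real place.",  # honest answer
]
print(best_of_n(candidates))
```

In practice the reward model is itself a neural network trained on human preference rankings, and the language model is fine-tuned (e.g., with PPO) to maximize that reward rather than selecting at inference time, but the selection pressure is the same: completions humans rate as more truthful score higher.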

On the other hand, Yann LeCun, chief AI scientist at Meta, disagrees. According to LeCun, iterative human feedback alone will not be enough to solve the problem of hallucinations. He believes the underlying architecture of current LLMs must change before the challenge can be tackled effectively.

Interestingly, Geoffrey Hinton shares LeCun's perspective, highlighting that our learning extends beyond language and encompasses various non-linguistic aspects. We summarize Geoffrey Hinton’s opinion on why machines can become more intelligent than people in another post.

Given these discussions, researchers face the task of incorporating non-linguistic knowledge into the next generation of LLMs. Should these models embrace a multimodal approach, or is there more to consider? We invite you to share your suggestions and insights in the comments section.

Every day we post helpful lists and bite-sized explanations on our Twitter. Please join us there.