Yann LeCun’s Post


The weak reasoning abilities of LLMs are partially compensated by their large associative memory capacity. They are a bit like students who have learned the material by rote but haven't really built deep mental models of the underlying reality.

While they aren’t perfect, there’s still a lot of value to be gained from learning material without a deep understanding of the underlying reality. Someone who doesn’t know all the details can use LLMs like ChatGPT to bridge some of those knowledge gaps.


100%. And that's why they're so good at generating missing syntax and outlines that are tradition- and/or formality-based, which is so much of our work.


I wonder if you could disentangle the two and forcibly “grow” the reasoning capacity?

How sufficient do you think language is as a causal representation? If at all sufficient, and NNs are universal function approximators, and causal models can be represented in functions (i.e. Pearl’s graphical SCMs), is it possible that LLMs are learning more than we expect?
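The comment's premise can be made concrete: a structural causal model in Pearl's sense really is just a set of functions, one per variable, each taking the variable's parents plus exogenous noise. A toy sketch (the variable names and the linear structural equation are my own illustration, not anything claimed in the thread):

```python
import random

def sample_scm(do_x=None, seed=0):
    """Sample (x, y) from a two-variable SCM:
        X := U_x
        Y := 2*X + U_y
    Each variable is a function of its parents and exogenous noise,
    which is exactly Pearl's functional (SCM) representation.
    Passing do_x simulates the intervention do(X = do_x): it severs
    X from its own mechanism while leaving Y's mechanism intact."""
    rng = random.Random(seed)
    u_x = rng.gauss(0, 1)                # exogenous noise for X
    u_y = rng.gauss(0, 1)                # exogenous noise for Y
    x = u_x if do_x is None else do_x    # intervention replaces X's mechanism
    y = 2 * x + u_y                      # structural equation for Y
    return x, y
```

Because interventions are just substitutions into these functions, the causal effect falls out directly: with the same noise seed, `sample_scm(do_x=1.0)` and `sample_scm(do_x=0.0)` differ in `y` by exactly the structural coefficient 2. Whether an LLM trained only on text implicitly learns anything function-like in this sense is, of course, the open question the comment is asking.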


A bit like adding many "else-ifs" to a useful black box.


Maybe reasoning could emerge one day from a large enough model? Or maybe we will have to implement it on our own, using a paradigm still to be discovered…


For someone not very deep into the topic, reading this was very funny, because until a moment ago I thought you and everyone in the comments were talking about people who hold an LL.M. (Master of Laws) degree 😂😂

Mild Wernicke's aphasia in digital form


