2022-12-15 16:12:42
And more on large language models (LLMs) and the hype.
The people who actually drive AI progress, its developers and researchers, write about it far more realistically. Yann LeCun recently shared a piece of his on this topic on LinkedIn; here is his lead-in:
A new piece by Jacob Browning and me in Noema Magazine in which we argue that:
- language carries a small portion of all human knowledge.
- much of human knowledge and all of animal knowledge is non verbal (& non-symbolic).
- hence large language models trained purely from text, and not grounded in an underlying reality cannot come close to human-level intelligence.
Quotes:
- “It is clear that these systems are doomed to a shallow understanding that will never approximate the full-bodied thinking we see in humans.”
- “Abandoning the view that all knowledge is linguistic permits us to realize how much of our knowledge is nonlinguistic.”
- “A system trained on language alone will never approximate human intelligence, even if trained from now until the heat death of the universe.”
- "We should not confuse the shallow understanding LLMs possess for the deep understanding humans acquire from watching the spectacle of the world, exploring it, experimenting in it and interacting with culture and other people."
- "Dealing with LLMs at any length makes apparent just how little can be known from language alone."
https://www.noemamag.com/ai-and-the-limits-of-language/
LeCun is one of the pioneers of deep learning, a Turing Award laureate, a professor at NYU, and Chief AI Scientist @ Meta.
Andrii Brodetskyi, 13:12