Discussion about this post

Kaleberg

Human language has two main components. One generates or parses structure (sometimes called the E system, for expression); the other links words to their meanings (sometimes called the L system, for lexical). Birds, for example, have extensive syntactic systems for generating and recognizing bird songs. Dogs can learn to associate words with actions or objects. Human language combines the two extensively, and having a syntactically structured language is far more powerful than having either component alone. The placement of a word within the syntactic structure can dramatically alter the meaning of a sequence of words. It is rather obvious that these AI systems don't get this.

If you have ever diagrammed sentences in any human language, you will have noticed that there is a structure of words and phrases modifying other words and phrases. Natural languages allow a deep level of expression with these modifiers modifying modifiers; you can extend expression, even within a single sentence, arbitrarily. Humans can learn this from a training set because their brains have this structure built in, just as they have built-in components for thinking about location, time, meaning, association, sequence, variation, change, and so on. I seriously doubt that a system with limited neural depth and none of those components built in can do anything like this.
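
To make the recursion point concrete, here is a minimal sketch, assuming a hypothetical toy grammar (the rules and vocabulary are illustrative, not drawn from any linguistic source): a noun phrase may end in a prepositional phrase, which itself contains another noun phrase, so a finite rule set generates unboundedly nested modification.

```python
import random

def noun_phrase(depth: int) -> str:
    """Expand NP -> Det Adj N (PP)?, where PP -> Prep NP recurses."""
    det = random.choice(["the", "a"])
    adj = random.choice(["old", "red", "tall"])
    noun = random.choice(["dog", "house", "tree"])
    phrase = f"{det} {adj} {noun}"
    # A prepositional phrase contains a noun phrase of its own:
    # modifiers modifying modifiers, to arbitrary depth.
    if depth > 0:
        prep = random.choice(["near", "behind", "inside"])
        phrase += f" {prep} {noun_phrase(depth - 1)}"
    return phrase

if __name__ == "__main__":
    for d in range(4):
        print(noun_phrase(d))  # e.g. "the old dog near a tall tree behind ..."
```

Each extra level of depth nests one more modifier inside the last, which is exactly the structure a sentence diagram makes visible, and which a fixed-depth pattern matcher cannot capture in general.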

If you look at the published examples, it is rather obvious that they can't. Reversing the order of two nouns with respect to a preposition shouldn't stymie a system this way. I think these systems might be useful the way AppleScript is useful: it looks enough like English to be relatively easy to understand, but it is miles away from natural language on closer inspection.

Lib of Library

Bravo! The whole hype around supposed AGI rests on very squishy notions of "intelligence". Take Ambrogioni's reaction, for example: a big part of human intelligence is imagination, which includes imagining the impossible, imagining the absurd, making up stories, and fantasizing about matters mundane and profound. So how does a failure of imagination prove the existence of general human-level intelligence? What Ambrogioni was saying is, at best, a very narrow and lopsided understanding of intelligence, and at worst it reflects tunnel vision about what AI is and can be. It is essentially path dependence on bigger models, and this path dependence is sucking up all the air from what really matters in understanding and developing human-level AI.

Understanding and knowing what words mean are central elements of human-level intelligence, and we still do not seem to have those in DALL-E and Imagen.

