- url(s): https://twitter.com/jd_pressman/status/1753175876890689536
- author: JDP
The throughline between GMarcus-EY “deep learning will hit a wall” and “AGI is going to kill us all” flip-floppism is deep semantic skepticism. A fractal, existential refusal to believe LLMs actually learn convergent semantic structure. The forbidden thought is “when you point a universal function approximator at the face of God the model learns to [https://twitter.com/jd_pressman/status/1753170957907447931]”
They simply do not believe that language encodes nearly the complete mental workspace.
They simply do not believe that LLaMa 2 70B outperforms FLAC as a lossless audio compressor if you tokenize the audio and feed it in, implying the model learns the causal trace of every modality implied by text.
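For the concrete version of that claim, a minimal sketch of the measurement: run tokenized raw audio bytes through a causal LM and total up -log2 p(next token), which is the bit count an arithmetic coder driven by that model would achieve to within a couple of bits, then compare against the FLAC file for the same audio. The model name, byte-to-text mapping, and chunk size below are illustrative assumptions, not the exact setup of the compression-by-language-model results the claim refers to.

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: any HF causal LM stands in here; the claim above is about a 70B model.
MODEL = "meta-llama/Llama-2-70b-hf"

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.float16, device_map="auto")
model.eval()

def ideal_code_length_bits(byte_chunk: bytes) -> float:
    """Bits an arithmetic coder driven by this model would need for the chunk."""
    # Assumption: raw bytes are mapped to latin-1 text so the tokenizer can ingest them.
    ids = tok(byte_chunk.decode("latin-1"), return_tensors="pt").input_ids.to(model.device)
    with torch.no_grad():
        logits = model(ids).logits
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)        # predictions for tokens 1..n-1
    token_lp = logprobs.gather(1, ids[0, 1:, None]).squeeze(1)  # log p(actual next token)
    return float(-token_lp.sum() / math.log(2))                 # nats -> bits

# Usage: compression ratio of the LM-coder vs. FLAC on the same raw PCM audio.
# raw = open("sample.pcm", "rb").read()
# llm_bits  = sum(ideal_code_length_bits(raw[i:i + 2048]) for i in range(0, len(raw), 2048))
# flac_bits = 8 * len(open("sample.flac", "rb").read())
# print(llm_bits / (8 * len(raw)), flac_bits / (8 * len(raw)))
```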
They do not and will not believe that there is a shared latent geometry between modalities on which different neural nets trained on different corpora converge.
Relative representations enable zero-shot latent space communication
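The linked paper's trick is concrete enough to sketch: describe each embedding not by its raw coordinates but by its cosine similarities to a shared set of anchor samples. If two independently trained encoders converge on the same latent geometry, the same input lands in roughly the same anchor-relative position under both. The encoder names and anchor texts below are placeholder assumptions, not the paper's experimental setup.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Assumption: any two independently trained text encoders; these names are placeholders.
enc_a = SentenceTransformer("all-MiniLM-L6-v2")
enc_b = SentenceTransformer("all-mpnet-base-v2")

anchors = ["a dog running on the beach", "a recipe for bread",
           "a proof by induction", "a sad piano melody"]           # shared anchor set
queries = ["puppies playing in the sand", "how to bake sourdough"]

def relative_rep(encoder, texts, anchor_texts):
    """Rows: cosine similarity of each text's embedding to each anchor's embedding."""
    E = encoder.encode(texts, normalize_embeddings=True)           # (n, d)
    A = encoder.encode(anchor_texts, normalize_embeddings=True)    # (k, d)
    return E @ A.T                                                 # (n, k) anchor-relative coords

Ra = relative_rep(enc_a, queries, anchors)
Rb = relative_rep(enc_b, queries, anchors)

# The two absolute latent spaces are incomparable (different dimensions, arbitrary
# rotations), but anchor-relative coordinates can be matched directly: each query
# should be nearest to itself across encoders if the latent geometries agree.
sims = Ra @ Rb.T / (np.linalg.norm(Ra, axis=1, keepdims=True) * np.linalg.norm(Rb, axis=1))
print(np.argmax(sims, axis=1))   # expected: [0 1]
```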
It’s important to realize this position is driven not by fear but by flat-out denial, absolute rejection of a world model violation so profound that they would rather disbelieve their own eyes than update.
Mind merging is not real, inferring mind patterns from the spoken word is impossible, Stable Diffusion is not real, the Creature Beneath The Library of Babel is a squiggle maximizer pursuing a random goal that is anything other than what it actually is.

