Psychology Today | 20.01.2026 21:55
A recent paper from the University of Wisconsin and the "Paradigms of Intelligence" Team at Google makes a fascinating claim. Large language models can extract and reconstruct meaning even when much of the semantic content of a sentence has been replaced with nonsense. Strip away the words, keep the structure, and the system still often knows what is being said. Freaky, right?
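To make the manipulation concrete: this is a minimal sketch (my own illustration, not the paper's actual pipeline) of a "Jabberwocky"-style transform that keeps function words and sentence structure while swapping each content word for a pronounceable nonsense token. The function-word list and pseudoword generator here are invented for the example.

```python
import random

# Tiny, illustrative set of English function words (an assumption for this
# sketch; the paper's actual word lists are not reproduced here).
FUNCTION_WORDS = {
    "the", "a", "an", "and", "or", "but", "of", "in", "on", "to",
    "is", "are", "was", "were", "it", "that", "with", "his", "her",
}

def jabberwocky(sentence: str, seed: int = 0) -> str:
    """Replace content words with nonsense tokens, keeping function words
    and word order intact -- structure survives, semantics do not."""
    rng = random.Random(seed)  # seeded so the output is reproducible
    onsets = ["bl", "gr", "sn", "tr", "fl", "pr"]
    rimes = ["ove", "ilk", "ard", "ush", "int", "orn"]
    out = []
    for word in sentence.split():
        if word.lower() in FUNCTION_WORDS:
            out.append(word)  # keep the grammatical scaffolding
        else:
            out.append(rng.choice(onsets) + rng.choice(rimes))  # nonsense
    return " ".join(out)

print(jabberwocky("the chef seasoned the soup with fresh herbs"))
```

Feed a sentence like the one above through this transform and the output still *reads* like English grammar, which is roughly the kind of stimulus the researchers gave the models.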