3 Comments

LLMs don't represent reality. They represent training data, which has no more than a loose relationship to the real world. If they are converging, it has more to do with a lack of creativity in extending the underlying technology than with anything else.


The mammoth internet-sweep data used to train LLMs, especially once filtered and finetuned, has some real ties to reality: it includes science, history, art, and so on. Sure, not everything is covered. But there are many other possible reasons why this is happening, and as you mentioned, one is that our methods, datasets, and training procedures are converging at the same time.


Yeah, but it also has strong ties to unreality. A lot of what's on the Internet is, let's say, less than authoritative. Looked at from that point of view, this convergence is probably a terrible idea.
