Large language models such as the GPT family have risen to public prominence, generating an endless torrent of text on demand. Is the traditional wisdom that ideas are cheap about to be turned on its head in a world where words are cheap? What are the capabilities and limitations of current and near-future systems? What do LLMs actually do within their inscrutable matrices, to the extent we can make educated guesses? And ultimately, can the success of LLMs teach us something about thinking and the human condition?
Large language models, writing, and the human condition
Mikko Rauhala
Lecture Hall A1 (1 h)