Generative language models have entered the mainstream, for better or worse. They are often dismissed as mere fancy autocomplete, but what does 'autocomplete' even mean anymore? What do large language models (LLMs) actually do within their inscrutable matrices, insofar as we can make educated guesses? What does the future of the human touch look like? And ultimately, is there something to be learned from LLMs about thinking and the human condition?
(Artificial) Intelligence, Writing, and the Human Condition
Mikko Rauhala
Small Auditorium, ā (2 h)