How University Students Use Claude

Anthropic, the maker of the Claude family of AI models, just released a fairly in-depth report on how university students use their LLM. Outside of the expected ("Students primarily use AI systems for creating (using information to learn something new) and analyzing (taking apart the known and identifying relationships), such as creating coding projects or analyzing law concepts"), the report admits that:

There are legitimate worries that AI systems may provide a crutch for students, stifling the development of foundational skills needed to support higher-order thinking. An inverted pyramid, after all, can topple over.

and

As students delegate higher-order cognitive tasks to AI systems, fundamental questions arise: How do we ensure students still develop foundational cognitive and meta-cognitive skills? How do we redefine assessment and cheating policies in an AI-enabled world?

These are legitimate concerns – especially in a world that requires humans to be ever more on their A-game to keep competing with the very tool they use to outsource their learning.

Link to study.

Pascal Finette @radical