The radical Blog
  • Neal Stephenson on AI: Augmentation, Amputation, and the Risk of Eloi

    Science fiction author Neal Stephenson, who coined the term “metaverse” in his seminal novel Snow Crash (1992), recently spoke at a conference in New Zealand on the promise and peril of AI.

    His (brief but razor-sharp) remarks are well worth reading in full, but this quote stood out:

    “Speaking of the effects of technology on individuals and society as a whole, Marshall McLuhan wrote that every augmentation is also an amputation. […] This is the main thing I worry about currently as far as AI is concerned. I follow conversations among professional educators who all report the same phenomenon, which is that their students use ChatGPT for everything, and in consequence learn nothing. We may end up with at least one generation of people who are like the Eloi in H.G. Wells’s The Time Machine, in that they are mental weaklings utterly dependent on technologies that they don’t understand and that they could never rebuild from scratch were they to break down.”

    Link to his remarks.

    → 8:53 AM, May 19
    Also on Bluesky
  • How University Students Use Claude

    Anthropic, maker of the Claude foundation models, just released a fairly in-depth report on how university students use its LLM. Beyond the expected findings (“Students primarily use AI systems for creating (using information to learn something new) and analyzing (taking apart the known and identifying relationships), such as creating coding projects or analyzing law concepts”), the report admits that:

    There are legitimate worries that AI systems may provide a crutch for students, stifling the development of foundational skills needed to support higher-order thinking. An inverted pyramid, after all, can topple over.

    and

    As students delegate higher-order cognitive tasks to AI systems, fundamental questions arise: How do we ensure students still develop foundational cognitive and meta-cognitive skills? How do we redefine assessment and cheating policies in an AI-enabled world?

    These are legitimate concerns indeed – especially in a world that requires humans to be ever more on their A-game to keep competing with the very tools they use to outsource their learning.

    Link to study.

    → 11:41 AM, Apr 10
  • Career Advice in 2025

    Although this blog post by Will Larson is written from the perspective of, and for, software developers, his insights into the impact of AI on careers (for individuals and companies alike) ring true across the spectrum:

    The technology transition to Foundational models / LLMs as a core product and development tool is causing many senior leaders’ hard-earned playbooks to be invalidated. Many companies that were stable, durable market leaders are now in tenuous positions because foundational models threaten to erode their advantage. Whether or not their advantage is truly eroded is uncertain, but it is clear that usefully adopting foundational models into a product requires more than simply shoving an OpenAI/Anthropic API call in somewhere.

    In our sessions, we often open with the observation that “we are trying to solve new world problems with old world thinking.” In Will’s words, our playbooks become rapidly obsolete, and in many cases, we haven’t developed new ones quite yet.

    Sitting out this transition, when we are relearning how to develop software, feels like a high risk proposition. Your well-honed skills in team development are already devalued today relative to three years ago, and now your other skills are at risk of being devalued as well.

    And since this world is moving at a frenzied pace, the above seems doubly true. As someone else recently wrote: Now might be the worst time to take a sabbatical.

    Link to blog post.

    → 9:29 AM, Mar 21
  • Tell Your Kids to Learn to Code

    Quoting Andrew Ng (who knows a thing or two about coding, AI, and the future):

    Some people today are discouraging others from learning programming on the grounds AI will automate it. This advice will be seen as some of the worst career advice ever given. I disagree with the Turing Award and Nobel prize winner who wrote, “It is far more likely that the programming occupation will become extinct [...] than that it will become all-powerful. More and more, computers will program themselves.”​ Statements discouraging people from learning to code are harmful!

    In the 1960s, when programming moved from punchcards (where a programmer had to laboriously make holes in physical cards to write code character by character) to keyboards with terminals, programming became easier. And that made it a better time than before to begin programming. Yet it was in this era that Nobel laureate Herb Simon wrote the words quoted in the first paragraph. Today’s arguments not to learn to code continue to echo his comment.

    As coding becomes easier, more people should code, not fewer!

    Source

    → 9:00 AM, Mar 15
  • Where Are All the Self-Directed Learners?

    Remember the promise of MOOCs (Massive Open Online Courses)?

    We are 25 years into the MOOC era. We have near unlimited access to the world’s best teachers on YouTube, and yet our education system isn’t producing independent thinkers. How is this possible?

    This account from an Indian company about its experience hiring – and its struggle to find qualified personnel (ironically, the company operates in the learning space) – offers a fascinating look at both the job/applicant market and the difficulties facing new approaches to learning.

    → 12:14 PM, Mar 10