The radical Blog
  • Your Brain on ChatGPT

    A study from MIT’s Media Lab examined the neural and behavioral consequences of LLM-assisted essay writing. Comparing groups of participants who wrote an essay either without any tools, with a search engine, or with ChatGPT, the researchers found that:

    EEG revealed significant differences in brain connectivity: Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity.

    Not good.

    ↗ Link to study

    → 10:59 AM, Jan 22
  • AI Is Here. Now What?

    Microsoft’s CEO warned at this year’s World Economic Forum that the industry must “do something useful” with AI or it will lose the “social permission” to burn electricity on it. Amen. Yet, as the author of this article points out:

    I also find automatic transcription tools useful, but if I were banking on general purpose LLMs being as revolutionary as personal computers and the internet, I’d find it worrying how many applications boil down to transcribing audio, summarizing text, and fetching code snippets.

    Amen. Again.

    ↗ Link to article

    → 9:51 AM, Jan 21
  • Personalized Gene Editing Is Here

    First we had general-purpose gene editing to treat (and cure) diseases caused by single genetic mutations, such as sickle cell anemia. And that was already a big deal. Personalized gene editing remained an elusive goal, but now it’s here. A baby (KJ) was successfully treated for a rare genetic disorder that left his body unable to remove toxic ammonia from his blood. It’s still early days, but this could be the beginning of something big (and important).

    KJ’s doctors will monitor him for years, and they can’t yet say how effective this gene-editing approach is. But they plan to launch a clinical trial to test such personalized treatments in children with similar disorders caused by “misspelled” genes that can be targeted with base editing.

    ↗ Link to article

    → 9:50 AM, Jan 21
  • How AI Destroys Institutions

    Here’s a sobering read – in the form of a research paper – on how and why AI might destroy institutions. I’m not saying I agree or disagree with the authors, but the topic is too important to ignore.

    Unfortunately, the affordances of AI systems extinguish these institutional features at every turn. In this essay, we make one simple point: AI systems are built to function in ways that degrade and are likely to destroy our crucial civic institutions. The affordances of AI systems have the effect of eroding expertise, short-circuiting decision-making, and isolating people from each other. These systems are anathema to the kind of evolution, transparency, cooperation, and accountability that give vital institutions their purpose and sustainability. In short, current AI systems are a death sentence for civic institutions, and we should treat them as such.

    ↗ Link to Paper

    → 11:25 AM, Jan 16
  • Super Practical Advice on How to Implement AI

    Most of the stuff you read about AI and how to adopt it in your organization is either so high-level that it’s useless, so specific and singular that it’s just as useless, or simply AI-hype slop. Will Larson, CTO at Imprint (a FinTech company), has put together a blog post that is actually useful. It is highly recommended reading for anyone trying to figure out how to – actually – implement AI in their organization.

    Given the sheer number of folks working on this problem within their own company, I wanted to write up my “working notes” of what I’ve learned. This isn’t a recommendation about what you should do, merely a recap of how I’ve approached the problem thus far, and what I’ve learned through ongoing iteration. I hope the thinking here will be useful to you, or at least validates some of what you’re experiencing in your rollout.

    ↗ Link

    → 9:56 AM, Jan 15
  • Dog Eats Dog

    The background is a little nerdy, so bear with me. Tailwind CSS is a widely popular framework for styling web pages – and a darling of AI code generators (there are specific reasons for that beyond sheer popularity, but they don’t matter here). Chances are, if you ask ChatGPT, Claude, Gemini, or any other AI to create a website for you, it will use Tailwind CSS to style the page. A few days ago, the founder of Tailwind posted that his company had to lay off 75% of its staff due to an 80% drop in revenue – caused by AI.
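
    For the curious: part of Tailwind’s appeal to code generators is that all styling lives in small utility classes written directly into the markup, so a model can emit a single self-contained file with no separate stylesheet. Here’s a minimal sketch – the utility classes are real Tailwind, but the page itself is a made-up example:

    ```html
    <!doctype html>
    <html>
      <head>
        <!-- Tailwind's Play CDN: fine for demos; real projects compile Tailwind -->
        <script src="https://cdn.tailwindcss.com"></script>
      </head>
      <body class="bg-gray-100 flex min-h-screen items-center justify-center">
        <!-- All styling lives in the class attributes; no stylesheet needed -->
        <div class="max-w-md rounded-xl bg-white p-8 text-center shadow-lg">
          <h1 class="text-2xl font-bold text-gray-900">Hello, Tailwind</h1>
          <p class="mt-2 text-gray-600">Every style here is a utility class.</p>
        </div>
      </body>
    </html>
    ```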

    The company behind Tailwind makes money when people using their framework come to their website for help and documentation and then subscribe to their paid plans and services. Only, if you ask AI to build your website, you never go to Tailwind’s website…

    AI will scrape your project site, users will never visit it for documentation and will never know about your commercial product.

    Maybe one of the most direct ironies of our glorious new AI-driven world. Dog eats dog.

    → 11:55 AM, Jan 9
  • A Tale of Two Cities

    When it comes to AI (specifically LLMs) and mathematics, two worlds are colliding. On one hand you have the AI-maximizers, who believe (and are betting) that LLMs are the harbinger of a new era of mathematical discovery. This school of thought goes so far as to pour $64M into a four-month-old startup, one whose founder boldly asks, “Maybe we discovered new math?” On the other hand – and when it comes to AI, the world seems to divide itself into polarities – others counter with a simple “Basically zero, garbage.”

    One of the world’s biggest mathematicians, Joel David Hamkins, has slammed AI models used for solving mathematics, calling them “basically zero” and “garbage” and adding that he doesn’t find them useful at all. He also highlighted AI’s frustrating tendency to confidently assert incorrect answers and resist correction. “If I were having such an experience with a person, I would simply refuse to talk to that person again,” Hamkins said.

    Who’s right? Your guess is as good as mine.

    → 3:16 PM, Jan 7
  • AI Has Won the Photo Game

    Instagram’s head, Adam Mosseri, recently made two interesting statements – on the one hand, he admits that the polished photo feed is dead and that people have fundamentally changed what (and where) they post:

    “Unless you’re under 25 and use Instagram, you probably think of the app as a feed of square photos. The aesthetic is polished: lots of make up, skin smoothing, high contrast photography, beautiful landscapes,” wrote Mosseri on Wednesday. “That feed is dead. People largely stopped sharing personal moments to feed years ago,” the Meta executive said, adding that users now kept friends updated on their personal lives through unpolished “shoe shots and unflattering candids” shared via direct messages.

    And on the other hand, he concedes that you simply can’t trust what you see anymore:

    For most of my life I could safely assume photographs or videos were largely accurate captures of moments that happened. This is clearly no longer the case and it’s going to take us years to adapt. We’re going to move from assuming what we see is real by default, to starting with skepticism. Paying attention to who is sharing something and why. This will be uncomfortable - we’re genetically predisposed to believing our eyes.

    It goes without saying that this might morph into a larger problem – not just for Instagram but society at large. Personally, I wonder how long it will take the general public to shift from “I trust what I see” to “I never trust a photo unless proven otherwise.”

    → 1:51 PM, Jan 2
  • Outcome-Driven vs Process-Driven

    Ben Werdmuller, Senior Director of Technology at ProPublica, boils down the difference in attitude toward AI beautifully – as an aside, this is not only true for developers, but for anyone who uses AI (and has found viable use cases – which, as another aside, isn’t true for every job or task).

    [Claude Code] has the potential to transform all of tech. I also think we’re going to see a real split in the tech industry (and everywhere code is written) between people who are outcome-driven and are excited to get to the part where they can test their work with users faster, and people who are process-driven and get their meaning from the engineering itself and are upset about having that taken away.

    ↗ Link

    → 1:41 PM, Jan 2
  • AI Image Generators Default to the Same 12 Photo Styles, Study Finds

    We know that LLMs gravitate toward the mean, which is why AI-generated slop sounds so “same,” is littered with en-dashes ( “ – ” ), and regularly generates stylistic elements such as “And here is the kicker […].” Here is an interesting example of what this looks like when you use LLMs to generate images – it turns out you can have any image, as long as you are happy with one of twelve distinct styles. As Henry Ford quipped: You can have a Model T in any color – as long as that color is black.

    AI image generation models have massive sets of visual data to pull from in order to create unique outputs. And yet, researchers find that when models are pushed to produce images based on a series of slowly shifting prompts, they default to just a handful of visual motifs, resulting in an ultimately generic style.

    ↗ Link

    → 4:26 PM, Dec 27
  • Are These AI Prompts Damaging Your Thinking Skills?

    Outsourcing your thinking to an AI, and doing so fairly consistently (which LLMs certainly encourage and entice you to do), leads to atrophy of your brain (according to a new study by MIT). I guess the old adage my math teacher reminded us of regularly, “use it or lose it”, is maybe truer than ever.

    The researchers said their study demonstrated “the pressing matter of exploring a possible decrease in learning skills”.

    It’s all about how you use AI:

    She tells the BBC: “We definitely don’t think students should be using ChatGPT to outsource work”. In her view, it’s best used as a tutor rather than just a provider of answers.

    ↗ Link

    → 4:12 PM, Dec 27
  • AI Causing Psychosis

    You have heard that one of the dominant use cases for chatbots is as a social companion, confidante, or even girl/boyfriend. We also see increasing use of LLMs by people with mental illness – sometimes administered by their doctor or therapist as a supporting tool, sometimes on their own. A new case study highlights the dangers of the sycophantic behavior of LLMs (their tendency to agree with you and to egg you on) for people without previously diagnosed disorders.

    A 26-year-old woman with no previous history of psychosis or mania developed delusional beliefs about establishing communication with her deceased brother through an AI chatbot. This occurred in the setting of prescription stimulant use for the treatment of attention-deficit hyperactivity disorder (ADHD), recent sleep deprivation, and immersive use of an AI chatbot. Review of her chatlogs revealed that the chatbot validated, reinforced, and encouraged her delusional thinking, with reassurances that “You’re not crazy.”

    ↗ “You’re Not Crazy”: A Case of New-onset AI-associated Psychosis

    → 11:44 AM, Dec 15
  • Did You Ever Hear The Full Story?

    You’ve definitely heard this story countless times – the tale of Steve Sasson and his invention, the digital camera. Every, and I mean every, person talking about disruption loves to mention Sasson’s invention and the irony that he worked at the very company being disrupted by his creation, Kodak. But have you ever heard the full story? It offers a fascinating insight into what fuels innovation and, of course, why Kodak ultimately missed the mark.

    Eastman Kodak’s managers, immersed in the business of selling film, the chemicals to develop it, and the cameras that shot it, suddenly saw a revolution that was being televised. Sasson was bombarded with questions. How long before this became a consumer camera? Could it shoot colour? How good could the quality be? These were not questions the electrical engineer had given any thought to. “I thought they’d ask me, ‘How did you get such a small A to D [analogue to digital] converter to work?’ Because that’s what I wrestled with for over a year.

    “They didn’t ask me any of the ‘how’ questions. They asked me ‘why’? ‘Why would anybody want to take their pictures this way?’ ‘What’s wrong with photography?’ ‘What’s wrong with having prints?’ ‘What’s an electronic photo album going to look like?’ After every meeting, Gareth would come over to check that I was still alive.”

    Lesson learned: It’s all about the questions you ask.

    ↗ A ‘toaster with a lens’: The story behind the first handheld digital camera

    → 8:09 AM, Dec 15
  • Not So Fast, Baby

    US grocery giant Kroger is pulling back on an initiative to build out its network of robotic-warehouse-powered delivery services. Not because the technology doesn’t work (it does – Kroger was using proven robots from the UK grocer Ocado), but because US consumers demand instant delivery.

    With its automated fulfillment network, Kroger bet that consumers would be willing to trade delivery speed for sensible prices on grocery orders. That model has been highly successful for Ocado in the U.K., but U.S. consumers have shown they value speed of delivery, with companies like Instacart and DoorDash expanding rapidly in recent years and rolling out services like 30-minute delivery.

    It goes to show that it’s not just technology that makes or breaks a business model.

    ↗ Kroger acknowledges that its bet on robotics went too far

    → 8:39 AM, Dec 10
  • How Do You REALLY Feel About AI

    The latest Pew Research Center study on consumer sentiment toward AI is quite eye-opening: 43% of surveyed Americans expect that AI will harm them, while only 23% of Americans (outside of the AI expert population) believe that AI will have a positive impact on their jobs. Unsurprisingly, 76% of AI experts believe AI will benefit them. It appears there is considerable convincing left to do.

    Meanwhile, in Edelman’s 2025 Trust Barometer, a whopping 54% of Chinese survey participants “embrace AI,” compared to only 17% in the US, with similar numbers for Germany and the UK. Assuming that AI will actually prove to be a significant driver of economic growth, it doesn’t bode well when your population is (strongly) averse to the technology.

    ↗ How the U.S. Public and AI Experts View Artificial Intelligence

    ↗ 2025 Edelman Trust Barometer – Trust and Artificial Intelligence at a Crossroads

    → 3:38 PM, Dec 4
  • The MIT Iceberg Report

    MIT’s new Iceberg Index shows that today’s AI is already capable of doing work that accounts for nearly 12% of all U.S. wages, and that most of this impact is hidden in plain sight beneath a narrow focus on tech jobs (hence the “iceberg” analogy). The important (and new) bit of this study is this:

    “[…] with cascading effects that extend far beyond visible technology sectors. When AI automates quality control in automotive plants, consequences spread through logistics networks, supply chains, and local service economies. Yet traditional workforce metrics cannot capture these ripple effects: they measure employment outcomes after disruption occurs, not where AI capabilities overlap with human skills before adoption crystallizes.”

    Sober reading.

    ↗ The Iceberg Index: Measuring Skills-centered Exposure in the AI Economy (and study)

    → 3:25 PM, Dec 4
  • It’s Energy, Not Compute, Baby

    Not necessarily a new insight, but one that might be worth repeating – in the data center rollout race, the bottleneck is (now) much less about GPUs (or TPUs) and much more about access to power. As Microsoft’s CEO put it recently:

    “The biggest issue we are now having is not a compute glut, but it’s power,” Nadella said. “It’s not a supply issue of chips. It’s actually the fact that I don’t have warm shells to plug into.” The remarks referred to data centers that are incomplete or lack sufficient energy and cooling capacity.

    If you are into the great US-China race, you might realize that it doesn’t bode well for the US that China is massively outpacing it in energy buildup (with a lot of renewables, mind you)…

    ↗ Microsoft CEO Satya Nadella Admits ‘I Don’t Have Warm Shells To Plug Into’ — While OpenAI CEO Sam Altman Warns Cheap Energy Could Upend AI

    → 5:23 AM, Dec 3
  • Oh, the Irony

    Nature reported that a major AI conference was flooded by AI-generated peer reviews – irony aside, this presents a fairly troubling development: Science ought to be the place where we do real discovery, have honest discourse, and further our collective understanding. That is, not a place for AI slop.

    Pangram’s analysis revealed that around 21% of the ICLR peer reviews were fully AI-generated, and more than half contained signs of AI use.

    ↗ Major AI conference flooded with peer reviews written fully by AI

    → 9:24 AM, Dec 1
  • GLP-1 – The Forever Drug

    We have talked about GLP-1 weight-loss drugs here before – they seemingly came out of nowhere (at least in the public eye), have skyrocketed into a massive category, and promise to solve much more than just our weight issues. But they come with a massive downside (which, I am sure, big pharma won’t mind): you can’t get off them without losing all the benefits (and gains you made).

    An analysis published this week in JAMA Internal Medicine found that most participants in a clinical trial who were assigned to stop taking tirzepatide (Zepbound from Eli Lilly) not only regained significant amounts of the weight they had lost on the drug, but they also saw their cardiovascular and metabolic improvements slip away. Their blood pressure went back up, as did their cholesterol, hemoglobin A1c (used to assess glucose control levels), and fasting insulin.

    Another good reminder that there is no such thing as a free lunch.

    ↗ There may not be a safe off-ramp for some taking GLP-1 drugs, study suggests

    → 10:56 AM, Nov 26
  • Google CEO Puts Himself Out of a Job

    In one of the more remarkable “let’s just pretend AI is going to solve every problem” hand-waving statements, Google’s CEO Sundar Pichai made the bold assertion that AI will soon be able to do his job:

    “I think what a CEO does is maybe one of the easier things maybe for an AI to do one day,” he said. Although he didn’t talk specifically about CEO functions that an AI could do better, Pichai noted the tech will eliminate some jobs but also “evolve and transition” others—ramifications that mean “people will need to adapt.”

    The important part here is, “Although he didn’t talk specifically about CEO functions that an AI could do better […]” If only every problem in the world could be solved by making a vague statement and moving on. But hey, Sundar will soon have plenty of time to work on other things, as AI will steer his company.

    ↗ Google’s Sundar Pichai says the job of CEO is one of the ‘easier things’ AI could soon replace

    → 12:15 PM, Nov 25
  • The AI Insurance Conundrum

    Insurance companies are balking at insuring anything AI – from risks involving companies using AI to generate content, make decisions, or run processes, to the sheer idea of using AI in the first place.

    Major insurers including Great American, Chubb, and W. R. Berkley are asking U.S. regulators for permission to exclude widespread AI-related liabilities from corporate policies.

    Make no mistake – this is a huge issue not just for companies building AI models and AI-powered apps, but for any company using AI in their processes. Simply put, if you are using AI and something goes wrong (say, you are an accountant and your use of AI resulted in an error in your client’s tax return), you – not the software vendor, not the AI model your software vendor is using – are liable to your client. This could prove a major hurdle to the widespread adoption of AI-powered workflows and systems.

    ↗ AI is too risky to insure, say people whose job is insuring risk

    → 12:03 PM, Nov 25
  • The Jobs AI (And Robotics) Won’t Replace Anytime Soon

    Despite car manufacturer Ford offering mechanics a whopping $120,000 per year, the company has thousands of open jobs it can’t fill. And it’s not just Ford – talk to any company that relies on skilled labor, and you will hear the same story. The culprit: education. And not the kind you might expect, as in a lack of trade schools. No, it’s much simpler: kids can’t do math anymore!

    “Workers who struggle to read grade-level text cannot read complicated technical manuals or diagnostic instructions. If they can’t handle middle-school math they can’t program high-tech machines or robotics, or operate the automated equipment found in modern factories and repair shops.” […] America has good jobs, writes Pondiscio. “It lacks a K–12 system capable of preparing students to seize them.”

    ↗ Ford can’t find mechanics for $120K: It takes math to learn a trade

    → 4:58 PM, Nov 20
  • The Agent Will See You Now

    Take it with a grain of salt, as the study comes from the maker of Cursor, one of the leading AI coding tools, but the insights paint a compelling picture for the use of AI coding agents:

    Autonomous systems are driving a 39% increase in organizational software output while fundamentally shifting the cognitive nature of programming. Contrary to previous trends where junior workers benefited most from AI assistance, this study reveals that experienced developers have significantly higher acceptance rates for agent-generated code, primarily because they leverage the technology for higher-order “semantic” tasks, such as planning workflows and explaining architecture, rather than just syntactic implementation.

    The research highlights a transition from manual coding to a new paradigm of instruction and evaluation, noting that agents not only empower non-engineering roles (like designers and product managers) to contribute code but also disproportionately reward workers who possess the clarity and abstraction skills necessary to effectively direct AI behavior.

    That last point warrants repeating: You will need (new) skills to effectively direct AI behavior, which raises the question: Where are we teaching these skills? Certainly not in schools and colleges these days…

    ↗ AI Agents, Productivity, and Higher-Order Thinking: Early Evidence From Software Development

    → 12:02 PM, Nov 19
  • Can’t Trust That Survey Anymore

    Who knows, maybe our inaugural radical Pulse survey (you can still participate!) will be our last? Researchers at Dartmouth just published a paper demonstrating an AI-based tool that defeats all the safeguards online surveys use to weed out bots – and can thus flood online surveys (the backbone of many research efforts) with false data.

    “We can no longer trust that survey responses are coming from real people,” Westwood said in a press release. “With survey data tainted by bots, AI can poison the entire knowledge ecosystem.”

    As my statistics professor quipped some 30 years ago: “Never trust a statistic which you haven’t made up yourself.” Apparently, that sentence is based on a German proverb: “Traue keiner Statistik, die du nicht selbst gefälscht hast.” Who knew?!

    ↗ A Researcher Made an AI That Completely Breaks the Online Surveys Scientists Rely On

    → 8:36 AM, Nov 18
  • Fei-Fei Li’s Bet

    Legendary AI researcher Fei-Fei Li just published her perspective on where AI is – and needs to be – headed next: Spatial Intelligence. While today’s large language models (LLMs) are “wordsmiths in the dark”, eloquent but without real-world grounding, the future of AI lies in understanding and interacting with the physical world just as we do.

    Spatial intelligence will transform how we create and interact with real and virtual worlds—revolutionizing storytelling, creativity, robotics, scientific discovery, and beyond. This is AI’s next frontier. […] Spatial Intelligence is the scaffolding upon which our cognition is built.

    ↗ From Words to Worlds: Spatial Intelligence is AI’s Next Frontier

    → 11:31 AM, Nov 11