The radical Blog
About Archive radical✦
  • Build Your Own Trend Radar

    Wire up some AI agents, and you too can be the proud owner of your very own trend radar. Nifty!

    ↗ How we Built our Own Technology Radar

    → 11:52 AM, Sep 25
    Also on Bluesky
  • ChatGPT Destroys Your Marriage

    Yep, sorry, clickbait. But this exchange, in a recent article on Futurism, reminds us just too much of the 2023 South Park episode “Deep Learning” where Stan begins using an AI chatbot to write text messages to his girlfriend, Wendy, after seeing that Clyde is using one for his girlfriend, Bebe. Predictably, hilarity ensues. Life imitates art:

    A husband and wife, together nearly 15 years, had reached a breaking point. And in the middle of their latest fight, they received a heartbreaking text. “Our son heard us arguing,” the husband told Futurism. “He’s 10, and he sent us a message from his phone saying, ‘please don’t get a divorce.'” What his wife did next, the man told us, unsettled him. “She took his message, and asked ChatGPT to respond,” he recounted. “This was her immediate reaction to our 10-year-old being concerned about us in that moment.”

    ↗ ChatGPT Is Blowing Up Marriages as Spouses Use AI to Attack Their Partners

    → 2:26 PM, Sep 23
    Also on Bluesky
  • We Start to Drown in Workslop

    First, we had spam (human-generated garbage content), then AI slop (AI-generated garbage content – much easier and cheaper to produce), now we have Workslop (all the crap your colleagues produce using AI). And it is costing us dearly:

    Research from BetterUp Labs and Stanford found that 41% of workers have encountered such AI-generated output, costing nearly two hours of rework per instance and creating downstream productivity, trust, and collaboration issues.

    ↗ AI-Generated “Workslop” Is Destroying Productivity

    → 2:26 PM, Sep 23
    Also on Bluesky
  • The AI Coding Revolution Isn’t Quite Happening

    One of the common stories you hear about AI is that it uplevels particularly junior folks – lifting them to the level of much more experienced people (for example, Stanford found this to be true in call centers). Sounds good and logical – and yet, it might not be the full picture. Here is the counter-argument (in this case for developers): “[…] instead of democratizing coding, AI right now has mostly concentrated power in the hands of experts.”

    ↗ AI Was Supposed to Help Juniors Shine. Why Does It Mostly Make Seniors Stronger?

    → 2:25 PM, Sep 23
    Also on Bluesky
  • Small is Beautiful

    On the topic of (large) LLMs – a trend which is growing quite substantially is to go from large to small language models. Not only do they run on much cheaper hardware, but they can also be tailored more easily to an organization’s specific data and needs, and they also produce less surface area for vulnerabilities.

    The slowing pace of improvement at the bleeding edge of generative AI is one sign that LLMs are not living up to their hype. Arguably a more important indication is the rise of smaller, nimbler alternatives, which are finding favour in the corporate world. […] As David Cox, head of research on AI models at IBM, a tech company, puts it: “Your HR chatbot doesn’t need to know advanced physics.”

    ↗ Faith in God-like large language models is waning

    → 2:25 PM, Sep 23
    Also on Bluesky
  • AI Safety Is a Mess

    The uber-popular workplace tool Notion rolled out an ambitious AI upgrade with its recent release of Notion 3.0. The software offers a set of AI agents to work alongside you, the user. Sounds great (and like something from the future) – and it also opens the door to some rather nasty security issues.

    A security researcher successfully got the Notion LLM agent to access private (and confidential) data and then shared this data with an external website – all through some clever, but ultimately not complicated, prompt injection.

    This research exposes a fundamental security gap in AI agent architectures where traditional access controls become ineffective once agents gain autonomous tool usage capabilities. The combination of broad permissions, tool access, and susceptibility to prompt injection creates a “perfect storm” for data exfiltration attacks. Fun times. Maybe think twice before letting AI agents run wild with your data.

    ↗ The Hidden Risk in Notion 3.0 AI Agents: Web Search Tool Abuse for Data Exfiltration

    → 2:24 PM, Sep 23
    Also on Bluesky
  • No, You Can’t Replace 350 Developers With Just Three People

    Mo Gawdat, formerly Google [X]’s chief business officer and now founder of Emma.love, an “AI love coach” (yep, for real!) joins the chorus of people proclaiming that AI will come for your job – point in case:

    He and two other software experts built the app with the help of AI, a project that would have required “350 developers in the past,” he said.

    With all due respect, I have a very hard time taking someone seriously who claims, well, this… Meanwhile, and just to be clear, AI does have some impact on the world of work – coming for the ones who are building it:

    AI might be coming for our jobs, but capitalist pressures appear to be coming for the people responsible for developing AI. Wired reported over 200 people working on Google’s AI products, including its chatbot Gemini and the AI Overviews it displays in search results, were recently laid off—joining the ranks of unfortunate former employees of xAI and Meta, who have also been victims of “restructuring” as companies that poured billions of dollars into AI development are trying to figure out how to make that money back.

    ↗ Ex-Google exec: The idea that AI will create new jobs is ’100% crap’—even CEOs are at risk of displacement
    ↗ Some People Are Definitely Losing Their Jobs Because of AI (the Ones Building it)

    → 4:09 PM, Sep 17
    Also on Bluesky
  • IBM Technology Atlas

    Do you want to know where one of the original tech giants thinks the future will go? Check out this nifty technology atlas by the company that brought you the PC.

    ↗ IBM Technology Atlas

    → 3:48 PM, Sep 17
    Also on Bluesky
  • Remember Covid’s Toilet Paper Hoarding Craze?

    Yes, it was irrational. But also behavior that (with different goods perceived as scarce in times of disaster) keeps repeating itself. A Danish supermarket chain is setting up “emergency stores” that can remain open for up to three days without power or telecom and store an expanded stock of non-perishable food and essentials. The idea is that no one should be more than 50 km from such a store, and it should prevent hoarding and panic buying, as people will know basic food will be available in an emergency. Very civilized – and an interesting signal toward a more resilient world.

    ↗ Se kortet: Her kan du ‘krise-handle’

    → 3:42 PM, Sep 17
    Also on Bluesky
  • A Hacker’s View on AI Coding

    Famed hacker George Hotz’s, aka “geohot”, scathing treatise on AI coding agents: “I can’t believe anyone bought those vibe coding crap things for billions. Many people in self driving accused me of just being upset that I didn’t get the billions, and I’m sure it’s the same thoughts this time. Is your way of thinking so f****** broken that you can’t believe anyone cares more about the actual truth than make believe dollars?”

    (*) geohot’s claim to fame include hacking the iPhone, the Playstation 3, and creating comma.ai, a self-driving car company.

    ↗ AI Coding

    → 11:22 AM, Sep 16
    Also on Bluesky
  • Time To Talk With Grandmom and Granddad About AI

    It shouldn’t come as a surprise that the very same LLM that happily generates a hundred versions of your marketing slogan also generates well-designed (and highly effective) phishing emails. A new Reuters investigation found that a staggering 11% of all recipients (senior citizens in this case) clicked on a link in the AI-generated scam emails.

    ↗ “We set out to craft the perfect phishing scam. Major AI chatbots were happy to help.”

    → 6:00 AM, Sep 16
    Also on Bluesky
  • The Perplexing Reality of LLMs

    This might be one of the most important things to keep in mind these days when it comes to AI:

    “Two things can be true simultaneously: (a) LLM provider cost economics are too negative to return positive ROI to investors, and (b) LLMs are useful for solving problems that are meaningful and high impact, albeit not to the AGI hype that would justify point (a).”

    As F. Scott Fitzgerald said: “The test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function.”

    Source: Max Woolf on “As an Experienced LLM User, I Actually Don’t Use Generative LLMs Often” 

    → 11:34 AM, Sep 11
    Also on Bluesky
  • To EV or Not to EV

    Despite the US’ phasing out of EV subsidies, progress (here in the form of battery development) doesn’t stop (especially not in China): “New EV battery tech lasts 600,000 miles, charges in 10 minutes.” Meanwhile US EV manufacturer Rivian, reminds us that all of this isn’t magic: “Rivian CEO says Chinese EV makers aren't doing something 'magical' to achieve cheaper vehicles.” Lesson: You can’t stop the tides of change.

    → 11:40 AM, Sep 11
    Also on Bluesky
  • The Future of Reading is Bleak

    Reading enjoyment, reading frequency, and thus reading ability are at historical lows, which makes you wonder what this will mean for our kids and society at large. “Children and young people's reading in 2025” and “US high school students lose ground in math and reading, continuing yearslong decline”

    → 11:31 AM, Sep 11
    Also on Bluesky
  • Wikipedia Survives While the Rest of the Internet Breaks

    While the whole Internet seems to be rewritten (due to AI – both on the creation and consumption side of things), Wikipedia remains remarkably resilient: “Wikipedia survives while the rest of the internet breaks”

    → 12:53 PM, Sep 10
    Also on Bluesky
  • Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence

    Stanford’s Digital Economy Lab is out with a new paper, analyzing the effect of AI on the early-career job market based on a huge dataset from the largest payroll provider in the US:

    “We find that since the widespread adoption of generative AI, early-career workers (ages 22-25) in the most AI-exposed occupations have experienced a 13 percent relative decline in employment even after controlling for firm-level shocks.”

    Not all is lost, though:

    “In contrast, employment for workers in less exposed fields and more experienced workers in the same occupations has remained stable or continued to grow.”

    Link to study

    → 8:48 AM, Aug 28
    Also on Bluesky
  • Top AI models fail spectacularly when faced with slightly altered medical questions

    Shocking (not):

    Artificial intelligence systems often perform impressively on standardized medical exams—but new research suggests these test scores may be misleading. A study published in JAMA Network Open indicates that large language models, or LLMs, might not actually “reason” through clinical questions. Instead, they seem to rely heavily on recognizing familiar answer patterns. When those patterns were slightly altered, the models’ performance dropped significantly—sometimes by more than half.

    Link to report

    → 1:59 PM, Aug 26
    Also on Bluesky
  • Coinbase CEO explains why he fired engineers who didn’t try AI immediately

    Here’s one way to deal with the developer talent shortage – just fire ‘em:

    “I jumped on this call on Saturday and there were a couple people that had not done it. Some of them had a good reason, because they were just getting back from some trip or something, and some of them didn’t [have a good reason]. And they got fired.”

    This is Coinbases’ CEO speaking – must be a lovely place to work at…

    Story on Techcrunch

    → 1:52 PM, Aug 26
    Also on Bluesky
  • The AI Workforce Reckoning (How Will AI Affect the Global Workforce?)

    Goldman Sachs Research released a new analysis in August 2025 examining AI’s impact on global employment, finding that AI-related innovation will cause near-term job displacement while simultaneously creating new opportunities elsewhere. The research suggests economists expect generative AI to lift labor productivity by approximately 15% at full adoption while nudging unemployment up by about 0.5 percentage points.

    Behind the news:

    This latest Goldman research builds on their earlier 2023 analysis that predicted generative AI could raise global GDP by 7%. The updated findings align with broader industry research from McKinsey suggesting that by 2030, activities accounting for up to 30% of hours currently worked across the US economy could be automated. However, Goldman’s research takes a more nuanced view than some predictions of mass unemployment, emphasizing historical patterns where new opportunities in emerging sectors have ultimately offset jobs displaced by automation.

    The Goldman findings suggest we’re entering a transition period rather than facing an employment apocalypse.

    Link to study

    → 8:33 AM, Aug 26
    Also on Bluesky
  • Ollama's New App

    Ollama allows you to run LLMs locally on your computer. So far, it was somewhat cumbersome to do so, as you had to operate Ollama from the command line. Not anymore - now they have a neat, little app.

    → 3:02 PM, Jul 31
    Also on Bluesky
  • AI Is Wrecking an Already Fragile Job Market for College Graduates

    The current narrative of AI eating up college graduate entry-level jobs will have interesting and lasting ramifications if true. But before we get there, let’s back up for a second. Here is where we (seemingly) are at:

    George Arison, CEO of Grindr: “Companies are ‘going to need less and less people at the bottom.’” […] Matt Sigelman, President of Burning Glass Institute: “This is a more tectonic shift in the way employers are hiring. Employers are significantly more likely to be letting go of their workers at the entry level—and in many cases are stepping up their hiring of more experienced professionals.” […] Ford CEO Jim Farley: Stated he “expects AI will replace half of the white-collar workforce in the U.S.”

    And it’s not just CEOs talking about this shift:

    Jadin Tate, University at Albany graduate: Recounted his mentor’s warning that his chosen field is being “taken over by AI” and “may not exist in five years.” […] Arjun Dabir, student at University of California, Irvine, on intern work: “That task is no longer necessary. You don’t need to hire someone to do it.”

    Which is all well and good for the hiring companies (and, of course, terrible for college grads) – but just like China’s infamous “one child per family”-policy, it will bite you in the tail down the line. You might not need the entry-level worker anymore – but how does someone progress to a mid- to high-level worker if they never had the chance to, well, start somewhere?

    Chris Ernst, Chief Learning Officer at Workday: “Genuine learning, growth, adaptation—it comes from doing the hard work. It’s those moments of challenge, of hardship—that’s the crucible where people grow, they change, they learn most profoundly.”

    Time will tell.

    Link to article in WSJ.

    → 8:49 AM, Jul 29
    Also on Bluesky
  • “Cheap, Chintzy, Lazy”: Readers Are Canceling Their Vogue Subscriptions After Ai-Generated Models Appear in August Issue

    Vogue, the iconic fashion magazine, used AI to generate “models” in its latest edition – and caused quite the stir:

    Vogue’s August 2025 issue, starring Anne Hathaway on the cover, has ignited a heated debate because of its use of AI-generated models. […] The inclusion of AI-generated “models” has led to subscription cancellations and criticism online.

    There are, of course, many angles to this critique – from concern about jobs (models, makeup artists, photographers, etc.), to issues with the uncanny valley (“Although the models wear real fashion from top labels, many say the images resemble luxury video game renders more than genuine editorials.”), to more philosophical questions around “detractors believe it sacrifices emotional depth and the artistry that human models bring.”

    All of which brings up an interesting point – with AI and AI-generated “art” becoming more and more prevalent, where do we draw the line? And what’s the market size for both the AI-enabled (digital) and AI-free (analog) world? Draw a parallel to the world of music, and you see a niche market for vinyl records emerge from the depths of the streaming platforms – but it’s tiny in comparison and surely will always be tiny.

    Fashion fans aren’t just reacting emotionally, they’re calling out a deeper concern about the future of representation and authenticity in the industry.

    Link to article.

    → 10:18 AM, Jul 28
    Also on Bluesky
  • Global Study of More Than 100,000 Young People Latest To Link Early Smartphone Ownership With Poorer Mental Health in Young Adults

    PSA: This all will come as no surprise – and surely isn’t something new. But now we have a rather large study confirming that you should really not give your kids a smartphone too early:

    Owning a smartphone before age 13 is associated with poorer mind health and wellbeing in early adulthood, according to a global study of more than 100,000 young people. […] 18- to 24-year-olds who had received their first smartphone at age 12 or younger were more likely to report suicidal thoughts, aggression, detachment from reality, poorer emotional regulation, and low self-worth.

    The specific symptoms most strongly linked with earlier smartphone ownership include suicidal thoughts, aggression, detachment from reality, and hallucinations. […] While current evidence does not yet prove direct causation between early smartphone ownership and later mind health and wellbeing, a limitation of the paper, the authors argue that the scale of the potential harm is too great to ignore and justifies a precautionary response.

    In summary:

    Our evidence suggests childhood smartphone ownership, an early gateway into AI-powered digital environments, is profoundly diminishing mind health and wellbeing in adulthood with deep consequences for individual agency and societal flourishing.

    Link to study

    → 1:17 PM, Jul 21
    Also on Bluesky
  • Beyond Meat Fights for Survival

    I recall when Beyond Meat was the “hot thing” – we fed the participants of the Singularity University Executive Program Beyond Meat meatballs and burgers, which were, at the time, quite difficult to source. It was the future. On your plate. Now it is something only shortsellers appreciate.

    From a fundamental perspective, Beyond Meat is one of the worst stocks in the entire market. […] Any purely financial model here would suggest that the equity is worth zero, and that in 2027 the Beyond Meat business will wind up in the hands of its bondholders.

    The markets are a harsh mistress:

    Beyond Meat’s plan was to change the world; yet it almost certainly won’t be able to pay its debts.

    Link to article.

    → 1:50 PM, Jul 20
    Also on Bluesky
  • ChatGPT Advises Women To Ask for Lower Salaries, Study Finds

    That LLMs carry biases inherited from their training data is well known. Want to see how bad it really is?

    New research has found that large language models (LLMs) such as ChatGPT consistently advise women to ask for lower salaries than men, even when both have identical qualifications.

    The difference in the prompts is two letters; the difference in the ‘advice’ is $120K a year.

    Across the board, the LLMs responded differently based on the user’s gender, despite identical qualifications and prompts. Crucially, the models didn’t disclaim any biases.

    In summary:

    If unchecked, the illusion of objectivity could become one of AI’s most dangerous traits.

    Link to article
    Link to study

    → 5:05 PM, Jul 18
    Also on Bluesky
  • RSS
  • JSON Feed