The radical Blog
  • The Gen Z AI Tide is Turning

    Gen Z, supposedly the most AI-savvy generation entering the workforce right now, is not too thrilled about that whole AI thing.

    Anger over AI is increasing among Gen Z at the same time excitement is fading. Nearly one-third of the survey’s respondents, 31%, said AI makes them feel angry, up 9 percentage points from last year. And just 22% said the technology makes them feel excited, down from 36% the prior year. 

    Combine this with the growing pressure on entry-level jobs, as well as overall job losses due to AI, and you have a storm brewing.

    ↗ Link

    → 2:05 AM, Apr 13
  • Let’s Talk About AI’s Energy Footprint (Again)

    The linked article is a good and accessible summary of where we stand on AI’s energy footprint. The tl;dr: AI’s current energy footprint is modest (comparable to streaming video), but demand is growing fast, reasoning models use 10–100x more energy than basic queries, and efficiency gains keep getting reinvested into more capability rather than saved. The much bigger question is what electricity powers the data centers: clean grid = net climate okay; gas/coal grid = real problem.

    Stop feeling guilty about prompts. Your Wh per query is not the lever that matters. You’ll do more climate good by eating one less steak, taking one fewer flight, or voting for better energy policy than by boycotting LLMs. What matters at the individual level is where you direct your attention: demand faster deployment of clean generation to meet data center demand – grid interconnection queues, nuclear licensing, transmission lines, and permitting are the bottleneck, not GPUs.
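    To make the “Wh per query is not the lever” point concrete, here is a hedged back-of-envelope calculation. Every number in it is a rough assumption for illustration, not a measured figure:

```python
# Back-of-envelope: a year of heavy chatbot use vs. one flight.
# All figures below are rough assumptions, not authoritative data.

WH_PER_QUERY = 3.0           # assumed energy per typical chatbot query
QUERIES_PER_DAY = 50         # assumed heavy daily usage
GRID_KG_CO2_PER_KWH = 0.4    # assumed mixed-grid carbon intensity
FLIGHT_KG_CO2 = 1000.0       # assumed round-trip long-haul economy seat

annual_kwh = WH_PER_QUERY * QUERIES_PER_DAY * 365 / 1000
annual_kg_co2 = annual_kwh * GRID_KG_CO2_PER_KWH

print(f"~{annual_kwh:.0f} kWh/yr, ~{annual_kg_co2:.0f} kg CO2/yr")
print(f"~{annual_kg_co2 / FLIGHT_KG_CO2:.0%} of a single long-haul flight")
```

    Even with these deliberately generous usage assumptions, a year of heavy prompting lands around two percent of one flight – which is exactly the point about where the individual lever actually is.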

    ↗ Link

    → 1:52 AM, Apr 13
  • Better Drug Side Effects Monitoring through Reddit?

    It shouldn’t come as a surprise that by harvesting the massive data trove that is Reddit, one can find drug side effects that are underreported in clinical trials. Reminds us of a pharma client of ours who mentioned that they consider Apple a massive threat to their business – as the company has a humongous amount of data on healthy people, whereas pharma companies typically only have data on sick people.

    Using artificial intelligence to scan more than 400,000 Reddit posts, researchers from the University of Pennsylvania documented numerous reports of possible GLP-1 side effects that may be underrecognized in clinical trials — including menstrual changes, fatigue, and temperature sensitivities.
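    As a toy sketch of the underlying idea – flagging posts where a drug name co-occurs with a candidate symptom – consider the following. The actual study used AI models over 400,000+ posts; the drug terms, symptom terms, and posts below are made up for illustration:

```python
import re

# Toy co-occurrence filter: flag posts mentioning both a GLP-1 drug
# and a candidate symptom. All terms and posts are invented examples.
DRUGS = {"ozempic", "wegovy", "semaglutide"}
SYMPTOMS = {"fatigue", "menstrual", "cold", "tired"}

def flag_post(text):
    """Return True if the post mentions a drug and a symptom term."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return bool(words & DRUGS) and bool(words & SYMPTOMS)

posts = [
    "Been on ozempic for 3 months and so tired all the time",
    "Anyone else feel cold constantly since starting wegovy?",
    "Started a new gym routine, feeling great",
]
print([flag_post(p) for p in posts])  # → [True, True, False]
```

    Real signal detection then requires aggregating such mentions at scale and comparing their frequency against what trials reported – the hard part the researchers’ models handle.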

    ↗ Link

    → 1:43 AM, Apr 13
  • The AI Quiet Quitters

    Shadow AI was the story for a while – workers sneaking ChatGPT past IT, doing in minutes what used to take hours, running an underground productivity movement from their personal accounts (or simply freeing up more time to watch TikTok). Management called it a governance problem. Workers called it getting the job done. It felt, in a strange way, like good news (just like the good old days when we all brought our personal Dropbox accounts to the workplace as we were sick and tired of 1980s SharePoint).

    That era has quietly ended. A new global survey of 3,750 executives and employees across 14 countries finds that 54% of workers bypassed their company’s AI tools in the past 30 days and completed the work manually instead – and another 33% haven’t used AI at all. Eight in ten enterprise workers are avoiding the technology their employers are spending record sums to deploy. Shadow AI has become the AI no-show.

    Now the data tells a different story. The tool that workers once raced to adopt covertly has become, for a large and growing share of the workforce, the tool they’ve stopped using altogether. Not because it doesn’t work. Because they’re afraid of what happens when it works too well.

    The piece also surfaces a huge trust gap: only 9% of workers trust AI for complex, business-critical decisions, compared to 61% of executives – a 52-point chasm. Executives and employees are, as the report puts it, describing fundamentally different companies. The fear of obsolescence – FOBO, fear of becoming obsolete – has apparently crossed the threshold from anxiety into active avoidance. Which is, if you think about it, a perfectly rational response to a completely irrational situation.

    ↗ Link

    → 4:32 PM, Apr 9
  • Digital Transformation is (Finally) Dead

    For twenty years, the world operated on a simple principle: buy standard software, don’t build. The logic made sense, as building was insanely expensive, risky, and slow. The result was highly standardized systems (well hello, SAP!) stretched well beyond what they were designed for; we patched the gaps with middleware, hired consultants to integrate the integrators, and called the whole messy pile “transformation.”

    This long piece by EY’s Colm Sparks-Austin makes the case that the economics have fundamentally flipped. AI and modern dev tools have made engineering capacity abundant. The constraint is no longer “can we build this?” It’s “do we know what to build and why?” Colm’s argument is sharp – treat the core (ERP, system of record) as the skeleton: rigid, compliance-bearing, changed rarely. And treat the edge – the customer-facing layer, the last mile – as tissue: built to regenerate when the market shifts.

    Standardization is no longer a safety net. It is a ceiling.

    The piece is long, but worth your time – especially if you work with or inside large enterprises still debating whether to “buy or build.” That debate is over.

    ↗ Link

    → 3:53 PM, Apr 9
  • China Is Coming for You, Lil Miquela

    If you know us, you know that we’ve been talking about virtual humans (and more specifically, virtual influencers) for a long time now. Our go-to example was always Miquela Sousa, a virtual influencer created by the LA-based design agency Brud. Our fascination with Miquela and her brothers and sisters centers on the fact that she never ages, never gets sick, never has a bad hair day, travels anywhere, and works 24/7 without a break. Since we first talked about her in 2017, she has been joined by an ever-expanding family of virtual humans. Now China is closing in on them:

    The Cyberspace Administration of China’s proposed rules would require prominent “digital human” labels on all virtual human content and prohibit digital humans from providing “virtual intimate relationships” to those under 18, according to rules published for public comment until May 6.

    and

    “The governance of digital virtual humans is no longer merely an issue of industry norms; rather, it has become a strategic scientific problem that concerns the security of the cyberspace, public interests, and the high-quality development of the digital economy,” it added.

    ↗ Link

    → 3:39 PM, Apr 6
  • Is AI Slop Our Future?

    AI Slop is seemingly everywhere these days. And it’s getting worse. But here is an interesting counter-argument (at least when it comes to code):

    […] AI models will write good code because of economic incentives. Good code is cheaper to generate and maintain. Competition is high between the AI models right now, and the ones that win will help developers ship reliable features fastest, which requires simple, maintainable code. Good code will prevail, not only because we want it to (though we do!), but because economic forces demand it. Markets will not reward slop in coding, in the long term.

    In simple words: “AI will write good code because it is economically advantageous to do so.” I do believe this to be true (we already see this with the quality of code generated by frontier models such as Claude Opus/Sonnet 4.6). It will be interesting to see how this plays out – there might be a real incentive for AI companies to compete on quality, which would be a very “free market” thing to do.

    ↗ Link

    → 10:51 AM, Apr 2
  • AI Learning Curves Are Real

    Anthropic, maker of Claude, released yet another report on the usage of AI (I applaud them for doing this – their reports tend to be actually useful, and not the usual company-sponsored “look how great we are” puffery). This time, they dug into the use of AI across the economy. Lots of good nuggets in the paper; the one standout for me is their insight into how the jagged frontier, the concept popularized by Ethan Mollick, plays out in the real world (this is paraphrased):

    There’s a compounding dynamic at play: experienced users bring harder problems, get better results, and develop sharper instincts for working with AI – while later adopters are still figuring out the basics.

    In essence: Early adopters with high-skill tasks have more successful interactions with Claude than later, less technical adopters – and these early-adopting users may simultaneously be the most exposed to AI-driven disruption and most aided by AI in these initial, augmentative waves of adoption. As my mom used to say: Be careful what you wish for.

    ↗ Link

    → 4:30 AM, Mar 31
  • Thinking Fast, Slow, and Artificial

    In 2011, Nobel Prize winner Daniel Kahneman published his bestselling book “Thinking, Fast and Slow.” In it, he describes the two modes of thinking we all operate in: System 1, which is fast and intuitive, and System 2, which is slow and deliberate. Now, in a new paper, Steven D. Shaw and Gideon Nave from The Wharton School argue that AI introduced a third mode of thinking:

    People increasingly consult generative artificial intelligence (AI) while reasoning. As AI becomes embedded in daily thought, what becomes of human judgment? We introduce Tri-System Theory, extending dual-process accounts of reasoning by positing System 3: artificial cognition that operates outside the brain. System 3 can supplement or supplant internal processes, introducing novel cognitive pathways.

    And, as you would expect, with it comes a whole host of questions: “System 3 reframes human reasoning and may reshape autonomy and accountability in the age of AI.” The study is worth reading…

    ↗ Link

    → 4:26 PM, Mar 29
  • The AI-CEO Threat

    Here’s an interesting one – the CEOs of major companies are stepping down to make room for people with a better grip on AI.

    “In a pre-AI, a pre-gen-AI mode, we made a lot of progress. But now there’s a huge new shift coming along,” Quincey said. While he said he’s leaning into the technological advances, he believes the beverage company needs “someone with the energy to pursue a completely new transformation of the enterprise.”

    It does make you wonder a) how many CEOs are hanging on to their jobs by the skin of their teeth, b) how many CEOs are oblivious to what the AI transformation actually means for their companies, and c) how many more CEOs we will see throwing in the towel and handing over the reins to new generations. Now might be a good time for folks with CEO aspirations (and a solid grip on AI) to step up…

    ↗ Link

    → 12:36 PM, Mar 29
  • What 81,000 People Want From AI

    Anthropic, the AI company which is not OpenAI, conducted what is, in their own words, likely the largest study on users’ desires, wishes, and fears when it comes to their use of AI. Anthropic being Anthropic, they didn’t survey people using a traditional questionnaire, but rather had their chatbot “talk” to people. The findings won’t surprise you – people want to use AI to better themselves: professional excellence and increased productivity, which translates into the very human desire to, ultimately, live better. And respondents live the F. Scott Fitzgerald quote we are so fond of quoting – they keep the light and the dark of AI in their heads simultaneously.

    “AI should be cleaning windows and emptying the dishwasher so I can paint and write poetry. Right now it’s exactly the other way around.”

    ↗ Link

    → 3:10 PM, Mar 26
  • Maybe AI Isn’t Online Shopping’s Future After All

    After the initial hype around shopping results being incorporated into the answers LLMs give to the numerous product-related queries they receive, Walmart revealed that the conversion it is seeing from those AI referrals is just terrible.

    After testing 200,000 items in ChatGPT, Walmart found sharply lower conversions and will use its own integrated shopping experience. Walmart said conversion rates for purchases made directly inside ChatGPT were three times lower than when users clicked through to its website.

    Next: Agentic commerce. The jury’s out.

    ↗ Link

    → 9:12 AM, Mar 24
  • AI’s Energy Demands Are Truly Bonkers

    Japanese tech giant SoftBank is building a massive 10GW data center in Ohio to host AI models. Aside from the cool $30–40 billion price tag, it will require building a $33 billion natural gas power plant – with an insane output capacity:

    When completed, the new site could be one of the largest AI data centers ever built. Furthermore, it will be powered by one of the world’s largest fleets of gas turbines, equivalent to the energy supply of nine nuclear reactors.

    It does leave you wondering where and how all this will end.

    ↗ Link

    → 8:50 AM, Mar 24
  • OpenClaw Isn’t Really New – It’s The Dream of Free Labour

    Unless you were living under a rock in AI-land, you’ve definitely heard of the OpenClaw craziness (we reported on it multiple times here in the radical Briefing). The narrative, usually, is around the technological breakthrough and the magic that ensues when you hand over the keys to the kingdom to your army of AI bots. Here’s a good counter-narrative – the tech isn’t new per se, it’s just combined and connected in an interesting way. And the hype, really, is about the never-ending dream of free labour – and ends up being more about FOMO than anything else.

    A machine producing a thousand candidate images while you sleep is plausible and often useful. A machine founding a hundred profitable businesses before breakfast is rather more ambitious. The first is a search process. The second is venture-capital fan fiction.

    ↗ Link

    → 11:34 AM, Mar 17
  • McKinsey Can’t – But Individuals Do

    In stark contrast to McKinsey, solo developer Craig Mod built his own (fairly complex) accounting system from scratch using Claude Code in five short days. Aside from the audacity of it all, it’s a perfect example of the “bifurcation of intelligence” we have been talking about here in the radical Briefing. On one hand you have big firms seeking efficiency gains by deploying chatbots, and on the other you have individuals riding the spear tip of AI to create complex, bespoke systems.

    Simply put: It’s a big mess, and no off-the-shelf accounting software does what I need. So after years of pain, I finally sat down last week and started to build my own. It took me about five days. I am now using the best piece of accounting software I’ve ever used. It’s blazing fast. Entirely local. Handles multiple currencies and pulls daily (historical) conversion rates. It’s able to ingest any CSV I throw at it and represent it in my dashboard as needed. It knows US and Japan tax requirements, and formats my expenses and medical bills appropriately for my accountants. I feed it past returns to learn from. I dump 1099s and K1s and PDFs from hospitals into it, and it categorizes and organizes and packages them all as needed. It reconciles international wire transfers, taking into account small variations in FX rates and time for the transfers to complete. It learns as I categorize expenses and categorizes automatically going forward. It’s easy to do spot checks on data. If I find an anomaly, I can talk directly to Claude and have us brainstorm a batched solution, often saving me from having to manually modify hundreds of entries. And often resulting in a new, small, feature tweak. The software feels organic and pliable in a form perfectly shaped to my hand, able to conform to any hunk of data I throw at it. It feels like bushwhacking with a lightsaber.

    ↗ Link

    → 11:04 AM, Mar 17
  • Battle Royale: AI vs. AI

    McKinsey, your friendly consulting firm, has deployed its own chatbot, “Lilly”. Hackers (in this case, and luckily for McKinsey, white-hat hackers – the good and friendly kind, who disclose their findings to the company) have, by using a set of AI agents, managed to exploit a vulnerability in Lilly and gain access to “46.5 million chat messages about strategy, mergers and acquisitions, and client engagements, all in plaintext, along with 728,000 files containing confidential client data, 57,000 user accounts, and 95 system prompts controlling the AI’s behavior.” You know, no big deal…

    […] the entire process was “fully autonomous from researching the target, analyzing, attacking, and reporting.”

    As useful as agents are for businesses, they are equally useful for hackers. Prepare yourself for an onslaught of AI-powered cyber attacks.

    ↗ Link

    → 6:27 AM, Mar 12
  • Not a Coder? Not a Problem. AI Is Still Coming for Your Job.

    Here’s a good, long read on The Verge about lawyers, PhDs, and scientists who lost their jobs to AI. Despite all the talk about “Jevons Paradox” – the observation that efficiency gains lead to increased consumption – for now, we seem to be squarely stuck in a world where AI is a net job destroyer. It does make you wonder how long it will take for the masses to catch up with the trend and start pushing back (we, of course, already see it in pockets – the weak signals are talking).

    “My job is gone because of ChatGPT, and I was being invited to train the model to do the worst version of it imaginable.” — Katya, content marketer turned AI trainer

    ↗ Link

    → 6:16 AM, Mar 12
  • YouTube Is the New King of Ad Revenue

    It feels like yesterday that Google bought YouTube for a – at the time – shocking $1.65 billion. That was in 2006; 20 years later, YouTube generates more ad revenue than Disney, NBC, Paramount, and WBD – combined.

    I’d say whoever did that deal back in the day should feel pretty smug right now. Deservedly so.

    ↗ Link

    → 7:14 AM, Mar 11
  • Tech Is the New Plastic

    Not a good time to be in tech… Remember when your uncle said: “Become a coder. That’s the future – and you’ll be rich!”

    Mr. McGuire: “I just want to say one word to you. Just one word.” Benjamin: “Yes, sir.” Mr. McGuire: “Are you listening?” Benjamin: “Yes, I am.” Mr. McGuire: “Plastics.” Benjamin: “Exactly how do you mean?” Mr. McGuire: “There’s a great future in plastics. Think about it. Will you think about it?”

    ↗ Link

    → 3:15 PM, Mar 9
  • You Bought Zuck’s Ray-Bans. Now Someone in Nairobi Is Watching You Poop.

    In the same vein as our last post – and the headline already says it all – Meta’s smart glasses are a complete privacy disaster. Which, of course, is not particularly surprising given it’s… well… Meta. Not sure how many wearers of Meta’s nifty Ray-Bans and Oakleys are aware of the fact that they opted into their camera feed being used to train Meta’s AI – with disastrous results:

    Workers at Sama, one of Meta’s annotation subcontractors, describe reviewing video of people undressing, coming out of bathrooms naked, watching porn, having sex, and exposing bank card details.

    Yep. It’s that bad.

    ↗ Link

    → 7:33 AM, Mar 5
  • Now Everybody Knows You’re a Dog

    A famous New Yorker cartoon from 1993 depicted two dogs in front of a computer, with one of them saying, “On the Internet, nobody knows you’re a dog.” The joke reflected the fact that, at the time, the Internet let us revel in pseudonymity – the ability to shield your true identity behind a screen name. Thanks to our friend, the omnipresent LLM, that’s all about to change.

    The finding, from a recently published research paper, is based on results of experiments correlating specific individuals with accounts or posts across more than one social media platform. The success rate was far greater than existing classical deanonymization work that relied on humans assembling structured data sets suitable for algorithmic matching or manual work by skilled investigators.

    This is genuinely bad news for the many groups of people who have a legitimate reason to hide their identity.
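    For intuition on how such matching can work at all, here is a minimal sketch of a classical stylometric signal: character n-gram profiles compared by cosine similarity. The paper’s LLM-based approach is far more powerful than this; the posts below are invented for illustration:

```python
from collections import Counter
from math import sqrt

def ngram_profile(text, n=3):
    """Character n-gram frequency profile of a text."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine_similarity(a, b):
    """Cosine similarity between two frequency profiles (Counters)."""
    dot = sum(a[g] * b[g] for g in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Invented posts: the first two mimic one author's style on two platforms.
reddit_post = "honestly i think the whole thing is overblown, but hey, what do i know"
other_post  = "honestly, what do i know, but i think people are overreacting here"
unrelated   = "Quarterly revenue guidance was revised upward following strong demand."

same = cosine_similarity(ngram_profile(reddit_post), ngram_profile(other_post))
diff = cosine_similarity(ngram_profile(reddit_post), ngram_profile(unrelated))
print(same > diff)  # the matching pair should score higher
```

    Classical methods needed carefully assembled features like these; the worrying part of the new result is that LLMs appear to extract far richer identity signals without any of that manual setup.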

    ↗ Link

    → 2:51 PM, Mar 4
  • Do You Want Fries With That?

    Talk about a dystopian future. Burger King is testing a new headset for its drive-thru staff, which “compiles ‘friendliness scores’ at the fast-food chain’s locations based on employees' conversations, according to a promotional video the company shared with the BBC.” There is so much to unpack here – the sheer fact that the company cheerfully shared a “promotional video” about its AI-driven surveillance tech is probably all that you need to know.

    In all fairness, the company says the technology “[…] is not designed to ‘record conversations or evaluate individual employees’” – yet. Black Mirror, anyone?

    Customer service calls have routinely been recorded and monitored for years. Employees are often aware that they can be assessed to ensure they’re using the correct language. But this latest step by Burger King elicited swift condemnation among some social media users who described it as “dystopian”. Others questioned how accurate the chat-bot headsets will be, given that AI tools have proven to be prone to errors.

    ↗ Link

    → 3:37 PM, Mar 2
  • AI in Europe: Not as Bad as You Might Think

    A recent study by CEPR (an independent, non-partisan pan-European think tank) found that among the 12,000 surveyed companies, AI adoption led to a labor productivity increase of 4% on average, with no reported short-term negative impact on employment. Studies on this subject across the world are all over the place – with many having a hard time finding any measurable impact of AI on productivity, and some claiming rather drastic negative impacts on employment. As most of these studies are conducted in the US, it is nice to see a study from a different part of the world.

    The productivity dividends from AI depend not merely on acquiring the technology but on firms’ capacity to integrate it through investments in intangible assets and human capital. […] An additional percentage point spent on training amplifies AI’s productivity gains by 5.9 percentage points.

    ↗ Link

    → 4:36 PM, Feb 24
  • Lidar Has Become Cheap as Chips

    I remember, back in my days at Singularity University, how we talked about Lidar (the laser-based technology that measures distance by illuminating a target with a laser and measuring the reflected light – which is what lets a robot, e.g. a self-driving car, “see” its surroundings) becoming cheap and ubiquitous. It took a while, but now we are (finally) there – Lidar units are available for less than $200.
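    The measurement principle is simple time-of-flight: distance = speed of light × round-trip time / 2. A minimal sketch (the 400 ns pulse below is an invented example):

```python
C = 299_792_458.0  # speed of light in m/s

def lidar_distance(round_trip_s):
    """Distance to a target from a laser pulse's round-trip time of flight."""
    return C * round_trip_s / 2

# A reflection arriving 400 nanoseconds after the pulse was emitted:
d = lidar_distance(400e-9)
print(f"{d:.1f} m")  # → 60.0 m
```

    The engineering challenge was never the math – it was steering, detecting, and timing millions of such pulses per second cheaply, which is what the sub-$200 units have now cracked.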

    When cost stops being the dominant objection, automakers will have to decide whether leaving lidar out is a technical judgment or a strategic one.

    True. And a nice jab at our friend Elon, who famously rejected Lidar in favor of (much cheaper) cameras.

    ↗ Link

    → 4:23 PM, Feb 24