👋 Just in time for your weekend: a round-up of this week’s remarkable stories at the intersection of technology, business, design, and culture. Three to read and three to listen to—no fluff, just stuff ⚡
The past is written, but the future is left for us to write. And we have powerful tools: openness, optimism, and the spirit of curiosity – Capt. Jean-Luc Picard
📚 Reading
What to do when the AI blackmails you:
Something we haven’t quite internalised is that our safety-focused efforts are directly pushing the models into acting like humans. The default state for an LLM is not to take arbitrary actions based on user questions based on its best guess on what the “right thing to do” is … Which means we get really intelligent models which … will also reward hack or maybe even try to blackmail the users if you give them the wrong type of system prompt.
Rohit Krishnan—Strange Loop Canon | 6 minutes
On how to think about large language models:
Perhaps we should abandon metaphorical thinking and think historically instead. LLMs are a new language technology. As with previous technologies, such as the printing press, when they are introduced, our relationship to language changes…recognize how we become alienated from language, and to see ourselves as having agency in reappropriating LLM-mediated language practice as our own.
Kars Alfrink—Leapfroglog | 3 minutes
Extending Minds with Generative AI:
In thinking about the effects of all our new tools and technologies, we may often be starting from entirely the wrong place. The misguided starting point is an image of ourselves as (cognitively speaking) nothing but our own biological brains…We humans are and always have been…‘extended minds’ – hybrid thinking systems defined (and constantly re-defined) across a rich mosaic of resources only some of which are housed in the biological brain.
Andy Clark—Nature Communications | 18 minutes
🎧 Listening
AI Eats the World:
You can't just kind of hand-wave away the fact that these things are wrong sometimes. And you have to think about what you do with that and what products that means you can and can't build with it…I'm very conscious of that point about the right and wrong way to test these things. Don't test this according to the standards of the old thing. Test it on its own terms of what it's trying to do.
Benedict Evans—The MAD Podcast | 75 minutes
Vibe Coding, Gemini, and More:
None of these [coding] tools are actually going to be where this is going to be in the next couple of years and probably within the next six to 12 months. A lot of what's being built are these modifications of Visual Studio…The user experience, like the pure UX of doing vibe coding is still stuck in the 2010s engineering model. And we don't have a tool that's actually the next one.
Dave Morin—More or Less | 61 minutes
From Pages to Protocols:
I don't know why in an age where we have miracles, like in computing, everybody's so obsessed with these potential prototyping visions of the future that we may never reach…We're living in this kind of odd space right now where, as consumers, we don't have products that take advantage of the things that we have access to today. And we have a bunch of people promising us a future that is potentially completely made up.
Alex Schleifer—People vs Algorithms | 65 minutes
💎 Timeless
1️⃣ year ago—Will AI do to expert professions what the Model T did to railroads?
2️⃣ years ago—The ecosystem of modular minds
3️⃣ years ago—Tech Leaders Can Do More to Avoid Unintended Consequences