👋 Just in time for your weekend: a round-up of this week’s remarkable stories at the intersection of technology, business, design, and culture. Three reads and three listens; no fluff, just stuff⚡️
We are drowning in information, while starving for wisdom. The world henceforth will be run by synthesizers, people able to put together the right information at the right time, think critically about it, and make important choices wisely. —E.O. Wilson
📚 Reading
A plea for solutionism on AI safety:
Safety is an achievement. It is an accomplishment of progress—a triumph of reason, science, and institutions. Like the other accomplishments of progress, we should be proud of it—and we should be unsatisfied if we stall out at our current level. We should be restlessly striving for more. A world in which we continue to make progress should be not only a wealthier world, but a safer world.
Jason Crawford—Roots of Progress | 9 minutes
I cannot believe the shit that morons are getting up to with ChatGPT:
One way of thinking about a program like ChatGPT is that it’s much better at assessing vibes than it is at reproducing facts […] Vibes-based search is very bad for research in situations where factual accuracy is important (legal briefs, journalism), but it’s not an entirely useless function […] its answers are very good at contextualizing, explaining, and appraising a given subject, i.e., “assessing the vibes.”
Max Read—Read Max | 11 minutes
Why Chatbots Are Not the Future:
I want to see more tools and fewer operated machines - we should be embracing our humanity instead of blindly improving efficiency […] I believe the real game changers are going to have very little to do with plain content generation. Let's build tools that offer suggestions to help us gain clarity in our thinking, let us sculpt prose like clay by manipulating geometry in the latent space, and chain models under the hood to let us move objects (instead of pixels) in a video.
Amelia Wattenberger | 8 minutes
🎧 Listening
Stimulating innovation from the C-Suite:
OKRs for something that's achievable: put in some stretch goals, because that's barely achievable, but you know where you're going. AKIs—Aspirations and Key Insights—for anything that you'd like to achieve but you admit you don't know yet. And then the process that goes with that, how you measure it, even how you lead those teams, is going to be fundamentally different.
Alex Osterwalder—Lancefield on the Line | 32 minutes
AI 2041: 10 Visions for Our Future:
[W]e are trying so hard to create another kind of imagination of the future about AI because I think it's not only about the AI superpower countries. It's not even about the elites, the privileged people […] Also, as an individual, how can we proactively engage and leverage the technology to uplift ourselves, unlock the potential of the self? You never want to compete with AI, with all this accuracy of calculation […] What's the real advantage of human course?
Chen Qiufan—Infinite Loops | 73 minutes
With AI, we’re making the same mistakes that we did with social media:
It's not like we're looking at the landscape saying that the doomers are hypothesizing what's going to happen; it is happening, and it's happening on the backs of a pretty ugly internet. Do I think there's a scenario where we get out of this and the world is a better place? Sure. I also think there is a scenario that is not that, and that things get decidedly worse. Letting the industry self-regulate […] is spectacularly naive.
Hany Farid—Danny in the Valley | 39 minutes