On time for your weekend: a round-up of this week's remarkable stories at the intersection of technology, business, design, and culture. Three reads and three listens; no fluff, just stuff.
"We are drowning in information, while starving for wisdom. The world henceforth will be run by synthesizers, people able to put together the right information at the right time, think critically about it, and make important choices wisely." – E.O. Wilson
Reading
The Real Crisis in the Humanities Isn't Happening at College:
The rapid acceleration of algorithmic and AI-driven systems (the latest flavor of the month in Silicon Valley) makes clear where we were heading. The goal in the technocracy is now obvious. Just pay attention to what they say. There's a reason why the most popular words in tech right now are acceleration, destruction, disruption. Have you figured out what they want to destroy and disrupt? Here's a clue: take a look in the mirror.
Ted Gioia – The Honest Broker | 14 minutes
Large language models can do jaw-dropping things. But nobody knows exactly why.
The biggest models are now so complex that researchers are studying them as if they were strange natural phenomena, carrying out experiments and trying to explain the results. Many of those observations fly in the face of classical statistics, which had provided our best set of explanations for how predictive models behave […] The tech works – isn't that enough?
Will Douglas Heaven – MIT Technology Review | 14 minutes
On the necessity of a sin:
This quasi-human weirdness is why the best users of AI are often managers and teachers, people who can understand the perspective of others and correct it when it is going wrong […] Rather than focusing purely on teaching people to write good prompts, we might want to spend more time teaching them to manage the AI, to get inside the non-existent head of the AI so that they can understand intuitively what works.
Ethan Mollick – One Useful Thing | 9 minutes
Listening
Looking for AI use-cases:
Everything around ChatGPT also supposes that you have the time and the willingness to recreate a whole new workflow that works for you. And that is the reason why all of these adoptions are slow and never happen en masse: most people do not have the time, the capacity, the willingness, or the desire to spend. Most people don't want to go and work out a completely new way of doing their job.
Toni Cowan-Brown – Another Podcast | 32 minutes
Why Multimodal Agents are the path to AGI:
What I'm more interested in is a definition of AGI that's oriented around a model that can do anything a human can do on a computer. If you think about that, which is super tractable, then an agent is just a natural consequence of that definition […] And then the thing we forgot is that de novo reinforcement learning is a pretty terrible way to get there quickly. Why are we rediscovering all the knowledge about the world?
David Luan – Latent Space | 42 minutes
Exploitable by Default: Vulnerabilities in GPT-4 APIs:
It's very cheap to perform these kinds of attacks, and it is actually one of the curses of scale, because bigger models are generally more sample efficient. So they learn more quickly, both from samples given in a prompt and from fine-tuning […] You don't need many examples of it doing the wrong thing for it to pick up on the correlation. In a less capable model it might be a bit harder to do this accidentally, because you would need lots of harmful examples.
Adam Gleave – Cognitive Revolution | 104 minutes
Timeless
1 year ago – Malleable software in the age of LLMs
2 years ago – It will take more than technological innovation to realise the next economic transition
3 years ago – The Rise of Platform Brands