Weird Nerds
Misplaced AI Expectations ⓧ Tradeoffs in Weird Nerds ⓧ Raise Your Artificial Intelligence
👋 On time for your weekend: a round-up of this week’s remarkable stories at the intersection of technology, business, design, and culture. Three reads and three listens; no fluff, just stuff ⚡
There is absolutely no inevitability as long as there is a willingness to contemplate what is happening—Marshall McLuhan
📚 Reading
Are AI Expectations Too High or Misplaced?
I think in a short time we will look at features like summarization, rewriting, templates broadly, adding still photos or perhaps video clips, even whole draft documents as nothing more than new features and hardly a mention of AI […] Is it that AI really is an ingredient technology and always gets surrounded by more domain/scenario code? Is it that AI itself is an enabler that has many implementations and points people in a direction?
Steven Sinofsky—Learning by Shipping | 8 minutes
How to Raise Your Artificial Intelligence:
A very common trope is to treat LLMs as if they were intelligent agents going out in the world and doing things. That’s just a category mistake. A much better way of thinking about them is as a technology that allows humans to access information from many other humans and use that information to make decisions […] LLMs give us a very effective way of accessing information from other humans.
Alison Gopnik—LA Review of Books | 24 minutes
The Weird Nerd comes with trade-offs:
Most people, while liking non-conformism in the abstract and post-facto, are not very willing to actually put up with the personality trade-offs of Weird Nerds in practice […] Weird Nerds will have certain traits that might be less than ideal, that these traits come “in a package” with other, very good traits, and if one makes filtering or promotion based on the absence of those traits a priority, they will miss out on the positives.
Ruxandra Teslo | 11 minutes
🎧 Listening
What to Build in AI:
If you want human level intelligence from a model, it'll be a big model and there'll be a couple of them around, some better than others. But I don't think small models substitute for big models, but they do some things really well. If you're talking to something with very low latency, you want a shorter path. So that'd be a small model on device […] But they're meant to do responsive interfacing, not to be the source of intelligence.
Vinod Khosla—More or Less | 58 minutes
AI will make money sooner than you’d think:
One of the things that I'm nervous about is because the technology is so similar to what it feels like to interact with a human, that people overestimate it or trust it more than they should and put it into deployment scenarios that it's not ready for.
Aidan Gomez—The Verge | 73 minutes
What Do LLMs Tell Us About the Nature of Language—And Ourselves?
All these language models are essentially generating text from inside a distribution […] I think the truth is good—really, really, really good—writing is way out at the edge of that probability cloud, that distribution of content. And I think truly good writing actually pushes a bit beyond it. It's the stuff that expands the frontier of what we thought could be written. And that's precisely where language models are the weakest.
Robin Sloan—Every | 53 minutes
💎 Timeless
1️⃣ year ago—Detecting the Secret Cyborgs
2️⃣ years ago—The Rise of the Internet’s Creative Middle Class
3️⃣ years ago—Regulating technology