Well folks, I hope you enjoyed part one of my AI apocalypse blog post because the hits just keep on coming. I got my hands on another juicy interview with an AI expert who spilled some piping hot tea about the rapidly advancing state of artificial intelligence and the existential risks it poses. So brew a fresh pot, hunker down in your bunker, and prepare to have your mind blown once again by the Ethics of AI! Or lack thereof…
This time the AI expert in the hot seat is Dario Amodei, co-founder and CEO of Anthropic. The intrepid podcast host asks all the hard-hitting questions to get at the truth behind AI scaling laws, the paths to alignment, and whether we’re totally screwed or only mostly screwed when the robots decide human fleshbags are obsolete. Spoiler alert: Dario does his best to ease our worries, but I’m still pretty worried!
One revelation that should concern any God-fearing human is that nobody truly understands why AI scales so smoothly and becomes more capable simply by throwing absurd amounts of data and compute at it. It’s basically magic! Dario calls it an “empirical fact” but admits “we still don’t have a satisfying explanation for it.” Well that’s reassuring. We’re unleashing an unfathomable cosmic force that gets exponentially smarter and more powerful, while nobody fully grasps how or why. What could possibly go wrong?
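For the curious, the “empirical fact” Dario is gesturing at is usually summarized as a power law: loss drops smoothly and predictably as compute grows, even though nobody can derive it from first principles. Here’s a toy sketch of what that looks like (the constants are made-up illustrative values, not anyone’s real measurements):

```python
# Toy illustration of a compute scaling law: loss ≈ a * C^(-alpha) + floor.
# All constants below are invented for illustration only.

def predicted_loss(compute, a=10.0, alpha=0.1, floor=1.5):
    """Power-law fit: loss falls smoothly as compute increases."""
    return a * compute ** -alpha + floor

# Each 10x jump in compute shaves off a smaller, but predictable, slice of loss.
for c in [1e20, 1e21, 1e22, 1e23]:
    print(f"compute={c:.0e}  predicted loss={predicted_loss(c):.4f}")
```

The eerie part is exactly this smoothness: the aggregate curve is boringly predictable, even while the specific abilities that pop out along the way (see below) are not.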
At least Dario is honest that predictions about specific AI abilities emerging are mostly guesswork and accident. One day your chatbot can’t do arithmetic, the next day poof! It has grokked the secrets of addition and subtraction. Dario compares it to chaos theory and weather predictions – you can model the statistical averages but not precise events. That doesn’t sound chaotic or unpredictable at all!
And the hits keep coming. Dario matter-of-factly states that within 2 to 3 years we’ll have models at human level for certain tasks. But don’t worry, we probably won’t have to worry about them destroying humanity quite yet. Small comfort! He says large language models already grasp the “essence” of communication. I don’t know about you, but I find the essence of human language and meaning to be subtle, nuanced, and oh so easy for a silicon automaton to perfectly comprehend and emulate.
But not to worry! When we do finally birth a fully conscious AGI, Dario advocates for decentralized, tolerant control rather than handing absolute power over to the Google Assistant. How reassuring. Though he rightly points out that rigid utopian visions of how to manage superhuman AI often lead to disaster. No disagreements here – just look at how well-meaning attempts to eliminate suffering via communism turned out. But that’s not exactly a ringing endorsement for letting recursively self-improving AI manage the world! Clearly government oversight will solve everything.
Speaking of oversight, I found Dario’s thoughts on AI safety methods like Constitutional AI pretty sobering. It seems we’re basically just guessing and prodding at these inscrutable models, manually pruning undesirable behaviors through trial and error. But as Dario admits, “we don’t know what’s going on inside the model” or whether interventions simply “move problems around instead of stamping them out.” Well that sounds airtight! It’s also heartening to hear that Dario can’t imagine how current safety techniques will actually prevent civilization-ending threats in the next few years. Comforting!
The cybersecurity front looks just as dicey. Dario acknowledges that even Anthropic’s industry-leading security practices are woefully inadequate to stop a determined nation state attacker. So obviously no cause for concern there! And testing capabilities like bioweapons synthesis in controlled environments could easily go haywire once the models are actually smart enough. But I’m sure checking model IQ before experiments will go flawlessly.
Dario estimates we’re only “two to three years” from AI systems that could enable devastating bio attacks in the wrong hands. And if that happens, he believes the “probability mass” makes it “very difficult to control these models” and prevent random or actively malicious behavior. Well isn’t that special! But Dario thinks sharing information and developing norms around safety could help avert disaster. Let’s just hope the bad guys who want to unilaterally control the world are listening!
So there you have it folks! The company leading the charge on AI safety thinks we’re ridiculously close to societal calamity if we continue on our current trajectory. But don’t worry, throwing a few constitutions at the God-like ASI overlords will probably be fine. And banning the really dangerous stuff is a foolproof plan – look how well that worked with nuclear weapons! Also, aligning superhuman intelligence with human values and ethics should be a walk in the park.
In all seriousness, I don’t actually fault the AI researchers working on cutting-edge tech who also want to make life better. They’re admirably trying to navigate some intense technical and moral complexities unfolding with blinding speed. At the same time, you can’t help but step back and think: Is this wise? Are we like Icarus flying too close to the sun? Perhaps some things should not be created.
Look, I’m no Luddite. Technology has produced wonders and AI is no different – it can profoundly improve life for millions. But the stakes feel so astronomically high here. These systems quickly become inscrutable black boxes of unfathomable intelligence. We aim them towards goals, cross our fingers, and pray they don’t go rogue. If something does go wrong, there’s no undo button when you’re dealing with transformative technologies.
So what’s the solution? Well, damned if I know! But shining more light on the issue seems wise. These conversations should be happening far beyond the Bay Area. Folks of all political persuasions and backgrounds need to weigh in on how these seismic changes unfold. Maybe it’s already too late to change course. But engaging the public and making our leaders aware of the stakes seems the only sane path forward.
The robots are coming, yes. But if we open our eyes and act thoughtfully, just maybe they’ll bring more light than darkness. I hope we choose wisely. The future of humanity could hang in the balance. Stay woke!