I’m passionate about AI and LLMs, and I genuinely believe they could transform our world for the better. But I’m also a sarcastic realist who knows there are serious risks and challenges. I’ve struggled to find a clear way to describe how I imagine building AI responsibly while still pushing innovation.
With the emergence of artificial intelligence, we stand at a crossroads. This technology holds genuine promise, but it could just as easily pour gasoline on existing problems. If we continue to sleepwalk down the path of hyper-scale and centralization, future generations are sure to inherit a world far more dystopian than our own.
Turns out a lot of inspiring people (including some I’ve followed and admired for years, like Amelia Wattenberger and Simon Willison) have already nailed this in The Resonant Computing Manifesto. It lays out five principles for building resonant software (as Willison describes it):
- Keeping data private and under personal stewardship
- Building software that’s dedicated to the user’s interests
- Ensuring plural and distributed control rather than platform monopolies
- Making tools adaptable to individual context
- Designing for prosocial membership of shared spaces
My favorite part of AI, perfectly put into words:
This is where AI provides a missing puzzle piece. Software can now respond fluidly to the context and particularity of each human—at scale. One-size-fits-all is no longer a technological or economic necessity. Where once our digital environments inevitably shaped us against our will, we can now build technology that adaptively shapes itself in service of our individual and collective aspirations. We can build resonant environments that bring out the best in every human who inhabits them.
(via Simon Willison)