The rapid development of AI has demanded that leaders question the nature of progress: what success looks like, and how we get there. The alignment problem offers a fresh vantage point on that conversation, and on how we develop AI for human flourishing.
The Meaning Alignment Institute was first driven by the realisation that, “to wield the power and intelligence of AI well, you need sophisticated social thinking,” Ellie says. “Because you need to cultivate a very deep and refined attunement to what's worth augmenting and celebrating in life.” It’s a counter to the two main camps around AI progress and regulation: the effective altruists, who claim to favour safety over speed, and the techno-optimists “hitting the accelerationist pedal,” Ellie says. “People conflate concepts a lot: ‘Progress for progress’s sake is good’. But progress is fundamentally about human flourishing.”
The institute draws links from technology’s progress discourse to a wider hunger in society – for new possibilities and frameworks for growth. At an event in 2023, Peter Thiel challenged them to design a story of progress that resonates outside of the Silicon Valley bubble. “What I like is that he recognised the limitations of the traditional Silicon Valley spirit,” Ellie says. “Because he gave me this prompt: ‘what would inspire grandma?’ I thought that was interesting. Because yes, ideas of progress and technology tend to speak to one particular type of guy. So what would a story of progress be like, if they tried to speak to a different type of person?”
ChatGPT could be replaced with a wiser version, capable of acting as a mentor and problem solver concerned with what’s meaningful to users. The institute then hopes to see more generative AI models move in that direction, along with regulation. “We’re trying to replace different kinds of current systems,” says Joe. And even to impact “political and market structures. They also have this problem of being structured around preferences, not values.”
“The goal is full stack alignment: a very broad societal change, to inject some wisdom and some concern for people's values into large-scale systems. LLMs give us the opportunity to understand whether an experience is good for you – not just to assume that it is because you clicked. I don't just assume that if you spend time on social media, you're being well served by social media. I don't assume that if you bought something, you're being well served by capitalism. It’s all the same trick, which is just to look underneath the engagement metrics and see what's really happening, and then make a more sophisticated measure of success.”