
Our take on the future of AI
19 June 2024
“You must be an AI optimist.”
AI is likely to continue growing more capable
When people breathlessly hype current developments in AI, they often don’t realise that the technology behind the latest wave of AI hype was invented in 2014, and that the underlying principle of building neural networks with computers has been around since 1943.
Since the first real AI craze in the 1950s, the field has split and merged into many different engineering sub-disciplines.
Knowledge graphs, adversarial systems, self-play. That doesn’t even touch on all the different kinds of AI that have been invented since then, including models of intelligence derived from first principles that are still in their infancy. AI is an enormous field of study with many branches, one of which (transformers) has emerged as being able to turn natural language into code. That doesn’t mean this branch will become generally intelligent. It is just one more capability added to a long list.
The creation of all kinds of artificial intelligence coincides with many network effects, and those network effects tend to follow power scaling laws like Metcalfe’s law. All the processes required to create AI benefit from the scaling laws produced by the others. Even as one optimisation process hits its limit, others take over to keep driving up value and driving down cost.
The chain of thought is something like:
As chips get denser (Moore’s law), they enable ever more use cases.
As chips and chip connectivity (think CUDA) get better, we can create ever larger supercomputers, and ever smaller, lighter and more efficient handheld computers (think Apple Silicon).
As our computers improve, all the sectors that enable chips to get made in the first place become more efficient, from resource extraction to production, logistics, connectivity, administration and even recycling.
As the devices that capture information about our world do so at higher resolutions, more cheaply, for longer and in greater numbers, ever more data is created. There are even seismic moments of industrial knowledge transfer, such as a startup now making precision optical lenses with processes similar to those used to create computer chips.
As information becomes more accessible, the smartest minds in the world are no longer bottlenecked by where they grow up: they have access to the best educational materials, and might grow up to become important players who innovate in yet more aspects of the value chain.
All these feedback loops and network effects increase the capability of AI over time.
Even if Moore’s law slows down, or even stops, the other scaling laws might not.
Therefore: AI becoming orders of magnitude more useful over the course of the next 30 years is not only feasible, it’s highly likely.
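To make the compounding concrete, here is a toy calculation in Python. The growth rates and user counts are made-up assumptions for illustration, not forecasts; the point is only that independent curves multiply.

```python
# Toy compounding of independent improvement curves over 30 years.
# All rates and counts below are illustrative assumptions, not measurements.

def network_value(n: float) -> float:
    """Metcalfe-style value: proportional to the number of possible links."""
    return n * (n - 1) / 2

years = 30
chip_density = 1.41 ** years        # roughly doubling every two years (a Moore's-law-like rate)
data_volume = 1.25 ** years         # cheaper, more numerous sensors producing more data
users_now, users_later = 1e6, 5e6   # hypothetical user counts for the network effect

combined = (chip_density * data_volume
            * network_value(users_later) / network_value(users_now))
print(f"combined improvement factor: {combined:.1e}")  # several orders of magnitude
```

Even if one of these curves flattens out, the product of the remaining ones can still grow by orders of magnitude, which is the point of the chain above.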
If there is a breakthrough in reasoning that gives AI the capabilities of AI researchers themselves, we might reach an intelligence explosion that makes ChatGPT look like a Tamagotchi.
“Don’t you think progress will plateau?”
Even if transformers hit a limit in reliability, we can build verification systems and knowledge graphs for factual data. If they hit a limit in sample efficiency, we can build systems that learn from first principles, combining high-level abstractions to produce new concepts.
Rather than training a transformer to predict the next token on an unstructured training set, we might build layered models that first learn very basic concepts, like presence vs absence or basic logic, before adding complexity and training them on language, code, law and art. This is already happening to a degree: a recent paper showed that using a variety of training strategies, as well as a variety of training data, improved reasoning capabilities. As someone on Twitter (retweeted by the paper’s author) put it:
Send the robot to algebra class every so often.
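As a rough sketch of what “every so often” could look like in practice, here is a hypothetical data pipeline that interleaves a small share of structured algebra exercises into an otherwise unstructured text stream. The sampler and mixing ratio are illustrative assumptions, not taken from the paper.

```python
import random

def text_stream():
    """Stand-in for an unstructured web-text corpus."""
    while True:
        yield {"kind": "text", "sample": "next chunk of ordinary text"}

def algebra_stream():
    """Stand-in for structured exercises that drill basic logic and arithmetic."""
    while True:
        a, b = random.randint(1, 9), random.randint(1, 9)
        yield {"kind": "algebra", "sample": f"solve x + {a} = {a + b}", "answer": b}

def curriculum(mix_ratio: float = 0.1):
    """Mostly ordinary text, but send the model 'to algebra class' every so often."""
    text, algebra = text_stream(), algebra_stream()
    while True:
        yield next(algebra) if random.random() < mix_ratio else next(text)

stream = curriculum(mix_ratio=0.2)
batch = [next(stream) for _ in range(8)]  # a small mixed training batch
```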
New architectures that combine the error-prone creativity of transformers with symbolic solvers (mathematically checking the answer, for example) lead to intelligences that are more effective than the sum of their parts.
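A minimal sketch of that generate-then-verify pattern, using sympy as the symbolic checker and a stubbed-out proposer in place of a real language model (the candidate answers are hard-coded here for illustration):

```python
# Generate-then-verify: a "creative" proposer suggests candidate solutions,
# and a symbolic solver accepts only the ones it can actually confirm.
import sympy as sp

x = sp.Symbol("x")
equation = sp.Eq(x**2 - 5*x + 6, 0)  # the task: solve x^2 - 5x + 6 = 0

def propose_answers():
    """Stand-in for a language model: creative but unreliable candidates."""
    return [sp.Integer(4), sp.Integer(2), sp.Integer(7), sp.Integer(3)]

def verified_answers(eq, candidates):
    """Keep only candidates the symbolic checker can confirm."""
    return [c for c in candidates
            if sp.simplify(eq.lhs.subs(x, c) - eq.rhs.subs(x, c)) == 0]

print(verified_answers(equation, propose_answers()))  # -> [2, 3]
```

The proposer is free to be wrong as often as it likes; only answers the checker can confirm get through, which is why the combination can be stronger than either part alone.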
All in all: Thinking that continued exploration will not be sufficient to bring us into a future heavily disrupted by AI seems like a bad bet.
Of course a plateau is possible. I think a Dune-style Butlerian Jihad is also possible. Thinking in possibilities doesn’t have to mean subscribing to an absolute position. It is fruitful to consider many possible futures and plan accordingly. In the current climate, planning for superintelligence seems prudent.
“So where does that leave us?”
In all futures, it is important that we come up with ways to harness the upside of machine intelligence without letting the negatives prevail. How can we do that? What does a future with Artificial Super Intelligence (ASI) look like? I believe that as AI takes over more and more expert and individual contributor roles, the primary job left for humans will be taking responsibility.
Imagine: Someone deployed an autonomous AI robot army that turned a rainforest into a lumber yard. The robots could just as well have planted trees or defended endangered species. It was someone, or a group of people, with moral agency who made that decision. If society values ecological heritage and the right to life of other-than-humans more than some hardwood furniture, the people responsible should be held accountable.
How should they be held accountable? For what? That’s up to us to decide.
We already see this dynamic in larger organisations, where some people focus solely on making decisions and (ideally) taking responsibility, while others fill specialised roles. As AI advances, it's likely to take over many of these specialised functions, leaving humans to focus on the high-level decisions that machines can't or shouldn't make.
But what does it mean for humans to take responsibility in an AI-driven world? I believe it comes down to a few key aspects:
Defining what we want to take responsibility for - what projects, goals, and outcomes do we want to steer?
Establishing and maintaining value systems - what ethical principles and priorities should guide our decisions and the AI systems we create?
Deciding who (or what) to cooperate with - which AI systems, human organisations, and other entities should we partner with to achieve our goals?
Setting clear objectives - what specific aims should we pursue, and how will we measure progress and success?
In a world where multiple groups are taking responsibility for different things, with different value systems and priorities, coordination and conflict resolution become crucial. We can't resort to violence or manipulation to get our way. Instead, we need robust democratic processes to navigate disagreements and find common ground. Fractal processes, where top-down and bottom-up pressures create landscapes of incentives for collaboration, might be key: perhaps a state-run intelligence enforces limits on smaller models, while smaller networks are what give the larger cooperation network its legitimacy and power.
This won't be easy. There are risks of power imbalances, corruption, and gridlock. We'll need to design systems that allow for diverse viewpoints while still enabling effective decision-making. AI itself could play a role here, enhancing our ability to make informed choices, model potential outcomes, and implement democratic processes at scale. The core of the matter remains human judgment and responsibility. No matter how advanced our AI becomes, we can't outsource our moral agency. The buck stops with us.
Dembrane’s mission
We have always said that we are building the democratic membrane for the smart organisation. ASI that doesn’t share or even understand our values will be bad. That’s a given. In any future scenario, we have a moral obligation to create a platform that enables people to come together and share their moral agency, resolve concerns and identify shared values non-violently. We truly believe many diverse AI techniques can help us get there. Where we are right now is step 0.1, and there are many more steps to take.
So as we build towards a future of artificial general intelligence and beyond, we need to be preparing ourselves and our institutions for this new reality. We need to be asking ourselves: what do we want to take responsibility for? What values should guide us? And how can we work together, across diverse groups and perspectives, to steer our technological progress in a positive direction?
These are weighty questions, but they're ones we can't afford to ignore. The future of human work and decision-making hangs in the balance. It's up to us to rise to the challenge and define a new role for ourselves in an AI-driven world - as the ultimate responsibility-takers, guiding our technological creations towards the outcomes we collectively choose.
It won't be the last human job. But it may well be the most important one. Let's start the conversation now about how we can prepare for it.