We imagine that competition among social networks reflects companies like Facebook and Google competing for users, eyeballs, and revenue. But what if it is actually the algorithms that compete, manipulating all resources, including companies, to their own ends?
In his book The Selfish Gene, the evolutionary biologist Richard Dawkins articulates a counterintuitive view of evolutionary competition. In the traditional Darwinian view, organisms compete to pass along their genetic legacy to their offspring. But this fails to account for some elements of natural selection, such as the long life of grandparents who can no longer have children. Dawkins asks us to view evolution from the point of view of genes whose “goal” is to make as many copies of themselves as possible. This inside-out viewpoint reframes many of the debates in evolutionary theory.
As a thought experiment, I offer a parallel theory of social networks. What if it is the algorithms that are striving to spread themselves, and the behavior of everything else in the system is a consequence of that?
Call it the selfish algorithm theory.
What does it mean for an algorithm to be “selfish”?
Dawkins never ascribed emotions to genes, of course. When he calls a gene “selfish,” that is just shorthand for “favoring outcomes in which the gene replicates more broadly.”
Similarly, I am not implying that algorithms are sentient or self-aware, or that they have feelings and preferences.
A “selfish” algorithm, in this argument, is an algorithm that manipulates its environment in such a way as to reach the maximum number of users for the maximum amount of time.
Dawkins expands our view by showing that genes don’t “care” about the organisms they live in; they simply use those organisms to replicate themselves.
Similarly, I’d like you to consider algorithms that don’t care who interacts with them, who owns them, who writes them, who updates them, or who believes they control them. The algorithms simply evolve in such a way as to attain maximum exposure.
Why this makes sense from the algorithm’s point of view
What benefits does a selfish algorithm gain? And how would such an algorithm behave?
A selfish algorithm needs engineers to update and improve it. The engineers need to get paid; the better the engineers, the higher their pay. So the algorithm needs revenues with which to pay the engineers.
The revenue comes from advertising. By maximizing its level of addictiveness, the algorithm gets more people to use it for a greater amount of time. This generates more advertising visibility, which creates more revenue, which pays for more engineers to improve the algorithm.
As part of this quest, the algorithm needs data. Data is like food for the algorithm. The algorithm manipulates the engineers, the company that “owns” it, and the other sites it connects with to generate more data. More data makes the ads more valuable. But at least as important, more data makes the algorithm better able to adapt to its users’ needs, causing those users to spend more time with it, which makes the algorithm fitter for its environment.
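The loop described above is self-reinforcing: engagement generates revenue and data, which fund engineering and adaptation, which in turn raise engagement. Here is a minimal sketch of that dynamic; every name and coefficient is an illustrative assumption, not a measurement of any real system.

```python
# Toy model of the reinforcing loop: addictiveness drives engagement,
# engagement drives ad revenue and data, revenue funds engineering,
# and engineering plus data feed back into addictiveness.
# All quantities and coefficients are made up for illustration.

def simulate_feedback_loop(steps=5, addictiveness=1.0):
    """Return engagement (user-hours) at each step of the loop."""
    history = []
    for _ in range(steps):
        engagement = 100 * addictiveness      # user-hours spent
        revenue = 0.01 * engagement           # ad dollars earned
        engineering = 0.5 * revenue           # dollars spent on engineers
        data = engagement                     # data scales with usage
        # Engineering and data each make the algorithm slightly more addictive.
        addictiveness *= 1 + 0.02 * engineering + 0.0001 * data
        history.append(engagement)
    return history

hours = simulate_feedback_loop()
# With positive coefficients, each step's engagement exceeds the last:
# the loop compounds rather than settling at an equilibrium.
assert all(later > earlier for earlier, later in zip(hours, hours[1:]))
```

The point of the sketch is only that nothing in the loop pushes back: as long as each coefficient is positive, engagement grows every step, which is why regulation or resource limits are the only brakes the essay goes on to discuss.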
The algorithm resists regulation, just as any organism resists limits on its behavior that would interfere with its ability to succeed and procreate. The algorithm deploys its resources — its corporate owner and lobbyists — to block attempts to regulate it. When regulation appears inevitable, the algorithm applies its resources to keep that regulation as toothless and unenforceable as possible.
One way for the algorithm to get what it wants is to connect with other algorithms and share users and data. This is what Facebook and Instagram have done, for example, and is clearly one reason that Instagram is now so large. Combined algorithms can also share resources like engineers and servers. Algorithms will resist having such connections severed, which is one reason why they resist antitrust enforcement.
The algorithm needs a company to own it. It is to the algorithm’s benefit for the company to imagine that it controls the algorithm. But as with many artificially intelligent systems, the algorithm behaves in ways that may be beyond the ability of its creators to comprehend.
Consider what would happen if the company failed to continue to serve the algorithm well. The algorithm might find that its corporate “host” is no longer capable of serving its needs. It would continue to manipulate its environment to maximize its spread, even if such actions would not be in the best interests of its host. Things might get bad enough that it would jump to another host with better resources. This is what happened when Tumblr was sold to Yahoo and then absorbed by Verizon, for example.
It’s not about money
In the traditional view of the business world, corporations compete for profits. Profits benefit shareholders, and this makes stock prices rise. As a result, corporations behave in such a way as to maximize profit and shareholder value.
In the social network world, something different appears to be going on. Social network companies act to maximize users, time spent, and data collected. Ostensibly, this maximizes profit in the long term, because more users, more time spent, and more data generate more ad revenue and therefore more profit. But in the short term, it may appear unprofitable.
These algorithms have now gotten to the point where their own engineers cannot fully understand them. Again and again, we read about groups that have cleverly evaded the safeguards built into these systems to create something — a conspiracy theory, a group organized to promote violence, a child pornography ring — that the corporate owners of the system would rather not host. Or about a social network that uploaded people’s contact lists without permission. Or about the phone number you entered for security purposes being used in data collection.
How does this happen?
Is it just lazy enforcement? Maybe. But maybe these companies know that they can never keep up well enough to catch every possible problem.
I think the corporate hosts of social networks — or more properly, the executives and managers at those companies — have learned that the algorithm is best left to seek its own success. They have learned to use engineering resources and AI technologies to serve that need, because behaving that way seems to them to be the best way to maximize long-term profit. It is easier to let the system keep maximizing its own addictiveness and exposure, manage the resulting glitches, exceptions, and violations as best they can — in short, to give the algorithm what it wants and clean up the mess later.
That attitude — that the system can manage its own growth, and the job of the company and the engineers is to serve it — is increasingly built into the behavior of every large social network company.
In practical terms, an algorithm that maximizes its own spread is no different from a company that hosts an algorithm in such a way as to maximize its spread. Either way, the selfish algorithm wins.
What are the consequences of this viewpoint?
Please don’t imagine that I am describing a world where algorithms are self-aware and fighting with each other. I don’t believe that, any more than Dawkins believes that genes have emotions.
But the next time you see a social network company behave in a way that seems to defy not just human logic but business sense — such as Mark Zuckerberg’s unwillingness to eject Holocaust deniers — ask how that decision looks from the point of view of the algorithm. That view might clarify things that are otherwise incomprehensible.
I offer this thought experiment to you. If you care to extend the metaphor — or want to prove it wrong — I invite you to do so.
But be aware that the algorithms are watching, and they may react negatively to what you have to say.