by Pandu Sastrowardoyo

I’m going to start this article off with a hackneyed, clichéd, perhaps trite saying: “Absolute power corrupts absolutely.” And then I’m going to argue that we need to update this phrase to the following:

“Centralized power absolutely corrupts the center.”

When you study the complete computing technology oeuvre of Mankind, you see a strange trend appear. For at least the last 100 years, the tech has swung in one of two directions: toward centralization, or toward decentralization. I call this “The Recentralization Cycle.”

Let me go through the periods one by one: The 1920s saw the rise of accounting machines in literal ivory boxes, kept within businesses and run by each “processing node” independently, connecting to each other only indirectly via business transactions. Slow, but your data was independent and sovereign. The late 1950s gave us the Mainframe and “Mini”computers, half the size of a university classroom. These were the definition of monolithic, centralized systems, accessible only via dumb terminals that had to buy computing time from each giant central node of computing.

Personal Computers came in the 1980s, and then they networked with each other through telnet, IRC, Archie, and BBSes. This formed a very libertarian, decentralized internet: although major nodes of data already existed (in the servers of universities, early internet companies, and so on), there were always alternative nodes where you could socialize, share, and build influence.

The 1990s brought with them the rise of the Enterprise internet. Major companies started investing in this newfangled “internetworking” technology, the “dot com” bubble came and went, and after the dust settled in the early 2000s, the backbone of the internet was no longer decentralized. Yes, all of the protocols, all of the tech were still decentralized, but massive content delivery and mediation nodes — AOL, Geocities, Yahoo, Twitter, Friendster, Myspace, Google — were starting to take control.

In the 2010s, this trend simply continued, and although the field of players had narrowed remarkably — Google, Facebook, and Twitter were now the dominant companies — the battleground had expanded as these social media companies grew in power. Almost magically, small supercomputers appeared in everyone’s hands in the form of smartphones. Mobile notifications replaced phone calls as our primary method of getting someone’s attention. Social media became our dominant mode of interaction in terms of information bandwidth. “Google” became a verb, and “Googling something” became an automatic reflex whenever we needed to understand reality better.

We have reached the point where both our interactions and our reality have become mediated by technology. And, unfortunately, this point was reached while The Recentralization Cycle was at peak centricity.

You are reading this in a world dominated by central powers. You might be reading this on LinkedIn, Facebook, or Twitter, or you might have found this article on Google Sites. Or you might not have found this article at all, if one of those major companies has decided that my ideas are wrongthink. The Hunter Biden laptop and Trump’s tax returns, to choose two recent examples, were leaked data of similar levels of substantiation. Yet one was immediately censored by Facebook, Twitter, and (arguably) Google, while the other was allowed to spread far and wide on their trending topics.

I’m not going to mention which one got censored, since the aim of this essay is not to be political. But why are we allowing unelected tech company executives to decide what you may and may not see?

If you’re a pure libertarian, you might be saying here: “They’re private companies, private enterprises. Of course they can choose what can and cannot stay on their platform. They need to be able to, since it’s their liberty to do so.”

I have two arguments against you:

  1. Social media companies and Google’s search arm enjoy the very liberty you’re talking about, the legal shield of being a “platform for user content,” precisely because they do not editorialize. Once they editorialize, they have become media companies, which are subject to litigation for the content they print or censor.
  2. Doesn’t libertarianism boil down to a rejection of authoritarian power? This is usually framed as government power, but what about the quasi-government power of massive multinational entities with revenues exceeding national GDPs? Shouldn’t we reject them too?

I understand the reluctance of libertarians to have the government break up large companies like Google, Facebook, and Twitter. But I hope this explains my position.

What I am proposing in this essay, however, is not just to #BreakUpBigTech. I’m here to help define a roadmap for how future #BigTech should be founded and run. Because, like it or not, #BigTech will also define our future.

Enter, stage left: Transhumanism.

At its very core, transhumanism is just the realization that, if computing technology continues to develop at this rate, the organic computing substrate of our brains can be augmented, upgraded, and/or migrated out of relatively soon.

Elon Musk’s Neuralink is merely the fledgling start of this. Given the speed at which our computational understanding develops and expands, we may reach the level of bionic coprocessors and whole-brain uploads within my lifetime.

The promise sounds amazing: In 5 years, the ability to think faster and better, with information and content at your mental beck and call. In 10 years, a silicon (or qubit) coprocessor able to replicate your thoughts and represent you when shopping, working, even interacting with others. In 20 years, nothing less than conscious immortality: the end of the fear of death.

As a futurist, I am excited about this premise. As a practicing technologist, I have a few nagging thoughts that tug at me from the back of my very organic mind:

Who will own the systems that mediate your reality? Who will own the patterns of your mind? Who will own the you?

In this posthuman future, who will be #BigTech?

Flashing back to the near past: Another thing that happened in the 2010s was the recentralization of Enterprise computing. Computing infrastructures started the decade locked within bastions of computing owned by separate companies. Then came the cloud revolution, and these companies started offloading their core systems onto cloud infra.

The interesting thing about cloud infra is that it is still infra. When you are using a cloud computing platform, you are simply using Someone Else’s Computer. Which was fine to begin with, because this Someone Else would be Amazon, IBM, or Google, and they have the economies of scale to bulk-buy hardware and rent it out for a fraction of the investment cost you would otherwise have to budget.

But would you put your digital minds on Someone Else’s Computer? Worse yet, would you let your digital minds be run by Someone Else’s Platform?

Brief warning, you are entering opinion territory:

I think there needs to be widespread implementation of decentralized technologies throughout the computing world, in order for humanity to maintain its liberty.

If the transhumanist future is inevitable, we need to make sure that the underlying infrastructure beneath our neocortices, as well as the data that defines our thought patterns, are controlled by our own individual private keys. Not through a username-password pair embedded in a server owned by a future Google.
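To make “controlled by our own individual private keys” concrete, here is a minimal sketch of a Lamport one-time signature scheme, built from nothing but a hash function: only the holder of the private key can sign their own data, and anyone can verify the signature against the public key, with no server or password database in the loop. This is a toy for illustration (each key pair must be used only once, and real systems use schemes like Ed25519), not production cryptography.

```python
import hashlib
import secrets

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def keygen(bits: int = 256):
    """Private key: 256 pairs of random secrets. Public key: their hashes."""
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(bits)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def sign(sk, msg: bytes):
    """For each bit of the message digest, reveal one of the two secrets."""
    digest = H(msg)
    bits = [(digest[i // 8] >> (i % 8)) & 1 for i in range(len(sk))]
    return [sk[i][bit] for i, bit in enumerate(bits)]

def verify(pk, msg: bytes, sig) -> bool:
    """Check every revealed secret against the published hash for that bit."""
    digest = H(msg)
    for i, secret in enumerate(sig):
        bit = (digest[i // 8] >> (i % 8)) & 1
        if H(secret) != pk[i][bit]:
            return False
    return True
```

The point of the sketch: possession of the key, not an account on Someone Else’s server, is what authorizes changes to your data.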

And I think the only way to host this underlying infrastructure is through a Blockchain-based or Blockchain-derived incentives system.

Unlike centralized systems, decentralized systems used to fail for lack of incentives to run a node. Sure, running Folding@home gave you the warm and fuzzies, and might even get you recognition if your machine helped fold a protein. Yes, hosting a Tor node helps protect your privacy and helps others protect theirs. But that’s nothing compared to the actual liquid revenue you got out of owning, hosting, and maintaining a server for a company. Decentralized nodes didn’t earn money; centralized nodes did.

A decade ago, Blockchain changed the game. With the incentive models that Blockchain provides, a node owner in a decentralized network can receive real rewards from the network itself, without any centralized distribution control. This means that a large enough network of Blockchain nodes, with correctly designed consensus rules, forms a self-stabilizing platform without the risks of central control.
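The incentive mechanism is easiest to see in Bitcoin-style proof-of-work: a node burns real compute to find a nonce that satisfies the network’s consensus rule, and the protocol itself pays the finder a block reward, with no central party distributing anything. A toy sketch (the real difficulty target, block format, and reward schedule are far more elaborate):

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    """Brute-force a nonce so the block hash starts with `difficulty` hex zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}|{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce  # the protocol rewards whoever found this first
        nonce += 1

def is_valid(block_data: str, nonce: int, difficulty: int = 4) -> bool:
    """The consensus rule: a single hash check, verifiable by every node."""
    digest = hashlib.sha256(f"{block_data}|{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

# Mining is expensive; verifying the proof costs one hash.
nonce = mine("block 1: reward -> node_42")
```

The asymmetry is the point: finding the nonce takes work, checking it takes one hash, so the network can pay for honest effort without trusting anyone in particular.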

All flavors of #FutureBigTech can run on these kinds of #decentralizedplatforms. The likes of Filecoin/IPFS, GUN databases, Kadnode, Ethereum, and Bitcoin itself show that it’s possible.
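One trick those storage platforms share is content addressing: data is stored and fetched by the hash of its own contents, so it doesn’t matter which node serves it, and tampering is self-evident. Here is a hedged sketch of the idea (the class and method names are my own illustration, not the IPFS or Filecoin API):

```python
import hashlib

class ContentStore:
    """Toy content-addressed store: each blob is keyed by its own SHA-256 hash,
    so any untrusted node can serve it and the reader can verify integrity."""

    def __init__(self):
        self._blobs: dict[str, bytes] = {}

    def put(self, data: bytes) -> str:
        """Store a blob and return its content identifier (its hash)."""
        cid = hashlib.sha256(data).hexdigest()
        self._blobs[cid] = data
        return cid

    def get(self, cid: str) -> bytes:
        """Fetch a blob and re-check it against its own address."""
        data = self._blobs[cid]
        if hashlib.sha256(data).hexdigest() != cid:
            raise ValueError("blob does not match its content address")
        return data
```

Because the address *is* the hash, there is no authoritative server to ask: any node holding the bytes is as good as any other.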

My argument is not just that future transhumanist tech should run on these platforms. I’m saying they must.

To not even exaggerate: Our very grasp on reality is at stake.

In closing, I would like to share my belief that the Recentralization Cycle is not only valid for computing. Human society, itself, was built from the same first principles. We were decentralized hunter-gatherers first, then became centralized farmers, then decentralized tribes, centralized nations, and then globalism came in the form of multinational governments such as the EU.

Looking at this trend, you might think that the only reason there is a constant shift between the two extremes is that systems grow and expand, and that it’s natural to end up with total centralization under a global hegemon.

But I would direct you towards the following fact: Even the most centralized governments have at least some individual liberty. The bastion of liberty in these systems is, in fact, within the individual.

And the bastion of liberty in computing systems lies within decentricity.