The atmosphere at the Beurs van Berlage in Amsterdam last December was energized. Developers and engineers had gathered for the local stop of the TigerBeetle 1000x World Tour, organized by Ximedes.
They were there to walk through the architecture of TigerBeetle, an open-source transactions database built from scratch to solve one of the oldest problems in payment processing, with a radical new design based on first principles and an unwavering commitment to correctness.
Most payment companies accept database limitations as an inevitable fact of life. They shard their architectures and build complex workarounds to manage volume, often at the cost of reliability. TigerBeetle rejects these compromises. Designed specifically for mission-critical financial transactions, it enforces double-entry accounting principles in the database layer to deliver extreme speed without sacrificing safety.
We sat down with Joran Dirk Greef, the creator and CEO of TigerBeetle, to discuss why he believes the industry is approaching a breaking point. We talked about why general-purpose databases fail at simple counting, how TigerBeetle applies NASA safety standards to fintech, and why a startup might need the same specialized infrastructure as a central bank.
Here is our conversation.
When we see numbers like 100,000 transactions per second (TPS), the figure sounds impressive, yet the largest payment companies today handle only a fraction of this volume. Is TigerBeetle built for workloads of today, or is this for people who enjoy quoting big numbers?
Joran Dirk Greef: We designed TigerBeetle for the next thirty years of transaction processing. While the largest processors today hit about 5,000 TPS, they are already aiming for 10,000 TPS and eventually 100,000 TPS.
The problem is that the existing infrastructure cannot power these volumes. We see companies break down when they try to use a general-purpose string database for transaction processing, because payments are primarily about numbers and counting. General-purpose databases were not designed to count integers rapidly: contention on frequently updated rows, the “hot keys” problem, means that, per Amdahl’s Law, they often cap out at 100 transactions a second per account.
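To make the contention concrete: in a ledger built on a general-purpose SQL database, every transfer touching a popular account funnels through the same row lock. Here is a minimal sketch of that pattern, assuming a hypothetical `accounts` table and the node-postgres (`pg`) client:

```typescript
import { Client } from "pg";

// Illustrative schema: accounts(id BIGINT PRIMARY KEY, balance BIGINT).
// Every transfer into the same hot account contends on one row lock,
// so commits serialize and per-account throughput caps out, per
// Amdahl's Law, no matter how many cores or connections you add.
async function transfer(
  db: Client,
  from: number,
  to: number,
  amount: number,
): Promise<void> {
  await db.query("BEGIN");
  try {
    // Both UPDATEs take row-level locks that are held until COMMIT.
    await db.query(
      "UPDATE accounts SET balance = balance - $1 WHERE id = $2",
      [amount, from],
    );
    await db.query(
      "UPDATE accounts SET balance = balance + $1 WHERE id = $2",
      [amount, to], // the hot account: all concurrent transfers queue here
    );
    await db.query("COMMIT");
  } catch (err) {
    await db.query("ROLLBACK");
    throw err;
  }
}
```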
We fix two fundamental problems. First, we redesign the database to help companies scale into a future where transaction volumes are exploding. Yet even without the coming surge of autonomous payments, existing systems already hit limits well below 1,000 TPS due to contention and hot accounts, resulting in complex architectures, the loss of strict consistency, and reconciliation problems. So, second, we eliminate this complexity entirely, preserving the safety of simplicity.
The Achilles’ Heel of Payments
Speed is great, but in a payments pipeline, you have fraud checks, authorization, and authentication. If the rest of the pipeline is slow, does a faster database matter, or is it a rounding error?
Joran: It is definitely noticeable. Consider the Bill and Melinda Gates Foundation, which develops a complex central bank switch called Mojaloop involving more than 600 microservices.
Despite so many microservices, they found that the general-purpose database was the performance killer and the Achilles’ heel of the system. When they replaced it with TigerBeetle for the heavy number crunching, the switch improved from its bottleneck of 78 transactions a second to over 2,000 TPS.
But TigerBeetle can also handle these other components, like fraud checks, velocity limits, and usage limits, which accelerates them as well. At the end of the day, much of the payments pipeline still consists of numbers and counting.
Safety at NASA Standards
You speak often about safety. What does “mission-critical safety” mean in a practical sense for a fintech engineer?
Joran: It means that a database survives things like disk corruption, a capability no other database offers. And we use NASA’s safety standards in our coding style to apply defense in depth against programmer error. Most database software consists of code simply to make the program work. But TigerBeetle has a whole second layer of code, over 10,000 assertions or tripwires, that checks the first layer to verify everything is operating correctly.
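To illustrate the style (TigerBeetle itself is written in Zig; this is only a sketch of the idea in TypeScript, with hypothetical account fields):

```typescript
import assert from "node:assert";

interface Account {
  debitsPosted: bigint;
  creditsPosted: bigint;
}

// Apply a transfer, with assertions acting as tripwires around the
// "real" code: preconditions, postconditions, and invariants that
// crash loudly on programmer error instead of corrupting balances.
function applyTransfer(debit: Account, credit: Account, amount: bigint): void {
  // Preconditions.
  assert(amount > 0n, "transfer amount must be positive");
  assert(debit !== credit, "accounts must be distinct");

  const debitsBefore = debit.debitsPosted;
  const creditsBefore = credit.creditsPosted;

  debit.debitsPosted += amount;
  credit.creditsPosted += amount;

  // Postconditions: the update did exactly what it claims, and the
  // double-entry invariant holds (debits and credits moved in lockstep).
  assert(debit.debitsPosted === debitsBefore + amount);
  assert(credit.creditsPosted === creditsBefore + amount);
  assert(
    debit.debitsPosted - debitsBefore === credit.creditsPosted - creditsBefore,
  );
}
```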
We also focus on the user experience, because performance is not only about high throughput but also predictable latency. Where most systems measure the 99th percentile, we target P100 latencies: if you process, say, ten thousand transactions a second, all of them (in the ultimate worst case) complete in at most 100 milliseconds, with flat latency. You get a fast experience every time. It’s almost soft real-time, which is rare.
Distributed systems make operators nervous. How do you ensure the system behaves when things go wrong?
Joran: First, we made the operating experience dead simple by designing TigerBeetle as a single binary. You run this binary on six machines across three data centers where it automates itself and provides its own observability.
Second, where distributed systems of the past would be tested in the wild, leaving users to report bugs, we anticipate hardware failures and faulty disks by running TigerBeetle in a deterministic simulator, which simulates 2,000 years of runtime every day on a fleet of one thousand dedicated CPUs in Finland for efficient cooling. We operate the system in scenarios you likely will never encounter as an operator to ensure it remains safe and correct. TigerBeetle is pretty much “pre-baked” in this respect.
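Deterministic simulation testing is a general technique: drive every source of randomness, including injected faults, from a single seeded generator, so that any failing run can be replayed exactly from its seed. A toy sketch of the idea (not TigerBeetle’s actual simulator):

```typescript
// A tiny deterministic PRNG (mulberry32): the same seed always yields
// the same sequence, so a failing run can be replayed from its seed.
function mulberry32(seed: number): () => number {
  return () => {
    seed = (seed + 0x6d2b79f5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Simulate a write path under injected disk faults. Because every
// "random" event comes from the seeded PRNG, the exact fault schedule
// is reproducible: rerun with the same seed to debug.
function simulate(seed: number, writes: number): void {
  const random = mulberry32(seed);
  for (let i = 0; i < writes; i++) {
    if (random() < 0.01) {
      // Injected fault: pretend the disk corrupted this write, then
      // check that recovery logic would detect it (elided here).
      console.log(`seed=${seed} write=${i}: injected disk corruption`);
    }
  }
}

simulate(42, 10_000); // same seed, same faults, every run
```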
The Knife and Fork Approach
You mention large organizations, but I know startups use TigerBeetle too. If a startup lacks the problem of extreme contention, why would they switch rather than stick with systems they know?
Joran: They switch because we provide correct debit-credit primitives out of the box. It’s easier to get up and running.
If you build double-entry accounting on top of a general-purpose database like Postgres, you spend one or two years of engineering to get it right. We had a startup finish coding and integrating TigerBeetle in two days, allowing them to move on to product building immediately.
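For a sense of what those primitives look like from application code, here is a minimal sketch using the official tigerbeetle-node client. The address, ledger, and code values are illustrative, and field names follow the client at the time of writing; check the current docs:

```typescript
import { createClient } from "tigerbeetle-node";

// Connect to a cluster (address is illustrative).
const client = createClient({
  cluster_id: 0n,
  replica_addresses: ["3000"],
});

async function main(): Promise<void> {
  // Two accounts on the same ledger. All balance fields start at zero;
  // TigerBeetle maintains them from then on.
  const accountErrors = await client.createAccounts(
    [1n, 2n].map((id) => ({
      id,
      debits_pending: 0n,
      debits_posted: 0n,
      credits_pending: 0n,
      credits_posted: 0n,
      user_data_128: 0n,
      user_data_64: 0n,
      user_data_32: 0,
      reserved: 0,
      ledger: 1, // illustrative ledger
      code: 1,   // illustrative account type
      flags: 0,
      timestamp: 0n,
    })),
  );
  console.log(accountErrors); // empty on success

  // A double-entry transfer: debit account 1, credit account 2.
  const transferErrors = await client.createTransfers([
    {
      id: 1n,
      debit_account_id: 1n,
      credit_account_id: 2n,
      amount: 10n,
      pending_id: 0n,
      user_data_128: 0n,
      user_data_64: 0n,
      user_data_32: 0,
      timeout: 0,
      ledger: 1,
      code: 1,
      flags: 0,
      timestamp: 0n,
    },
  ]);
  console.log(transferErrors); // empty on success
}

main().finally(() => client.destroy());
```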
Also, it’s important to understand that TigerBeetle does not replace Postgres. Postgres is excellent as an online general-purpose (OLGP) database for mutable strings: usernames, addresses, and product catalogs, for example. It’s your filing cabinet. TigerBeetle is an online transaction processing (OLTP) system of record strictly for financial transactions and immutable integers. It’s your bank vault. The two go together like a knife and fork.
Disaster Recovery and Backups
Let’s talk about the worst-case scenario. If I need an offsite, independently verifiable backup, how does TigerBeetle handle disaster recovery?
Joran: The engine is open source, and we provide disaster recovery to proprietary object storage services as a paid feature for enterprise customers. The principle is that where we connect TigerBeetle to proprietary services, that proprietary nature is in turn viral: the connectors belong to the paid platform.
So our enterprise platform includes zero-RPO disaster recovery, where we back up the write-ahead log and database snapshots to object storage in real time for defense in depth.
If you choose not to use hosted TigerBeetle, be careful not to use EBS snapshots. Taking ad hoc snapshots underneath a distributed system like TigerBeetle undermines the consensus votes, which risks split-brain issues and data loss. Instead, one safe method is to shut down the cluster periodically, copy the data files, and bring it back up. Since this results in downtime, the paid platform handles this synchronization automatically.
It seems the value is clear regardless of company size. Is that correct?
Joran: We have $100+ billion companies using TigerBeetle at the enterprise level, alongside the Gates Foundation piloting Mojaloop across 20 countries, including Rwanda, at the national level. We also have small startups tracking energy usage, gaming scores, or property rents. Transaction processing is all around us. Having enjoyed 30 to 50 years of some tremendous databases, it’s an exciting time to imagine what the future might look like, and to do our part as the next generation.