
Use Cases for a Low Latency Distributed Database

May 12, 2021
Margo McCabe
Head of DevRel & Partnerships at HarperDB

There are certain industries that greatly benefit from high-performing, low-latency, geo-distributed technologies, while other organizations might be more focused on vertically scaling their architectures. This depends on numerous factors, including the data pipeline, network, data structure, type of product or solution, and short- and long-term goals. While there are currently many databases and tools that provide vertical scaling capabilities, there are not many that focus on horizontal scaling, yet there is still a need for both.

Latency

Before jumping into specific industries that benefit from high-performing, low-latency, geo-distributed databases (it’s a mouthful, I know), let’s define a few terms. High-performing is pretty self-explanatory, so I’ll skip over that one. For the next term I’ll refer to my colleague Jacob Cohen’s blog on Geo-Distributed Databases. Latency generally measures the duration between an action and a response. In user-facing applications, that can be narrowed down to the delay between when a user makes a request and when the application responds to that request. So, technologies that enable low latency usually improve performance and response times, leading to better user experience and cost savings.
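To make that concrete, here is a minimal sketch of measuring request latency from the client side. The endpoint URL is a placeholder, and it assumes a runtime with `fetch` and `performance.now()` (Node 18+ or a browser); it only captures round-trip time for a single request, but it illustrates exactly the action-to-response window we care about:

```typescript
// Minimal latency probe: time a single round trip.
// The URL below is a placeholder; point it at any API endpoint you want to profile.
async function measureLatencyMs(url: string): Promise<number> {
  const start = performance.now();
  await fetch(url);                    // the "action": issue the request
  return performance.now() - start;    // the "response": elapsed time is the latency
}

measureLatencyMs("https://example.com/api/health")
  .then((ms) => console.log(`round-trip latency: ${ms.toFixed(1)} ms`));
```

Run the same probe from different regions and the effect of geo-distribution shows up immediately in the numbers.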

Geo Distributed

What about geo-distribution? I’ll reference my other colleague Kaylan Stock’s blog on Geo Distributed Data Lakes. Geo-distributed is often used in reference to data storage, websites, applications, containers, etc. In this case, it means a database technology deployed across more than one geographic location without performance delays. Geo-distributed functionality has several benefits. With increased redundancy, you don’t need to worry about one data center, cloud instance, or on-premise site going down. With a backup in place, a failure in one location is no longer a disaster for your team (this is often considered part of a high availability architecture). Global performance improves because queries are distributed across many different servers in parallel, and users hit a database that is physically closer to them, ultimately reducing latency. User experience also improves when data storage is distributed because of the rapid query response times.

Scaling

When scaling, there are two ways to add computing resources to your infrastructure, and most large organizations use a combination of the two approaches to meet their needs. The main difference is that “horizontal scaling means scaling by adding more machines to your pool of resources (also described as ‘scaling out’), whereas vertical scaling refers to scaling by adding more power (e.g. CPU, RAM) to an existing machine.” As mentioned above, horizontal scaling provides redundancy, instead of having only one system, as in vertical scaling, where a single point of failure can cause massive disruption. If your organization wants the flexibility to choose, at any time, the configuration that yields the best cost and performance, scaling out might be a better option than scaling up. While the paradigm is shifting toward horizontal scaling, there can still be benefits to vertical scaling as well. Perhaps it’s best to find a technology solution that can enable both vertical and horizontal scaling when needed, in order to minimize the number of systems in the tech stack.

Let’s look at a few industry-based examples below.

Retail & Ticketing

Industries such as retail and ticketing constantly battle bots that buy up their product as soon as it’s released, only to resell it at marked-up rates. We all know that feeling of waiting in a virtual line for concert tickets, only to have them sell out within minutes (if not seconds)! Because of these bots and bad actors, products and events become less accessible to the general consumer, and revenue flows to previously uninvolved companies and sources. Databases and data management solutions are often centralized in a single cloud in a single region, which drives high latency and increases the compute needed to power APIs. These technologies cannot process data fast enough to catch or block the bad actors, and the data needs to be globally replicated. By shifting APIs and data storage to the edge, latency can be greatly reduced. If we bring data persistence and functionality closer to the source with a super fast distributed database system, we can recognize and block those bots and bad actors in real time, with global replication.

Industry & Military

In military and other industrial or machine-heavy organizations, massive amounts of data are generated by sensors out in the field and on the edge. These sensors could be capturing data on anything: machinery performance, rotation, vibration, temperature, output, weather, etc. These industries benefit from a distributed low-latency database that can sync data from the edge to data center servers, as well as back to the edge for analytics and alerting, in real time. If we bring data persistence and functionality closer to those edge nodes, we can eliminate gaps and bottlenecks between IoT data collection and the cloud. A peer-to-peer distributed architecture enables data to be captured and to flow across the data pipeline, supporting rapid decisioning and downtime prevention. In these scenarios, it’s important for both machine operators in the field and controls engineers back at headquarters to know what’s happening at all times, and a decision that arrives even a second too late can be detrimental.

Gaming & Media

Gaming and media industries greatly benefit from high performance and low latency, with clear implications for both the organization and the end user. Referring to my Hybrid Cloud blog, latency challenges occur because the large cloud providers are not highly distributed. Additionally, it is challenging to actually deliver data at the edge and allow users to interact with it. There are currently caching solutions that bring data reads to the edge, but they are not write optimized, and their global replication is slow. By using a database that distributes to the edge and can read and write efficiently, you can improve response times and performance for the end user, because edge data centers sit much closer to the end consumer than the limited set of regions offered by the giant cloud providers. (We all know how frustrating it can be when your game freezes with 30 seconds left on the clock while you’re playing multiplayer games with your pals across the country!) This approach also enables cost savings, because organizations can avoid cloud lock-in and the extra costs of data ingress and egress, and far fewer API servers are required to handle the workload.

Transportation

Planes, trains, and automobiles! You can imagine that vehicles of any type require extremely low latency so that they can predict and avoid collisions or misdirection. Peer-to-peer technologies and 5G will enable innovations like vehicle-to-vehicle architectures. By distributing APIs and data storage to the edge, and shifting application logic to the edge, you can remove bottlenecks and reduce infrastructure and cost. Bringing functionality onto or near the vehicles reduces latency, reduces the number of servers needed to handle the necessary workload, and improves performance.

You can find more real-world examples in Jake’s blog on Geo-Distributed Databases, where he talks about home Internet of Things (IoT), gaming, and even warehouse robotics! Many applications rely on low latency, sometimes so heavily that high latency can cause customer loss, massive expense, or outright failure.

What’s the solution?

While there are technologies out there tackling these challenges in different ways, there are not many high-performing, low-latency, geo-distributed databases. Many edge data solutions are not write optimized, and their global replication is slow. HarperDB, by contrast, is read and write optimized, handling upwards of 20K writes per second per node with 110ms global replication. HarperDB’s clustering methodology relies on eventual consistency, making it much more efficient than more traditional options, and the database never has to lock globally. Here are a few additional benefits of addressing latency challenges with HarperDB:

  • A single node of HarperDB can handle over 100K requests per second
  • HarperDB can globally replicate data at the speed of the Internet
  • Runs anywhere, presenting a single interface across multi-cloud deployments
  • Enables horizontal scalability with peer-to-peer architecture and leverages parallel processing for vertical scalability
  • Easily distribute APIs to the edge to reduce latency and cost
  • Hybrid cloud capability; run on public cloud, edge data centers, on-premise, or in the field
  • Provides low-latency edge data replication, like a “CDN of databases”
  • Real time data sync between nodes
  • Flexible and configurable data sync
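
To ground a couple of those bullets, here is a minimal sketch of writing to the nearest HarperDB node over its HTTP operations API. The instance URL, credentials, schema, and table are placeholders, and the exact payload can differ by HarperDB version, so treat this as illustrative rather than copy-paste ready:

```typescript
// Illustrative only: insert a record on the closest HarperDB node.
// URL, credentials, schema, and table below are placeholders, not real endpoints.
const HARPERDB_URL = "https://edge-node-1.example.com:9925"; // hypothetical edge node
const AUTH = "Basic " + Buffer.from("HDB_ADMIN:password").toString("base64");

async function insertReading(record: Record<string, unknown>) {
  const res = await fetch(HARPERDB_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: AUTH },
    body: JSON.stringify({
      operation: "insert",        // HarperDB operations API verb
      schema: "dev",              // placeholder schema
      table: "sensor_readings",   // placeholder table
      records: [record],
    }),
  });
  return res.json();
}

// The write is acknowledged by the local node right away; clustering then
// replicates it to the other nodes asynchronously (eventual consistency).
insertReading({ id: "sensor-42", temperature: 21.3, recorded_at: Date.now() });
```

Because every node accepts both reads and writes, the application simply talks to whichever node is closest, and replication carries the change to the rest of the cluster.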

With centralized databases, organizations have to buy more and more API servers to reduce latency and handle the growing workload, creating massive bottlenecks. By distributing data outward, moving it closer to the edge, and using a hybrid edge/cloud strategy, you can greatly decrease the number of servers needed and reduce latency, while benefiting from cost savings and improved customer experience. With HarperDB, you can simply spin up more nodes to scale horizontally, placing HarperDB in various regions closer to your end users, all while accessing data in real time.
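
As a sketch of what “spinning up more nodes” can look like in practice, the snippet below tells an existing node about a new peer and subscribes a table for bidirectional replication. The operation name, hosts, ports, and subscription fields are assumptions based on HarperDB’s clustering API at the time of writing; the exact shape varies by version, so verify against the current HarperDB docs:

```typescript
// Illustrative clustering call: register a new peer with an existing node and
// replicate the placeholder dev.sensor_readings table in both directions.
// All hosts, ports, credentials, and field names are assumptions; check your
// HarperDB version's documentation before relying on them.
const NODE_1_URL = "https://edge-node-1.example.com:9925";       // hypothetical existing node
const AUTH = "Basic " + Buffer.from("HDB_ADMIN:password").toString("base64");

async function addPeerNode() {
  const res = await fetch(NODE_1_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: AUTH },
    body: JSON.stringify({
      operation: "add_node",
      name: "edge_node_2",                  // hypothetical new node
      host: "edge-node-2.example.com",      // placeholder host
      port: 12345,                          // placeholder clustering port
      subscriptions: [
        { schema: "dev", table: "sensor_readings", publish: true, subscribe: true },
      ],
    }),
  });
  return res.json();
}
```

Each additional region is just another node plus a set of subscriptions, which is what makes the horizontal-scaling story incremental rather than a re-architecture.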

Whichever route your organization takes, it’s always better to proactively implement solutions like this upfront instead of having to react later on and deal with disaster recovery. Now that we’ve provided a brief overview, would your industry or organization benefit from high-performing, low-latency, geo-distributed technologies?
