
Multi-Region Deployment Strategy

Hornbill’s SaaS service is specifically designed to be deployed in a POD architecture. A POD is a collection of computing resources and infrastructure components designed and configured to an exact specification, optimized to support our software and application stack.
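
As an illustration only, a POD’s exact specification can be thought of as a fixed bill of materials for compute, memory, storage, and network. The fields and values in this sketch are hypothetical and are not Hornbill’s actual specification.

```python
from dataclasses import dataclass

# Hypothetical sketch of what a POD specification might capture.
# Field names and values are illustrative, not Hornbill's actual spec.
@dataclass(frozen=True)
class PodSpec:
    region: str              # geopolitical region the POD serves, e.g. "EU"
    data_center: str         # physical data center hosting the POD
    compute_nodes: int       # number of application servers
    memory_gb_per_node: int  # memory per compute node
    storage_tb: int          # total local storage capacity
    network_gbps: int        # provisioned network bandwidth

eu_pod_example = PodSpec(
    region="EU",
    data_center="eu-dc-1",
    compute_nodes=16,
    memory_gb_per_node=256,
    storage_tb=200,
    network_gbps=40,
)
```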

Many companies providing SaaS software choose to deploy their applications on top of a major cloud platform such as Amazon EC2, Microsoft Azure, or one of the numerous others typically described as Infrastructure as a Service (IaaS). One of the pitfalls of deploying to generic IaaS, apart from a significant loss of economies of scale, is that IaaS is by definition very generic. At Hornbill, we opted to build our own IaaS, highly optimized for and dedicated to our software and application stack. This is much harder to do than outsourcing the IaaS responsibility to a large-scale cloud vendor, but the benefits are many:

  • Cost & Economies of Scale
  • Hardware optimized for our specific needs
  • Network designed for our specific application
  • Full control over data storage, locations and storage types

Since a POD has limited capacity in terms of compute cycles, memory, disk space, and network bandwidth, we achieve horizontal scaling by adding more PODs as needed.
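
A minimal sketch of that scaling rule follows; the 80% threshold and the utilization fields are assumptions for illustration, not Hornbill’s actual capacity model. The idea is simply that an additional POD is provisioned once existing PODs approach their fixed capacity.

```python
# Minimal sketch: horizontal scaling by adding PODs when existing ones near capacity.
# The 80% threshold and the utilisation fields are illustrative assumptions.
CAPACITY_THRESHOLD = 0.80

def needs_new_pod(pods: list[dict]) -> bool:
    """Return True when every existing POD is near its fixed capacity."""
    def utilisation(pod: dict) -> float:
        # A POD's effective capacity is bounded by its most-used resource.
        return max(pod["cpu_used"] / pod["cpu_total"],
                   pod["memory_used"] / pod["memory_total"],
                   pod["disk_used"] / pod["disk_total"])
    return all(utilisation(pod) >= CAPACITY_THRESHOLD for pod in pods)

region_pods = [
    {"cpu_used": 70, "cpu_total": 100,
     "memory_used": 85, "memory_total": 100,
     "disk_used": 60, "disk_total": 100},
]
if needs_new_pod(region_pods):
    print("Provision an additional POD in this region")
```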

Each Hornbill POD is located within a single physical data center, but we can deploy multiple PODs to a single data center to expand capacity. These data centers are situated in specific geographic locations and classified into regions. To comply with local data protection regulations, we align our regions with geopolitical boundaries, ensuring that customers’ data transmission and storage are restricted to their chosen region(s).
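
To illustrate that region alignment (the region and data center names here are assumptions for the sketch, not Hornbill’s actual topology), a customer’s chosen region determines which data centers, and therefore which PODs, may ever store or transmit that customer’s data:

```python
# Hypothetical region-to-data-center mapping; names are illustrative only.
REGION_DATA_CENTERS = {
    "EU": ["eu-dc-1", "eu-dc-2"],
    "UK": ["uk-dc-1"],
    "US": ["us-dc-1", "us-dc-2"],
}

def allowed_data_centers(customer_region: str) -> list[str]:
    """Data may only be stored or transmitted within the customer's chosen region."""
    return REGION_DATA_CENTERS[customer_region]

# An EU customer's data never touches a US data center.
assert "us-dc-1" not in allowed_data_centers("EU")
```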

  • Our multi-region strategy empowers Hornbill customers with full control over their data’s processing and storage locations. This is a growing concern for businesses worldwide, particularly in the EU.
  • Our multi-region strategy enhances performance for all customers by localizing API calls made by our frontend apps within their specific regions.
  • Our multi-region strategy minimizes risks by isolating failures and preventing global outages. For instance, during a network outage, regional networks often continue to operate.
  • Our multi-region strategy allows for seamless scaling of the entire Hornbill platform. As more capacity is needed, we simply add more PODs to accommodate the demand.

Global Configuration

Central to our POD architecture is a well-defined, highly extensible global network definition that is designed to be scalable and is region-aligned with global cloud providers such as Amazon and Microsoft. Backing this up is a highly distributed, globally replicated, single-source-of-truth configuration database that provides all of the configuration information needed for our cloud services to operate.
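
As an illustrative sketch only, assuming a hypothetical schema and keys rather than Hornbill’s actual implementation, a cloud service resolves everything it needs to operate from the replicated configuration store, keyed by region and POD:

```python
# Illustrative sketch of reading service configuration from a globally
# replicated, single-source-of-truth store. The schema is an assumption.
GLOBAL_CONFIG = {
    ("EU", "pod-07"): {
        "data_center": "eu-dc-1",
        "api_endpoint": "https://eu-pod-07.example.com",
        "storage_backend": "eu-dc-1-storage",
    },
    ("US", "pod-03"): {
        "data_center": "us-dc-2",
        "api_endpoint": "https://us-pod-03.example.com",
        "storage_backend": "us-dc-2-storage",
    },
}

def service_config(region: str, pod_id: str) -> dict:
    """Every replica of the store returns the same answer for the same key."""
    return GLOBAL_CONFIG[(region, pod_id)]

print(service_config("EU", "pod-07")["api_endpoint"])
```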

Reliability

Reliability is paramount, and our POD architecture delivers on this:

  • Fault isolation: Our POD architecture helps us isolate faults within a single POD, preventing them from affecting other PODs. This ensures that a failure in one POD does not impact the performance or availability of services running in other PODs.

  • Redundancy: By deploying multiple PODs across different data centers and regions, our architecture inherently provides redundancy. This means that if a POD or an entire data center experiences an outage, other PODs can continue to provide service to users, ensuring uninterrupted access to applications and services.

  • Load balancing: The POD architecture allows for efficient load balancing, as our customers’ workloads are distributed among different PODs in the same data center (see the sketch after this list).

  • Faster MTTR: In the event of a failure or disaster, our POD architecture allows for faster recovery. Since our services and data are distributed across multiple PODs, backup resources can be brought online quickly, minimizing downtime and disruption to users.

  • Enhanced monitoring and maintenance: The modular nature of our POD architecture makes it easier to monitor the performance and health of individual components. This allows for proactive maintenance and faster identification of potential issues, leading to improved reliability and stability.

  • Resource optimization: Our POD architecture allows for better resource optimization, as each POD can be tailored to the specific needs of the applications and services it supports. This enables us to allocate resources more efficiently and avoid wasting resources on underutilized infrastructure.

  • Simplified scaling: The POD architecture enables easy horizontal scaling by adding more PODs as needed. This allows organizations to quickly adapt to changing demands and maintain reliable service even as their user base grows.
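
To make the load-balancing and fault-isolation points above concrete, the sketch below assigns each customer instance to the least-loaded healthy POD in its data center, so a failure in one POD only affects the instances placed on it. The names and placement policy are assumptions for illustration, not Hornbill’s actual scheduler.

```python
# Illustrative sketch only: distributing customer instances across PODs in a
# data center and skipping unhealthy PODs. Not Hornbill's actual scheduler.
from dataclasses import dataclass, field

@dataclass
class Pod:
    pod_id: str
    healthy: bool = True
    instances: list[str] = field(default_factory=list)

def place_instance(pods: list[Pod], instance_id: str) -> Pod:
    """Place the instance on the least-loaded healthy POD (simple spread policy)."""
    candidates = [p for p in pods if p.healthy]
    if not candidates:
        raise RuntimeError("no healthy POD available in this data center")
    target = min(candidates, key=lambda p: len(p.instances))
    target.instances.append(instance_id)
    return target

pods = [Pod("pod-01"), Pod("pod-02"), Pod("pod-03", healthy=False)]
for customer in ["acme", "globex", "initech"]:
    chosen = place_instance(pods, customer)
    print(customer, "->", chosen.pod_id)
```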
