Scaling by Distributing Load Across Multiple Service Instances

At the service level, the Load-Balancing Business Service may be used to distribute messages across multiple business services for processing. To enable service-level load balancing, one normally runs multiple instances of the service on different ESB Peers and routes incoming data to those instances via a Distribution Service, as illustrated in Figure 1.

The Distribution Service routes data to its output ports in proportion to the relative weights assigned to those ports. Load balancing of services across different nodes, as described above, can be set up directly by the End-User or Administrator within the application flow and requires no programming. Alternatively, the same load balancing can be achieved programmatically, since all operations on the Fiorano platform are exposed through APIs.
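As an illustration of weight-proportional routing, the minimal Java sketch below picks an output port at random in proportion to its configured weight. The WeightedRouter class and its port names are hypothetical and shown only to clarify the concept; they are not part of the Fiorano API.

```java
import java.util.concurrent.ThreadLocalRandom;

// Hypothetical weighted router: selects an output port in proportion to the
// relative weight configured for each port.
public class WeightedRouter {
    private final String[] ports;
    private final int[] cumulativeWeights;
    private final int totalWeight;

    public WeightedRouter(String[] ports, int[] weights) {
        this.ports = ports;
        this.cumulativeWeights = new int[weights.length];
        int sum = 0;
        for (int i = 0; i < weights.length; i++) {
            sum += weights[i];               // running total of the weights
            this.cumulativeWeights[i] = sum;
        }
        this.totalWeight = sum;
    }

    // Returns the port that should receive the next message.
    public String nextPort() {
        int r = ThreadLocalRandom.current().nextInt(totalWeight);
        for (int i = 0; i < cumulativeWeights.length; i++) {
            if (r < cumulativeWeights[i]) {
                return ports[i];
            }
        }
        throw new IllegalStateException("weights must be positive");
    }
}
```

For example, new WeightedRouter(new String[]{"OUT1", "OUT2"}, new int[]{2, 1}) sends roughly two-thirds of the messages to OUT1 and one-third to OUT2, which is how a faster machine can be given a proportionately larger share of the load.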

The load-balancing method described above increases overall message throughput, since incoming messages are now processed by multiple instances of a service, typically running on different machines (or by separate threads on a single machine). A good example is the fast insertion of data into a database via the Database Business Service. This technique is particularly useful when a single business service is the bottleneck of the entire business flow. The drawback is that, in general, messages may not be processed in the order in which they arrive; this loss of ordering is the trade-off for the performance gained by load balancing. An event-sequencing flow can be used to ensure message ordering, but it does not yield the same performance gains as the general-purpose method outlined above.
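To make the ordering trade-off concrete, the sketch below shows one way an event-sequencing step can work: messages are stamped with a sequence number before being fanned out, and a resequencer after the parallel stage buffers early arrivals and releases messages strictly in order. This is a minimal Java illustration of the pattern under those assumptions; the Resequencer class is hypothetical and not part of the Fiorano API.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

// Hypothetical resequencer: restores arrival order after parallel processing.
// Ordering is bought back at the cost of buffering, which is why an
// event-sequencing flow gives up some of the throughput gained above.
public class Resequencer<T> {
    private final Map<Long, T> buffer = new HashMap<>();
    private final Consumer<T> downstream;
    private long nextToRelease = 0;

    public Resequencer(Consumer<T> downstream) {
        this.downstream = downstream;
    }

    // Called whenever any parallel instance finishes a message.
    public synchronized void accept(long sequenceNumber, T message) {
        buffer.put(sequenceNumber, message);
        // Release every contiguous message starting at nextToRelease.
        while (buffer.containsKey(nextToRelease)) {
            downstream.accept(buffer.remove(nextToRelease));
            nextToRelease++;
        }
    }
}
```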

An Example of Load Balancing

Consider a case of Purchase Order (PO) processing in which PO generation is very fast compared with PO processing, which typically involves, among other things, some form of XSLT transformation. In such a process, the PO processing step becomes the bottleneck of the larger PO Processing business flow.

Since the PO processing step (which typically uses an XSLT transformation) is slower than PO generation and cannot easily be made faster on its own, it makes sense to run multiple instances of the PO processing step and share the load between them.


Figure 1: Load distribution across service instances

In this example (figure above), load is distributed across multiple instances of an XSLT Service. The three services present are:

  1. Feeder1 (Feeder Service) - The Feeder Service sends a large number of PO documents. This step is fast compared to the XSLT Service, which forms part of the PO processing step.
  2. DistributionService1 (Distribution Service) - The Distribution Service evenly distributes the load among the three XSLT Services. It can also be configured to distribute load according to the relative weight assigned to each output port, so the load can be distributed proportionately between slower and faster machines.
  3. Xslt3 (XSLT Service) - Three instances of the XSLT Service running on the same or different peers (a sketch of this parallel-processing pattern follows this list).
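As referenced in the list above, the following sketch shows the parallel-processing pattern in plain Java using the standard JAXP API: the stylesheet is compiled once into a shareable Templates object, and a pool of three workers applies it to incoming POs, mirroring the three XSLT Service instances. The stylesheet name po-transform.xsl and the fetchPurchaseOrders/deliver helpers are hypothetical stand-ins for the Feeder Service and the downstream flow.

```java
import javax.xml.transform.Templates;
import javax.xml.transform.TransformerException;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;
import java.io.File;
import java.io.StringReader;
import java.io.StringWriter;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ParallelPoTransform {

    public static void main(String[] args) throws Exception {
        // Compile the stylesheet once: Templates is thread-safe and shareable,
        // while each task creates its own (non-thread-safe) Transformer from it.
        Templates templates = TransformerFactory.newInstance()
                .newTemplates(new StreamSource(new File("po-transform.xsl")));

        ExecutorService pool = Executors.newFixedThreadPool(3); // three "instances"
        for (String po : fetchPurchaseOrders()) {
            pool.submit(() -> {
                try {
                    StringWriter out = new StringWriter();
                    templates.newTransformer().transform(
                            new StreamSource(new StringReader(po)),
                            new StreamResult(out));
                    deliver(out.toString());
                } catch (TransformerException e) {
                    e.printStackTrace(); // a real flow would route this to an error port
                }
            });
        }
        pool.shutdown();
    }

    // Hypothetical stand-ins for the Feeder Service and the downstream consumer.
    private static List<String> fetchPurchaseOrders() {
        return List.of("<po id='1'/>", "<po id='2'/>", "<po id='3'/>");
    }

    private static void deliver(String transformedPo) {
        System.out.println(transformedPo);
    }
}
```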

Note: For components such as WSStub, RESTStub, and HTTPStub, which are hosted on Peer Servers, an external load balancer needs to be used for load balancing.
