Fiorano at Open Banking Expo, London for PSD2 RTS.


Payment Services Directive


Now that we are in November 2018, the PSD2 regulatory calendar leaves banks with just four and a half months before the March 2019 deadline to publish ASPSP (Account Servicing Payment Service Provider) sandboxes for testing. It is not as simple as just publishing APIs, though: multiple technology components are required to do this properly. It will come as no surprise that the operating model PSD2 will enforce is completely unlike the way Retail Banks are used to working, and unlike the way consumers are used to interacting with them.

With the European Banking Authority’s Regulatory Technical Standards (the PSD2 RTS) understandably not being prescriptive about how banks should enable aspects such as Access to Account (XS2A), Strong Customer Authentication (SCA) and Common and Secure Communication (CSC), the technology requirements themselves can seem overwhelming. The core issue many banks face today is putting in place the technology components to meet the March and September 2019 obligations in time, while remaining able to adapt, flex and launch new products and services with minimal change as the TPP market develops.

The absolute must-haves required by banks include separate applications for:

(i) Core Banking Integration

(ii) API Management

(iii) Identity and Access Management

(iv) Security

Introducing just one piece of new technology to a bank can sometimes be overwhelming. In the case of PSD2, depending on what an individual bank may already have, introducing three or four almost becomes a programme in itself, with its own complexity, timelines and costs. With so little time left, Fiorano has been working to provide banks with regulation-specific technology that can be implemented rapidly enough to meet the PSD2 timelines.

So how does Fiorano do it?

Built on top of the class-leading Fiorano MQ, Middleware and API Management technology, the Fiorano PSD2 Accelerator brings together all the components banks require to deliver ASPSP interfaces in a single, easy-to-deploy technology product that covers all the functional requirements around XS2A, CSC, SCA and Security.

This uniform, single platform delivers easy integrations and lightweight maintenance, making it one of the fastest and most efficient routes for banks to deliver ASPSP interfaces. To top it all, the Fiorano PSD2 Accelerator also incorporates PSD2-specific limits, thresholds and exemptions as visually configurable components, which means banks implementing Fiorano will not require super-specialists to manage the environment post-implementation.

Interested? To learn more about how you can still meet the regulatory timeline using the Fiorano PSD2 Accelerator, come and meet Fiorano at stand 15 at the Open Banking Expo, taking place at the America Square Conference Centre in London on 27th November. If it can’t wait till then, contact us or email us and we will be in touch immediately.


Read more about the Fiorano PSD2 Accelerator



Why traditional ESBs are a mismatch for Cloud-based Integration

Cloud ESB

The explosive adoption of cloud-based applications by modern enterprises has created increased demand for cloud-centric integration platforms. The cloud poses daunting architectural challenges for integration technology: decentralization, unlimited horizontal scalability, elasticity and automated recovery from failures. Traditional ESBs were never designed to solve these issues. Here are a few reasons why ESBs are not the best bet for cloud-based integration.

Performance and Scalability
Most ESBs do simplify integration, but they use a hub-and-spoke model that limits scalability: the hub becomes a communication bottleneck. To scale linearly in the cloud, integration requires a more federated, distributed, peer-to-peer processing approach with automated failure recovery. Traditional ESBs lack this approach.
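A back-of-the-envelope sketch (illustrative only, not Fiorano code) makes the bottleneck concrete: in a hub-and-spoke topology every message between any pair of services transits the central hub, so the hub's load grows with the square of the number of services, while in a peer-to-peer topology the load per channel stays flat.

```python
# Compare the message load on a central hub vs. on any single peer-to-peer link.
from collections import defaultdict
from itertools import permutations

def hub_and_spoke(services, messages_per_pair=10):
    """Count the messages the central hub must process."""
    hub_load = 0
    for _src, _dst in permutations(services, 2):
        hub_load += messages_per_pair          # every message crosses the hub
    return hub_load

def peer_to_peer(services, messages_per_pair=10):
    """Count the maximum load on any single direct link."""
    link_load = defaultdict(int)
    for src, dst in permutations(services, 2):
        link_load[(src, dst)] += messages_per_pair   # one direct channel per pair
    return max(link_load.values())

services = [f"svc{i}" for i in range(10)]
print(hub_and_spoke(services))   # 900: all traffic funnels through one node
print(peer_to_peer(services))    # 10: per-link load is flat as services scale
```

Doubling the number of services roughly quadruples the hub's load but leaves each peer link untouched, which is why a federated, peer-to-peer architecture scales linearly where a hub cannot.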

Legacy Data Formats and Protocols
ESBs evolved when XML was the dominant data-exchange format for inter-application communication and SOAP the standard protocol for exposing web services. The world has since moved on to JSON, and today mobile and enterprise APIs are exposed using REST. ESBs that are natively based on XML and SOAP are less relevant in today’s cloud-centric architectures.
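To see the contrast, here is the same account-balance request expressed as a SOAP 1.1 envelope and as a JSON body for a REST call (the service names and namespace are hypothetical, chosen only for illustration):

```python
# The same request in SOAP/XML and in JSON: the envelope dwarfs the payload.
import json

soap_request = """<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:bank="http://example.com/bank">
  <soapenv:Body>
    <bank:GetBalanceRequest>
      <bank:AccountId>GB29NWBK60161331926819</bank:AccountId>
      <bank:Currency>GBP</bank:Currency>
    </bank:GetBalanceRequest>
  </soapenv:Body>
</soapenv:Envelope>"""

# REST equivalent: GET /accounts/GB29NWBK60161331926819/balances?currency=GBP
json_request = json.dumps({"accountId": "GB29NWBK60161331926819",
                           "currency": "GBP"})

print(len(soap_request), len(json_request))
```

Beyond the size difference, the JSON/REST form needs no WSDL, no XML parser and no envelope handling on either side, which is a large part of why mobile and cloud APIs converged on it.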

Security and Governance
These are key concerns for any enterprise that chooses to move to the cloud. With multiple applications in the cloud, enterprises are not always comfortable with centralized security hubs. Security and governance need to be decentralized to exploit the elasticity of the cloud. Old-guard middleware products were typically deployed within the firewall and were never architected to address decentralized security and governance.

Latency and Network connectivity
When your ESB lives in the external cloud, latency becomes a critical challenge as endpoints are increasingly distributed across multiple public and private clouds. Traversing a single hub in such an environment leads to unpredictable and significant performance problems, which can only be addressed with new designs built from the ground up with cloud challenges in mind.

Microservices – The issue of Granularity: Atomic or Composite?

In Microservices architecture, the “granularity” of a service has always been the subject of more than a few debates in the industry. Analysts, developers and solution architects still ponder the most appropriate size of a service/component (the terms “service” and “component” are used interchangeably in the discussion that follows). Such discussions usually end up with two principal adversaries:

  • Single-level components
  • Two-level components

Single-level, “Atomic” components: An “Atomic” component consists of a single blob of code together with a set of defined interfaces (inputs and outputs). In the typical case, the component has a few (two or three) inputs and outputs. The service code of each Atomic component typically runs in a separate process. Figure 1 shows an Atomic component.
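A minimal sketch of an Atomic component follows (a hypothetical API, not any specific framework; a thread stands in for the separate process the text describes): one blob of service code behind a fixed set of input and output ports.

```python
# Atomic component: a single blob of code with defined input/output interfaces.
import queue
import threading

class AtomicComponent:
    """One service: reads from its input port, writes to its output port."""
    def __init__(self):
        self.in_port = queue.Queue()     # defined input interface
        self.out_port = queue.Queue()    # defined output interface

    def run(self):
        # The whole service implementation lives in this one blob of code.
        while True:
            msg = self.in_port.get()
            if msg is None:              # poison pill shuts the service down
                break
            self.out_port.put(msg.upper())

svc = AtomicComponent()
t = threading.Thread(target=svc.run)     # stands in for a separate process
t.start()
svc.in_port.put("payment accepted")
svc.in_port.put(None)
t.join()
result = svc.out_port.get()
print(result)   # PAYMENT ACCEPTED
```

The whole component is one logical module: to deploy it elsewhere you move one unit of code, and the framework only needs to wire up its ports.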


Two-level, “Composite” components: A Composite service consists of a single ‘outer’ service with a set of interfaces. This outer service contains one or more ‘inner’ components that are used in the implementation of the main, outer component. The Composite service runs in a separate process by default, while each of the inner components runs in a separate thread of the Composite component’s process. Proponents of this approach point out that by componentizing the implementation of the Composite component, one gains greater flexibility and more opportunities to reuse implementation artifacts within Microservice implementations. Figure 2 illustrates a Composite component.
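The two-level structure can be sketched as follows (again a hypothetical API, not a specific product): the outer service owns the external ports, while each inner component runs on its own thread inside the same process, chained together to implement the outer interface.

```python
# Composite component: one outer service, inner components on separate threads.
import queue
import threading

def inner(transform, in_q, out_q):
    """An inner component: one thread applying one step of the pipeline."""
    while True:
        msg = in_q.get()
        if msg is None:
            out_q.put(None)        # propagate shutdown downstream
            break
        out_q.put(transform(msg))

class CompositeComponent:
    """Outer service whose implementation is two reusable inner components."""
    def __init__(self):
        self.in_port, self.out_port = queue.Queue(), queue.Queue()
        mid = queue.Queue()        # internal wiring between inner components
        self.threads = [
            threading.Thread(target=inner, args=(str.strip, self.in_port, mid)),
            threading.Thread(target=inner, args=(str.title, mid, self.out_port)),
        ]
        for t in self.threads:
            t.start()

    def stop(self):
        self.in_port.put(None)
        for t in self.threads:
            t.join()

svc = CompositeComponent()
svc.in_port.put("  direct debit  ")
result = svc.out_port.get()
print(result)   # Direct Debit
svc.stop()
```

Even in this toy version the extra machinery is visible: the framework must create, wire and shut down a threaded context per inner component, which is exactly the execution-model overhead discussed below.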

Atomic Microservices are as simple as they get: just a single blob of code, in a programming language of your choice. Depending on the underlying Microservices infrastructure, you may have to implement a threading model yourself, or you may be able to leverage the threading model of the underlying Microservices framework (for instance, Sessions provide a single-threaded context in the case of a JMS-based Microservices platform). Overall, Atomic Microservices offer a relatively low level of development complexity, since each is a single logical module.

By contrast, Composite Microservices have an almost romantic appeal for many developers, who are enchanted by the concept of “reusing” multiple smaller inner components to implement a larger single component. Unfortunately, although this approach is good in theory, it has several drawbacks in practice. For starters, the execution model is complicated, since the underlying framework has to identify the separate threaded contexts of the inner components that comprise the single Composite component. This carries significant performance overhead and complicates the platform framework. For reference, in the early 2000s BPEL (the Business Process Execution Language), then in vogue, followed this approach, which proved to be very heavyweight in practice. Another issue with Composite components is deployment: unlike Atomic components, they have no simple model for being auto-deployed as agents across the network.

Provided that services run as separate processes, in our experience Atomic components represent the better choice for Microservice-project implementations.