The BFSI sector (Banking, Financial Services, Insurance) relies heavily on legacy (or, as IBM would say, heritage) platforms such as mainframe and IBM i (aka iSeries, AS/400) for its most crucial “business-critical” applications. Many users have simply “forgotten” that these back-end systems exist, thanks to their various front-end mobile or web interfaces – yet these applications are very much alive and delivering stable core functionality. They not only support the most valuable business goals and revenue streams but are also a key part of these organisations’ future strategic direction.
The essence of every BFSI company is risk management. This is probably the main reason why these legacy systems are still part of the IT landscape. Whilst there are many IT outsourcing companies and migration tool providers whose business model and profit motive is to “rip and replace” these legacy applications, such migrations tend to be fraught with difficulty and run counter to the risk-management goals of most BFSI organisations.
And how, in this day and age, could we replace them with “better” or more “modern” alternatives while keeping the risks of migration fully under control? The number of companies that will dare to take such a risk is shrinking by the day.
To balance this, the strategy of “changing nothing” would of course be just as risky, for one main reason: “the age pyramid”. Let’s face facts – in the next 5 to 10 years, 75% of existing legacy skills and development resources will be moving into retirement.
Most executive management teams in BFSI have, as part of their risk management strategy, fully understood the challenge and the urgency of achieving a “generational transition”. There is a real consensus that this means going through a DevOps transformation. It is increasingly clear among BFSIs that by organising development in a more agile way and equipping teams with modern tooling, we can attract and retain young talent and so offset the risk posed by the ageing population holding legacy skills.
From strategy to implementation, there is however a definite “journey” to undertake, with many obstacles on the way and “solutions” needed to succeed. The purpose of this article is to offer guidance and advice to secure your DevOps transformation and give you back control over the “business risk” you face. The advice in this document is based on over 5 years of experience implementing DevOps strategies, and a proven track record of success with many of the world’s most successful and innovative BFSI organisations and their IT suppliers.
The 5 key steps are:
1. Define the target
It may sound basic as a first piece of advice, but too many people start off with relatively unclear goals, simply because DevOps operates at three levels: strategic, managerial and technical. For this reason alone, it is worth taking a little time to define the milestones of a transformation that will, in any case, last several years and for which the stakes are vital. A clear roadmap integrating all milestones of the change, with precise deadlines, will guide everyone involved, from developers through to management, including the security and operations teams. The objective, in a sense, is to “downplay” these legacy systems: consider them a technology like any other and, most importantly, demystify them completely for the younger generations. In the end, experience shows that the only barrier is a set of preconceived ideas that are readily dropped in practice.
Certain goals are inter-related, meaning they can be achieved simultaneously with minimal additional work, creating a “ripple effect” across multiple corporate objectives. For example, increasing the level of automation in the deployment and release process whilst “shifting defects to the left” implies:
- By definition, the more automation in the deployment process, the more rapidly developers can test new functionality in a production-like environment and the more rapidly they can fix defects and re-deploy.
- Increased automation ensures that developers do not spend their time performing “manual” smoke testing or build verification testing on a recently deployed application, thus freeing up their time to pack more functionality into each release.
- Advanced automation includes the ability to perform a seamless rollback in case of problems, therefore eliminating the risk of defects found late in the testing phase when they are more expensive.
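To make the rollback point concrete, here is a minimal sketch of a deploy step that runs an automated smoke test and restores the previous version on failure. All names here (the in-memory environment, the check_health probe) are hypothetical illustrations, not a real deployment API; in practice the health check would run BVT suites or probe live endpoints.

```python
# Hypothetical sketch: automated deploy with smoke test and automatic rollback.
# The environment dict and check_health() are illustrative stand-ins only.

def deploy(version, environment):
    """Record the new version as active in the target environment."""
    environment["active"] = version
    environment["history"].append(version)

def check_health(environment):
    """Smoke test stand-in: a real pipeline would run a BVT suite here."""
    return environment["active"] not in environment.get("known_bad", set())

def deploy_with_rollback(version, environment):
    previous = environment["active"]
    deploy(version, environment)
    if check_health(environment):
        return f"{version} deployed"
    # Seamless rollback: restore the last known-good version automatically,
    # so a defect found late never lingers in the target environment.
    environment["active"] = previous
    return f"{version} failed smoke test, rolled back to {previous}"

env = {"active": "v1.0", "history": ["v1.0"], "known_bad": {"v1.1"}}
print(deploy_with_rollback("v1.1", env))  # exercises the rollback path
print(deploy_with_rollback("v1.2", env))  # exercises the success path
```

Because the rollback is part of the deploy step itself, a failed release costs minutes rather than an emergency late-phase fix.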
2. Engage rapidly in “tooling”
Unlike application projects, where you can spend time drawing up “terms of reference” to ensure that the needs of all users are covered, this is almost impossible in a DevOps strategy because of the profusion of technologies in the distributed world. This is in complete contrast, of course, with the scarcity of choices available for legacy.
The movement feeds on “good stories” that spread virally through the network and promote the rapid emergence of genuine standards. This is how the Git source management tool became a de facto standard in the distributed world in just a few years and, as it happens, is now also a “must” for legacy code. Similarly, Jenkins as an orchestrator of integration tasks is a natural fit. Is it the best technology on the market? The question is irrelevant: it’s a standard. Finally, we should also mention Jira, which has become a reference in project management.
The goal, then, is to build a consistent and efficient DevOps chain. Either your own teams have the skills (and the time) to do the integration themselves, or you rely on a vendor who offers a complete, pre-integrated chain. This is the value proposition of ARCAD Software.
Tools are not the only driver for change, but they do offer you a framework for continuous improvement and a way to document and prove that your DevOps initiative is beginning to deliver value.
3. Adopt a policy of “quick wins”
It’s all about gaining buy-in through proof by example. Transitioning all teams at once to a multi-platform DevOps stack is far too ambitious. Choose an initial application that contains a mix of different technologies. Do, however, try to cover the entire DevOps cycle, in order to demonstrate the maximum value of the chain and go beyond a one-to-one comparison with what existed before. We need “quick wins” to prove value, but not everything can be achieved at once. Rather than subdividing the work by phases, it is better to apply small changes step by step to each application domain.
Some “quick wins” to measure include the increase in the number of releases and the “shift left” of defects described above. In our experience, however, a large number of intangible benefits also flow from a “quick win” approach.
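The “shift left” of defects can be turned into a single trackable percentage. The sketch below uses entirely hypothetical defect counts per test phase to show one simple way of expressing it; the phase names and figures are illustrative, not from any real project.

```python
# Illustrative only: hypothetical defect counts per phase, used to express
# the "shift left" of defects as one percentage you can report each release.

def shift_left_ratio(defects_by_phase, late_phases=("production",)):
    """Share of all defects caught before the late (expensive) phases."""
    total = sum(defects_by_phase.values())
    late = sum(defects_by_phase.get(p, 0) for p in late_phases)
    return (total - late) / total

# Hypothetical before/after snapshots of where defects were found.
before = {"unit": 10, "integration": 15, "uat": 20, "production": 15}
after = {"unit": 40, "integration": 25, "uat": 10, "production": 5}

print(f"before DevOps: {shift_left_ratio(before):.0%} of defects caught pre-production")
print(f"after DevOps:  {shift_left_ratio(after):.0%} of defects caught pre-production")
```

Tracking this figure release by release gives stakeholders a concrete, comparable measure of the quick win rather than an anecdote.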
In the past, IBM i / legacy application development teams have typically delivered a large amount of functionality in each relatively infrequent release (often two releases per year, with bug-fix effort in between). In this context, application owners have tended to “over-pack” each release with both high-business-value and low-business-value functionality, to ensure they can leverage the benefits as soon as the new release cycle comes around and the development team has released into production.
With the advent of DevOps, we have seen a huge benefit from a natural “prioritisation” of functionality. Since releases under DevOps are more frequent, the most “valuable” features (from the viewpoint of revenue generation or cost saving) can be delivered within weeks rather than months.
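One common heuristic for this natural prioritisation is to rank backlog items by business value per unit of effort (in the spirit of weighted-shortest-job-first). The backlog items, scores and formula below are hypothetical illustrations; the article does not prescribe a specific scoring model.

```python
# Hypothetical backlog: ranking by value-per-effort so the highest-value
# features land in the next short release cycle. Items and scores are invented.

backlog = [
    {"feature": "instant payment notification", "value": 8, "effort": 2},
    {"feature": "statement PDF redesign",       "value": 3, "effort": 5},
    {"feature": "fraud-rule tuning",            "value": 9, "effort": 3},
]

# Highest value per unit of effort first: these ship in the next release.
ranked = sorted(backlog, key=lambda item: item["value"] / item["effort"], reverse=True)

for item in ranked:
    print(item["feature"], round(item["value"] / item["effort"], 2))
```

With two releases a year, all three items would have shipped together; with monthly releases, the top-ranked item delivers its revenue impact months earlier.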
Documenting these “quick wins” is very important, so that you can demonstrate the success of your DevOps process to your stakeholders on an ongoing basis.