By Adrian Tully | 18 Feb 2020

Transforming a traditional ‘green screen’ team into an Agile Pod for IBM i

A personal and sympathetic view of taking a traditional AS/400 development team to an Agile Pod with a fully automated CI/CT/CD DevOps pipeline on the IBM i.

My 9-step journey as Global Head of DevOps Tooling and Engineering Services at HSBC, helping to transition an ‘AS/400 development team’ into an Agile IBM i Pod with a fully automated CI/CT/CD DevOps pipeline.

1. Acknowledge the resistance and listen to the specious arguments

It was never going to be easy to convince veteran developers of the need for a quantum change to become a modern Agile Pod after decades of tradition. Developers who still preferred the term ‘AS/400’ in passive protest against those demon marketeers who dropped the original name over twenty years ago were not exactly easily swayed by fashion. A wall of resistance came my way when I first introduced the idea of an Agile Pod. Their arguments sounded logical, rational, considered, fervent, convincing and at the same time… wrong. My gargantuan task was to steadily convince a global team of 1,500 traditional developers to abandon these preconceptions and make the DevOps leap. Yet I know many who have come across these same objections in traditional teams of all sizes. Here are just some of the doubts we faced on our IBM i DevOps journey (do any of these sound familiar?):

  • “Just starting RDi takes ages. I can’t be expected to start RDi multiple times a day”
  • “Once I have my green screen up it’s just a WRKMBRPDM command and bang, I’m back where I logged off yesterday”
  • “Why do we need daily meetings? We don’t have time for that”
  • “You’re tracking our time trying to look for excuses to get rid of us”
  • “Why can’t we just copy our changes to the test library?”
  • “There’s nothing wrong with a SAVF (IBM i save file) for putting changes into production”
  • “So, we’re all going to have (the burden) of storing the full source code on each of our laptops”
  • “Is there a way I could get my changes pushed back to Test after I commit them to Production?”
  • “We really need to talk about testing”

My challenge was to widen perspectives and turn these arguments on their head…

2. Move to a graphical development tool and show developers the benefits

Every argument these veteran developers made seemed sensible, yet there was something in the back of my mind which made me think that a graphical development tool like Rational Developer for i (RDi) might be able to bring something more to this table.  There had to be a way to make it add value to these veteran developers’ lives as they launched their many custom macros carefully crafted over the years.

Why were they starting RDi multiple times a day? I noticed that the development team manager didn’t spend all day programming so they would pop in and out of the task doing a bit of development here and there.  This was where all the slow startup noise was coming from.

My thinking was that if we could increase the productivity of the developers then surely their manager wouldn’t need to keep popping in and out of a development tool.  This would eliminate the need for them to act as a part-time developer and give them more time to manage the team.

I noticed the developers spent all day at their desks writing specifications, coding, setting up test environments, testing, drinking coffee and writing documentation. Hence, they would only need to start RDi once a day. There must be a way, I thought, to get them to see the productivity benefits of a graphical development tool with its color, its formatting, and its improved access to systems. Even if most programming was still fixed-format RPG, a graphical development tool should simplify and accelerate their coding. They would still have to switch to green screen for their tests and environment setups, but since RDi would stay open all day, its slow startup time compared to green screen would only be felt once per day.

My plan was to make a start and get RDi installed on all the developer laptops so that at least some would try it out. Additionally, we took some time to deliver a couple of quick RDi training sessions, just covering the basics to see if we could wow them.

3. Get Agile and use a tracking tool

Initially there was an assumption that daily meetings, however short, were a waste of time because these veterans already knew what to do. There was also a high level of mistrust about introducing new tracking tools like Jira, and a false assumption that there was a bad-faith motive behind it.

I persisted. In the first few weeks the daily scrums (short morning meetings with all team members to discuss priorities for the day ahead) might not have been everything the DevOps books said they should be, but it was a start. The intended 15 minutes initially stretched to 20, but after a short time the scrums became shorter and more productive. Gradually team members became more comfortable with the concept of sprints (short time-boxed periods where the scrum team works to complete a set amount of work). Eventually every team member started to feel more involved in the sprint goals (two-sentence descriptions of what the team plans to achieve during the sprint).

As trust increased we introduced Jira, which allowed us to track ‘issues’ (a generic Jira term for a piece of work such as a project task or helpdesk ticket) in an automated way. This helped us improve the way we closed off work. We could actually see who was working on what, and that made conflicts easier to resolve. We also found opportunities to stop blockers (anything which slowed us down) from taking hold and bringing the development stream to a halt. The performance of the team improved as the pathway to completion became a little less obstructed and a little more coordinated.

The team unit became stronger and we had more time for input from the end users. Some of the team still couldn’t bring themselves to call them ‘customers’; that was just too ‘trendy’. As a result of that input, however, we were able to start prioritizing the changes which delivered the most business benefit, and we focused on delivering smaller enhancements faster.

We could now monitor the workload better, our estimating had improved, and in turn that allowed us to deliver more of what we promised and more of it was on time.

4. Get an integrated DevOps toolset

Team members initially struggled against inertia, even when I was showing them how to save their weekends. Most veteran developers thought that spending most of the weekend running multiple CRTDUPOBJ commands (IBM i Create Duplicate Object) to copy months’ worth of changes to test libraries was ‘the only safe way’. The lack of a transparent audit trail did not seem to be a consideration.

I insisted that we needed an integrated DevOps toolset and asked the team to trust me by appealing to them to consider how much progress had already been made. The offer from the team’s manager to build one was just not practical. We just didn’t have the resources to do that, and we would have to maintain it, support it, enhance it. It would have diverted us away from our real objectives.

There weren’t that many credible DevOps toolsets suitable for IBM i on the market at the time (there still aren’t). ARCAD fitted the bill: they have a great multi-decade history with our heritage AS/400, System i, iSeries and IBM i, and their integrated DevOps tools excel at building a bridge between the old and the new worlds. ARCAD also had some really fancy modern features that the development Pod (I started calling them that now, just for fun) would frown upon initially.

There were some instant benefits which surprised even me after we started using ARCAD. ARCAD Observer (an application analysis tool) created, overnight, a rich cross-referenced repository which automatically documented all our relevant applications and the internal relationships between their files, programs and other objects. This rapidly enabled the Pod to spend less time analyzing and more time creating, and end-to-end development time decreased. The initial concern that a DevOps toolset would burden an already busy team and slow development quickly evaporated. We found time savings all along our pipeline which enabled us to do more, faster.

The underlying ARCAD repository meant that we didn’t need to worry about what needed to be recompiled after a change had been made. Members of our Pod were coding and compiling, secure in the knowledge that any objects related to their changed code had also been rebuilt.

The changes were all packaged into deliveries, and the ARCAD Builder tool made sure that all changes were compiled and delivered to test the same way, every time: a consistent approach to building the application to our standards which improved both speed and reliability.


5. Integrate your delivery mechanisms

We already had an automated process for integrating changes with test environments, but it was unbelievable to me that we didn’t have anything automated for production. The ARCAD DROPS tool could already be used for this; it was just an extension of our existing configuration. I figured we would give it a try one Sunday and see how it worked out.

I don’t normally spend my Sunday mornings putting changes into the production application, but this Sunday was different. We were all there: the development Pod manager, me, and our most experienced change management guru to actually deliver the changes via a SAVF (save file). We were warned that it could be the most boring 3 hours of our lives, as that’s how long a delivery of this volume of changes was expected to take.

My coffee hadn’t even gone cold and… “I think we’re finished”. We were 30 minutes into what was supposed to be a 3-hour marathon delivery. “Well, that went well”, I said, as DROPS had brought out the master of understatement in me.

The delivery was fully documented, we had lists of objects that had been changed, we had details of the linked programs and files that had been rebuilt. Traceability of the changes was incredible and we could tie it all back to the original work ticket through the ARCAD hooks into our Jira ticketing system. We even had a sophisticated roll-back feature.

6. Build a central source code repository (with Git)

We installed the full source code for all relevant applications on each Pod member’s laptop (RPG source is very compact, so no space issues). The development Pod was now able to spend more time working from home, which gave them more flexibility to manage their time and add balance back into their lives, not to mention a morale boost. We still held the daily scrums virtually to make sure everyone was up to speed with the progress of the sprint, but the need to be in the same office had been dramatically reduced.

Moving to a central source code repository, with Pod members committing their changes rather than needing a constant online connection to the IBM i, virtually eliminated telecommunication grief: the daily slowdown when the neighborhood kids all returned home from school at the same time and overloaded the bandwidth, or the times workmen periodically shut off the power for a couple of hours, forcing you to contemplate driving to the local coffee house just to get back online.

We implemented Git (the industry standard distributed version control system for tracking changes) and it worked really well here. The interface with ARCAD Builder and some additional ARCAD plugins made linking the source code repository and applications on the IBM i really easy.

YouTube was a great resource for helping the Pod understand Git basics. Learning about branching and merging from a large global community of coders was liberating for the Pod. Adopting Git and ARCAD for our RPG coding now made sense to everyone. We were starting to leverage the collective knowledge of a much larger community and to share code with other IBM i developers elsewhere in our global company. We were no longer stuck in a stuffy room hiding behind a bimodal shield of resistance. Using modern tools and methods made the Pod feel more secure about their jobs and their contribution to the company, and it allowed us to appeal to new staff.
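The branching and merging the Pod learned can be sketched in a few commands. This is a minimal, illustrative feature-branch workflow, not our actual repository: the branch name, file name and commit messages are made up for the demo, and it assumes Git is installed.

```shell
set -e
repo="$(mktemp -d)"            # throwaway directory for the demo
cd "$repo"
git init -q
git config user.email "pod@example.com"   # throwaway identity, demo only
git config user.name  "Pod Member"

# Commit an RPG source member to the repository
printf '**FREE\ndsply %s;\n' "'hello'" > orders.rpgle
git add orders.rpgle
git commit -qm "Add order entry program"

# Work on a fix in an isolated branch, then merge it back
git checkout -qb feature/order-fix
printf '// fix: validate order quantity\n' >> orders.rpgle
git commit -qam "Validate order quantity"
git checkout -q -              # back to the default branch
git merge -q --no-edit feature/order-fix

git log --oneline              # both commits now on the default branch
```

Each developer works offline on a local clone and pushes finished commits to the central repository, which is what freed the Pod from needing a constant connection to the IBM i.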

7. Automate your pipeline

At a certain point in this journey I noticed that the benefits were starting to roll in faster than our efforts rolled out; some were arriving for free, as welcome byproducts of what we had already done.
The Pod, now in frequent contact with a global community of coders, had started to run with DevOps themselves and were coming back to me with more ways to improve the development cycle and remove more blockers.

ARCAD had customizable build and integration macros chained together, so after a successful build we were delivering application changes automatically into test environments with the ARCAD macros. It was a nice solution and a good stepping stone, but I now had a Pod hungry for full DevOps, so I knew that even more automation would be needed soon.

ARCAD has a reliable integration with Jenkins, so after some discussions with the Pod we returned the ARCAD macros to their original (non-customized) state so that they would remain fully compatible with Jenkins going forward. We then implemented a standard Jenkins pipeline to integrate and deliver our builds into our test environments. A Jenkins CI (continuous integration) server was now running the development pipeline for our heritage AS/400 development. Bang! This had been a goal from the start, and when it went live it gave the whole Pod a real sense of achievement. We now had a paved “highway” for integration.
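The shape of such a pipeline can be sketched as a declarative Jenkinsfile. This is a hypothetical outline, not our production configuration: the stage names are illustrative, and the two shell scripts are placeholders standing in for the real steps, which invoke ARCAD Builder and the test delivery through ARCAD’s Jenkins integration.

```groovy
// Illustrative sketch only; build/deploy scripts are placeholders.
pipeline {
    agent any
    triggers {
        pollSCM('H/5 * * * *')       // poll Git for new commits roughly every 5 minutes
    }
    stages {
        stage('Checkout') {
            steps { checkout scm }   // pull the RPG sources from the Git repository
        }
        stage('Build') {
            steps { sh './arcad-build.sh' }      // placeholder: compile changed objects and their dependents
        }
        stage('Deploy to Test') {
            steps { sh './deploy-to-test.sh' }   // placeholder: deliver the build to the test environment
        }
    }
    post {
        failure {
            mail to: 'pod@example.com',          // illustrative address
                 subject: 'IBM i build failed',
                 body: "${env.BUILD_URL}"
        }
    }
}
```

The point of the structure is that every commit travels the same paved road: checkout, build, deliver to test, with failures reported automatically instead of being discovered by hand.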

8. Bring automated testing into your pipeline

We now realized that we were delivering changes to test environments so fast that there wasn’t enough time for even basic unit testing. This resulted in a lot of errors being returned to the Pod, which in turn took time out of the sprints. We had inadvertently introduced a bottleneck exactly where we didn’t need one. We were spending too long testing the changes, and this was causing problems when delivering improvements to customers (yes, I eventually convinced the Pod to say customers, not users) against an agreed timeline.

When it comes to unit test automation, nothing else out there comes close to ARCAD’s off-the-shelf capabilities, unless you put in a whole lot of effort to rebuild a decade-old piece of open source. ARCAD iUnit plugged straight into our existing pipeline. The Pod was writing and sharing their own ARCAD iUnit test cases within a day or two. The test cases tied back to specific components, so we could unit test just the specific functions that had changed rather than an entire program (we have large programs).

The Pod started using ARCAD Verifier to automate regression testing and they concentrated on building test scenarios which could be reused across various application functions. This reusability was a big time saver. This additional fully automated testing improved application reliability so much that it would have taken 3 times the staff to achieve the same using the previous methods.

The Pod then implemented the ARCAD Code Checker plugins and ‘wow’, it’s an incredible tool! It’s not a testing tool; it works on improving the quality of source code in the first place. It enabled us to define and apply coding standards across all source code and showed us where quality weaknesses existed and why. It was like an automated peer review, enforcing code quality standards across the full development Pod. We could, for example, automatically identify code that would have opened up security issues (an extra big deal for a bank) and bounce it straight back to the developer. As time went on the Pod could see code quality issues in near real time and fix them at the root while coding.

By this stage continuous integration and continuous testing had already delivered quantum improvements.  Application changes were being delivered more rapidly, more reliably and with better business value for our customers.  Customers were commenting that they were seeing their requested enhancements quicker and they were getting improved business benefit from them.  This was all as a direct consequence of the DevOps cycle we had implemented.

The value streams we were building with these tools were looking good and there were still things we needed to refine but our “super-highway” was in place.

9. Fast forward 3 years

Yes, it took us 3 years, but we have achieved a fully automated CI/CT/CD pipeline. We are now continuously delivering IBM i changes to production via an Agile development community and without manual steps. Sundays have been reclaimed, and we are able to deliver changes to the business without introducing downtime, which in turn has increased the hours the business can operate: a competitive advantage.

The old monthly release timetable has been replaced with a Value Stream dashboard viewable by managers across our global company. The improvements in application quality are now visible, since the various ARCAD tools report up to a graphical quality dashboard.

Our automated testing has resulted in a ‘shift left’ for error catching and fixing and this has made a real improvement to the costs involved in every change and the speed with which they are delivered.  All regression testing is fully automatic and unit testing has become a mouse-click.

These robust, continuous micro deliveries are having a positive impact on the business.  There are no more monthly change review meetings.  Instead, there is more involvement with the business, discussing their pain points, finding out about how we can next evolve the application to create more business benefit for them.

Last but not least, all legacy RPG code has been automatically refactored to modern free-form RPG using ARCAD Transformer. This was an investment for the future: we have highly experienced expert coders planning their retirements working alongside the brightest and youngest coders, using a shared source code repository hosted in the cloud. All of them are developing the applications for our heritage IBM i, following an Agile methodology with an automated pipeline, all under the watchful eye of DevSecOps.

That’s a far cry from the ‘all green screen’ shop of 3 years ago.

Adrian Tully

Senior Solution Architect, Arcad Software

Starting as an RPG programmer in 1988 on the System/36 on a dumb terminal, Adrian Tully has more recently managed a team of engineers for HSBC global bank delivering DevOps tools across an international customer base. With 15 years’ experience in application life cycle management, he joined ARCAD Software in 2020 as Senior Solution Architect to bring his expertise in DevOps, Six Sigma, lean methodology and process improvement to the solutions we provide.