IBM i DevOps TechTalk #23
DB2 Optimization
by the experts at ARCAD
In this episode, Ray Bernardi hosts Alan Ashley and Jeff Tickner to explore the latest DB2 database optimization enhancements in ARCAD’s V25 release. They explain how these enhancements improve efficiency, from optimizing ALTER TABLE operations to managing SQL objects, physical and logical files, and complex dependencies such as views, triggers, and constraints.
Listeners will also hear how detailed logs, spool files, and execution metrics provide greater transparency, making it easier to plan and optimize deployments. These improvements make DB2 management faster, more reliable, and less dependent on manual intervention.
The Story Behind the Mic: Podcast Transcription
R.B. – Welcome to IBM i DevOps TechTalk, where we discuss key topics and questions with ARCAD experts. I’m Ray Bernardi and I’ll be your host today. Today we’ll be exploring some of the database optimization changes that we’ve made available in our latest release. These changes really improve efficiency and functionality. The enhancements include techniques to optimize table changes, such as performing a change physical file or an alter table instead of going through a regular create. If you haven’t already updated to the latest version of ARCAD, you’ll learn a few reasons why you should during this TechTalk. Joining us today will be Alan Ashley, a senior solutions architect here at ARCAD Software, and Jeff Tickner, CTO of North America. So without further ado, let me pass this to Alan.
A.A. – All right. Thanks, Ray. So, Jeff, what were some of the driving forces behind making these database optimization changes?
J.T. – So you used the keyword optimization, but it was also that we had been adding support for newer SQL types, and they were kind of added on. We essentially looked at our whole deployment process and integrated them all together, so that any database update, whether it’s an actual file change or one of the ancillary objects like a user-defined function or a constraint or what have you, is taken into account as part of a complete set.
A.A. – Okay. So you mentioned SQL, and I believe I heard the word file. Is this optimization mainly for DDL or SQL tables? And will DDS-generated files, like physicals and logicals, be included in this?
J.T. – The biggest impact was on the SQL-based objects. But we have also optimized how we manage physical files and logicals, as far as the backup, the deployment, and the changes we make, so they’re in there too.
A.A. – And just to throw it out there for many of the customers out there, the big change is going from DDS to DDL. This is kind of a side topic, but we do have some tools at ARCAD that can help make that migration from DDS to DDL, so you can start taking advantage of some of these optimizations.
Not only from ARCAD but from IBM, because that’s what they’re going to be focusing on as well. So I say that knowing that many customers are moving to DDL formats. Briefly, what are some of the enhancements that are impacting SQL tables from an ARCAD perspective?
J.T. – One of the things we did was revisit the alter table support we were already using and look at our options there. We compared CREATE OR REPLACE versus ALTER, tested both, and realized that ALTER really is much better than CREATE OR REPLACE. We looked at our backup. We looked at all of the possible attributes and settings of a table that could impact the promotion. Constraints were a big thing.
We have some customers with lots of constraints, and those take significant time on production when you have a lot of data. So we spent some time optimizing those to make sure that we only rebuild a constraint when we actually have to, because if you have millions or billions of records, rebuilding a constraint, or even just re-enabling it, takes a lot of time.
So, every aspect of the table beyond the alter was analyzed and optimized.
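To make the alter-versus-replace trade-off concrete, here is a minimal DB2 for i SQL sketch. The library, table, and column names are hypothetical, and the statements illustrate the general technique rather than what ARCAD actually generates:

    -- Hypothetical table as the developer maintains it in source:
    CREATE TABLE APPLIB.ORDERS (
      ORDER_ID INTEGER NOT NULL PRIMARY KEY,
      CUST_ID  INTEGER NOT NULL,
      ORDER_DT DATE    NOT NULL
    );

    -- Dropping and recreating (or replacing) the table to add a column
    -- forces the data to be copied and indexes and constraints rebuilt.
    -- An ALTER changes the definition in place and leaves existing rows,
    -- indexes, and enabled constraints untouched:
    ALTER TABLE APPLIB.ORDERS
      ADD COLUMN SHIP_DT DATE;

    -- Adding (or re-enabling) a referential constraint validates every
    -- existing row, which is what gets expensive at millions of records:
    ALTER TABLE APPLIB.ORDERS
      ADD CONSTRAINT ORDERS_CUST_FK
      FOREIGN KEY (CUST_ID) REFERENCES APPLIB.CUSTOMER (CUST_ID);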
A.A. – Now, one of the things that you mentioned was UDFs, and the views. Did anything change around how views are recompiled, the recompile order, or how they’re actually handled?
J.T. – Yes, and that’s what I was saying: we’re now looking at it as a complete set. Before, we always looked at the files first and then the other objects; now we look at the whole picture, whether you’re using a UDF or a mask or a trigger or what have you. We can now identify that a user-defined function has to be in place because it defines a column or a field that a view is going to use, and that was always a problem for us.
Now, again, we’re assessing the whole package and creating the objects in the correct order, both within an object type, looking at constraints and which table has to go first, and across the relationships between views. That was another problem. We used to go only three levels of views, and now we go eight levels, and I don’t think anyone goes higher than that.
But people are nesting views, and there are real interrelationships and interdependencies that we’re analyzing, identifying, and deploying in the correct order to avoid any problems.
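As an illustration of the ordering problem Jeff describes, here is a sketch in DB2 for i SQL, with hypothetical names, where each object can only be created after the ones it depends on:

    -- The UDF must exist first, because a view references it:
    CREATE OR REPLACE FUNCTION APPLIB.NET_PRICE (
        PRICE DECIMAL(9,2), TAX_RATE DECIMAL(4,3))
      RETURNS DECIMAL(9,2)
      LANGUAGE SQL
      RETURN PRICE * (1 + TAX_RATE);

    -- Level-1 view: depends on the UDF and the base table:
    CREATE OR REPLACE VIEW APPLIB.ORDER_PRICES AS
      SELECT ORDER_ID, APPLIB.NET_PRICE(PRICE, 0.081) AS NET_PRICE
      FROM APPLIB.ORDER_LINES;

    -- Level-2 view: depends on the level-1 view, and so on; each
    -- additional level of nesting adds another pass to the analysis:
    CREATE OR REPLACE VIEW APPLIB.BIG_ORDERS AS
      SELECT * FROM APPLIB.ORDER_PRICES WHERE NET_PRICE > 1000;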
A.A. – Since you said nobody does more than eight, I just want to put it out there on record: that doesn’t mean you have to go do nine just to prove Jeff wrong. Let me just throw that out there, because I know some people out there will take it as a challenge.
J.T. – I think we can support more, but each additional level takes longer, because we add more passes over the views.
A.A. – That makes sense.
J.T. – I don’t want to say eight is an absolute. I just know that one customer had seven levels, and we now work fine with that.
A.A. – So we’ve got all these optimizations in place. Maybe for one of our customers, or for somebody looking to move to ARCAD, there are also some internal process flows that come up, the exports and deploys that we take advantage of within ARCAD. So I’m guessing some help may be needed to modify some of those internal processes as well to take advantage of these changes.
J.T. – Yes. As some of our customers are familiar with, during an upgrade we don’t update a customized process. So if your deployment process or another internal process had to be modified to meet your business requirements, we don’t just arbitrarily change it when these new things come out. You could upgrade to V25 without making any changes to those processes, what we call macros.
But you’re not going to be able to take full advantage of these changes. So there is some requirement to update your deployment process especially, but there are other processes in there that can be impacted, the packaging, etc., that might have to be changed. So there is some analysis and updating to do to take full advantage of all of the optimizations.
A.A. – Okay, good. Because I know in a lot of our upgrades, everybody likes to have somebody on standby to help through the processes. But it’s good to know going into it that we’re probably going to have to stop, go through them, and figure out which ones we want to take advantage of now versus later.
One of the things you mentioned was this alter statement. And I know from talking to different people, one of the things that used to happen was there really wasn’t a good way to see what the alter statement was. When it got issued after some of these changes, it was really hard to find what was actually written. Now it’s in a spool file. Is that part of this?
J.T. – Yes, there’s a much higher level of feedback than there used to be. We’ve been doing this optimization for a long time; we just made it a lot better and a lot wider, and we also greatly enhanced the feedback about what’s happening to each object. It was kind of cryptic before: all you could tell was that it ran a lot faster because it was optimized. Now we actually create a spool file of the alter statement.
So you can see what was used. We still don’t support alter statements as a way to change a table, meaning the developer doesn’t write an alter statement. They update the existing create and we generate the alter. But now you can see it, and we can see what happens to each of the dependent objects. We have a much higher granularity of feedback about how each object was managed.
In some cases we weren’t changing a view or a logical at all, but it was hard to tell. Now we have much clearer indicators of what we did.
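As a rough illustration of that workflow, with hypothetical names and statements rather than ARCAD’s actual generator output: the developer edits only the CREATE source, and the deployment tooling derives the ALTER that ends up in the spool file:

    -- Developer edits the CREATE source: NAME widened from VARCHAR(40),
    -- LOYALTY added as a new column.
    CREATE TABLE APPLIB.CUSTOMER (
      CUST_ID INTEGER     NOT NULL PRIMARY KEY,
      NAME    VARCHAR(80) NOT NULL,
      LOYALTY CHAR(1)     NOT NULL DEFAULT 'N'
    );

    -- A generated ALTER equivalent to that change, of the kind that can
    -- now be inspected in a spool file after deployment:
    ALTER TABLE APPLIB.CUSTOMER
      ALTER COLUMN NAME SET DATA TYPE VARCHAR(80)
      ADD COLUMN LOYALTY CHAR(1) NOT NULL DEFAULT 'N';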
A.A. – Now, one of the things I know auditors like is different reports and the reasons things happened. And in the transfer log, it looks like we’re including some markers to say how things were handled, whether there was a change or a delete or something to that effect. That’s going to be a nice little step. Like you said, adding markers to figure out what happened, because auditors always love to know why this changed and that didn’t. So a nice optimization, even outside the database optimization itself, is the reporting of the actions that come out of it.
J.T. – Yes. And there’s actually a new message ID that tells you about the individual optimization of a file: what was optimized, how long the analysis took, and how long the actual execution of the alter took. So it’s pretty easy to identify the improved metrics of the deployment, because we’re providing that level of detail now.
A.A. – I guess also, now that we’re providing that information, they can start planning deployments knowing how long some of these optimizations take: this one runs five minutes, this one runs 20, this one runs 35. They start to see a pattern that they can build into their process flow for deployments into the next environments as well.
J.T. – Yes. Just remember, it’s critical that the QA environment have a data set similar to production, as far as size and number of records, so you get accurate feedback. A lot of customers have a subset of data in their QA environment. For the testing of this new process, we were using tables with 2.5 billion records.
And you could really see the dramatic impact when that many records went through the old process versus our new process. So we really stepped up the testing to use a realistic database with a huge number of records and find out.
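One way to sanity-check that QA volumes resemble production is to compare row counts from the IBM i catalog on each system; a minimal sketch, assuming a hypothetical library name of APPLIB:

    -- QSYS2.SYSTABLESTAT is a DB2 for i catalog view with per-table
    -- statistics; run on QA and on production, then compare the output:
    SELECT TABLE_SCHEMA, TABLE_NAME, NUMBER_ROWS
    FROM QSYS2.SYSTABLESTAT
    WHERE TABLE_SCHEMA = 'APPLIB'
    ORDER BY NUMBER_ROWS DESC;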
A.A. – It’s nice that somebody let us borrow a database with that many records in it, because you don’t really get a true indication from a test until you really put it to work. And in this case, we’ve taken that work and made sure it went through the optimization properly, worked out all the little changes, and even optimized things further through some of our testing.
So that’s always good to hear. So, Jeff, any last words on database enhancements within ARCAD? Maybe some of the things you think may be coming down the road, or anything like that?
J.T. – I’m just excited that a lot of the issues we used to have to work around, like the order of operations for user-defined functions, are now being handled correctly. As a consultant who implements ARCAD, when customers used some of the more exotic SQL types, I knew I had to step in and help ARCAD along, and now ARCAD is taking care of it on its own.
So it’s made my life a lot easier on new implementations, besides saving customers a lot of time on deployment.
R.B. – Thanks, guys. That’s a lot of information. It sounds like ARCAD has really optimized database management: improved management of SQL components, optimized commands for handling complex interdependencies, and an enhanced deployment process for DB2 on IBM i. All this and a whole lot more is available in our new release, V25. If you haven’t moved to it yet, you should.
Our Hosts

Ray Bernardi
Senior Consultant, ARCAD Software
Ray is a 30-year IT veteran and currently a pre/post-sales technical support specialist for ARCAD Software, an international ISV and IBM Business Partner. He has been involved with the development and sales of many cutting-edge software products throughout his career, with specialist knowledge in Application Lifecycle Management (ALM) products from ARCAD Software covering a broad range of functional areas, including enterprise IBM i modernization and DevOps.

Alan Ashley
Solution Architect, ARCAD Software
Alan has been in support and promotion of the IBM i platform for over 30 years and holds the Presales Consultant for DevOps on IBM i role with ARCAD Software. Prior to joining ARCAD Software, he spent many years in multiple roles within IBM, supporting customers in everything from HA and DR to application promotion to migrations of IBM i to the cloud. In those roles, he saw firsthand the pains many have with Application Lifecycle Management, modernization, and data protection. His passion in those areas fits right in with the ARCAD suite of products.

Jeff Tickner
DevOps Consultant, ARCAD Software
Jeff Tickner is CTO, North America for ARCAD Software. He has worked in the Application Lifecycle Management sector on the IBM i for 22 years. He leads client engagements in product implementation and training, including ARCAD for DevOps, ARCAD Transformer for application modernization, and ARCAD Verifier test automation. Jeff lends his expertise in the DevTestOps field as a frequent speaker at conferences around the world.