A foundational idea of agile software development is to build applications iteratively and put them into operation early. For new software, that works well. But how can we proceed when existing, complex software has to be replaced?
Agile methods of software development arise from the insight that, in our complex and dynamic world, little about the future is certain. Software development projects cannot be planned in detail up front, because at the end of a long implementation phase the original requirements are often no longer relevant. That’s why agile development advances in steps. First, a so-called MVP is built: the Minimum Viable Product. It represents the smallest possible core of the software to be developed that can function on its own. From there, the software is developed further iteratively, so that – at least in theory – it is fully runnable and productively usable at the end of each step. Through this ongoing cycle of refinement, production deployment and learning from actual use, the product can be developed in line with current, real customer needs. This process ensures that real value is delivered early on.
What is the MVP when replacing an existing application?
Agility is a very elegant and practicable approach to software development. But what if we don’t have to build a new application, and therefore don’t have to discover customer needs from scratch? What about replacing or migrating an existing, complex application that, in the worst (and therefore typical) case, is also embedded in an entire application landscape? Under these circumstances it is often difficult to start with a minimum viable product and improve it iteratively. If we replace an application, the new application needs to offer at least the same features as the old one. Its interfaces, too, are often indispensable for the surrounding processes to keep working.
So is an iterative, agile approach inappropriate here? Do we have to fall back on classical project management and the waterfall model?
Given the requirements just identified, a classical project management approach might seem the obvious choice. Nevertheless, it would be advantageous to proceed iteratively: the new application could then go into operation early rather than only at the end of the project. Apart from that, in a migration there is the risk that the requirements change before the project ends. Worse still, the software’s use case could be dropped entirely – because of a regulatory change, because the use case is cancelled altogether, or for some other reason. In all these cases, an approach whose results can be used early on – even if only partially – is advantageous.
How can that work? It’s not easy. To proceed iteratively in a migration project, we have to identify blocks of functionality that can run independently and build upon each other. This is the intellectual challenge of “agilizing” a migration project.
Migrating iteratively means running in parallel for a long period
To “agilize” a migration project, the idea is to run the first version of the new application in parallel with the existing one. All use cases that can be handled with the first blocks of functionality are executed on the new application instead of the old one. Complicated use cases remain on the old application until the corresponding features are available on the new one. We develop these feature blocks iteratively and regularly put them into operation. This way we can migrate the use cases step by step and profit from the new application as early as possible – for example through shorter processing times, auditability of the processes, or adherence to compliance requirements. A very suitable aid for the product owner when planning the feature blocks is story mapping.
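To make the routing decision concrete, here is a minimal sketch in Python. All names and seller attributes are our own illustrative assumptions, not the actual implementation: a use case is only routed to the new application once every feature block it depends on has been implemented there.

```python
# Sketch: route each seller to the new application only if all feature
# blocks their use case requires are already implemented there.
# Block names and seller attributes are hypothetical.

IMPLEMENTED_BLOCKS = {"basic_import", "delivery_times"}  # grows with each iteration

def required_blocks(seller):
    """Feature blocks a seller's feed depends on (illustrative rules)."""
    blocks = {"basic_import"}
    if seller.get("exports_to_austria"):
        blocks.add("cross_border_at")
    if seller.get("uses_varieties"):
        blocks.add("varieties")
    return blocks

def route(seller):
    """Return 'new' if the new application can fully handle this seller."""
    return "new" if required_blocks(seller) <= IMPLEMENTED_BLOCKS else "old"

# A simple seller can be migrated; one relying on a pending block stays put.
print(route({"name": "simple-shop"}))                         # new
print(route({"name": "phone-shop", "uses_varieties": True}))  # old
```

As more feature blocks go live, `IMPLEMENTED_BLOCKS` grows and more sellers become eligible for the new route automatically.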
Of course, this process also requires extra thought about how running the old and new applications in parallel can work. We have to consider what happens if a supposedly simple use case is migrated to the new application but then turns out to be unexpectedly complicated and not yet fully implemented. Data dependencies between the new and old applications can also be a big obstacle. All these problems can only be resolved in the concrete, individual case.
Agile migration following idealo’s example
Enough theory. A practical example from idealo, Germany’s largest price comparison and shopping portal, makes the idea clearer. Every day, idealo imports data for about a billion offers from online sellers using a specialized import application. Because the volume of data had been growing for years, the import application’s database was at some point at the limit of its performance and could no longer be scaled sensibly. Rebuilding the whole application with an optimized import logic was inevitable. Of course, the import application had innumerable complicated features for different sellers and was embedded in diverse processes. And it was not clear when the obviously overloaded database would break down completely. In that case – which had to be avoided at all costs – it would have been impossible to import anything at all: a catastrophic scenario.
To migrate iteratively, we had to divide the application’s features into blocks. As mentioned, this division into independent feature blocks is the hardest challenge. In our case we divided the features according to their use by different groups of sellers. For example, some German sellers also send their data to Austria so that their offers appear on both the German and Austrian idealo sites, with differences in taxes, delivery times and other details. We therefore scheduled the import for German sellers whose data also goes to Austria for a later feature block; those sellers could not be migrated until that block was implemented. Another feature block covered handling varieties of offers. An example is a phone that is sold in different colors or with different memory sizes. If these specifications are delivered not as independent offers but as options of a single offer, we speak of varieties. Importing varieties was likewise planned for a later feature block. Until it was implemented, the new application could only handle sellers that didn’t use varieties in their offers.
Even if this process sounds – hopefully – comprehensible so far, it is not so easy to identify which concrete feature blocks an individual seller requires. The application imports data from about 560,000 sellers, so identifying the feature blocks each seller requires is anything but trivial. At the same time, we have to hope that a seller who has not been using varieties doesn’t suddenly start doing so. That would be a problem, because the seller would already have been migrated to the new system, where this feature is not yet available.
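One way to guard against exactly this problem is a check before each import run of an already-migrated seller. The following Python sketch is our own illustration of the idea (function names and feed attributes are assumptions): detect the features a seller’s current feed actually uses and flag the seller for fallback if any of them are not yet supported by the new application.

```python
# Sketch of a safety check for migrated sellers: if a feed starts using a
# feature the new application doesn't support yet, flag it for fallback.
# Feature names and offer fields are hypothetical.

NEW_APP_FEATURES = {"basic_import", "delivery_times"}

def feed_features(offers):
    """Features actually used by a seller's current feed (illustrative)."""
    features = {"basic_import"}
    if any(offer.get("variety_options") for offer in offers):
        features.add("varieties")
    return features

def check_migrated_seller(offers):
    """Return ('ok', []) or ('fallback', [unsupported features])."""
    unsupported = feed_features(offers) - NEW_APP_FEATURES
    if unsupported:
        return ("fallback", sorted(unsupported))
    return ("ok", [])

# A seller that suddenly delivers varieties would be caught before import.
offers = [{"title": "Phone X", "variety_options": ["64GB", "128GB"]}]
print(check_migrated_seller(offers))  # ('fallback', ['varieties'])
```

Such a check turns the “we have to hope” into something the system can detect automatically on every run.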
(Test) automation pays off
To get these challenges under control, we invested in tests and automation – in the end, about one third of the total effort. In return, the migration was fully automated. Our setup contained two test tracks: the old and the new import routes. All of a seller’s offers were imported in parallel with both the old and the new application, and the results were compared. If there were no unexplained discrepancies, the seller was automatically switched over to the new import application. This way, sellers were migrated automatically as soon as it was possible for them.
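The core of that dual-track setup can be sketched in a few lines of Python. This is a simplified model, not the production code: both pipelines are stand-in functions, and the comparison here is an exact match on offer data, whereas the real system tolerated explainable discrepancies.

```python
# Sketch of the dual-track comparison: import a seller's offers with both
# pipelines, compare the results, and migrate the seller only if they match.
# The two import functions are trivial stand-ins for the real pipelines.

def import_old(offers):
    """Stand-in for the old import pipeline."""
    return {o["id"]: o["price"] for o in offers}

def import_new(offers):
    """Stand-in for the new import pipeline."""
    return {o["id"]: o["price"] for o in offers}

def compare_and_migrate(seller_id, offers, migrated):
    """Run both pipelines; switch the seller over if results agree."""
    old_result = import_old(offers)
    new_result = import_new(offers)
    discrepancies = {
        k for k in old_result.keys() | new_result.keys()
        if old_result.get(k) != new_result.get(k)
    }
    if not discrepancies:
        migrated.add(seller_id)  # seller now uses the new application
    return discrepancies

migrated = set()
offers = [{"id": 1, "price": 499}, {"id": 2, "price": 999}]
compare_and_migrate("shop-42", offers, migrated)
print("shop-42" in migrated)  # True
```

Run continuously over all sellers, a loop like this migrates each one as soon as its results match – which is exactly the “automatic migration as soon as possible” described above.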
How did this iterative process, with its rather extensive test effort, benefit us?
The biggest advantage was that we were able to quickly relieve the old application of the first migrated sellers, shortly before the system would have collapsed. With this, the critical business risk sank significantly. We also profited early from additional features of the new application – for example, individual control of the import frequency per seller, even if only for the sellers migrated at that point. Beyond that, our thorough testing had the side effect that we discovered bugs in the old application and fixed them, which also increased the quality of the offer data in the old application.
It is worthwhile to consider migrating complicated applications with an agile, iterative approach. Finding suitable iteration steps is not simple, because lots of features and interfaces have to be available from the start. The art lies in dividing the use cases, or the data to be processed, according to the complexity of the features they require. Blocks of features can then be found that can be implemented in separate iterations, with a subset of the use cases, data and so on migrated each time. Running the old and new applications in parallel brings its own challenges, which have to be well thought out; automation and testing are the means of choice to keep them under control. The total expense of such a project may therefore end up somewhat higher than with a waterfall migration. In return, value is realized earlier, and avoiding the significant risk of a big-bang migration clearly outweighs the extra cost in most cases.