TestFlask: Mocking and testing made simple

Welcome to the TestFlask blog

This blog is an important step both for me personally and for TestFlask as an open source project. By keeping the blog up to date and alive, I hope to maintain a clearer roadmap and to be able to set certain milestones. Sharing what you have done with the community is also, I hope, an important factor in staying motivated.

As this is the first blog post about TestFlask, I think I should give a little history of why my team at the company needed something like TestFlask.

I work at Softtech, which develops all the software bits for the largest private bank in Turkey, a bank with millions of customers. I was on Softtech's architecture team, developing common frameworks, developer tools, Visual Studio extensions, and so on. At the start of 2017, however, I changed teams and joined the PaymentHub project, which aims to consolidate all the scattered payment-specific implementations inside legacy and existing applications. This is no easy goal: there are millions of transactions per month, with many different and exceptional rules out there, and, most importantly, most of these bits reside in a legacy COBOL/IMS system that has powered most of the bank's accounting for decades.

My team is part of the effort to migrate these legacy apps to open systems, mainly .NET and Java, but the transition must not be a breaking one; it must be smooth and steady. To achieve such a complex transformation with thousands of transactions streaming per minute, regression testing is of utmost importance. For every feature we add or migrate from the old system, we need a lot of test effort to keep us on the safe side. We have implemented dozens of services, batch apps, and cron jobs that integrate with IMS, Java and .NET services, FTP servers, network resources, and more.

Development on a system with such a variety of integrations is, as you may have guessed, much like shopping blindfolded in a glass store full of other blindfolded people, each one moving pieces around. In such an environment, you are mostly left with no clue where the data you get is coming from or when it was altered. Isolating your own test data is an absolute necessity. Integration tests are easy to break, because customer or account data (which can be highly layered and complex) can be altered in legacy systems by other teams without you even noticing. We can use integration test systems and automate them; however, every protocol out there requires a different integration host (for IMS, SOAP services, TCP, DB calls, etc.), which in time becomes difficult to maintain.

What about unit tests? We certainly write lots of unit tests, but mocking the data on our own is sometimes too much effort. TestFlask is certainly not a tool to replace unit tests. Unit tests are without a doubt invaluable assets to any development team, but sometimes we need to quickly create mock data for the backend, end to end, spanning all the integrations invoked throughout a single service call. Unit tests are not really suitable for testing a whole service operation: you need to break it into smaller isolated pieces and then cover all the different combinations between the internal integrations of those pieces. By the end, you think you have covered all the gaps and edge cases. However, in real user acceptance tests, you realize that some of your tests were written with a very naive approach, maybe because of a lack of documentation in the systems you integrate with, or because of little hacky alterations made by other teams to a structured data format to work around messy situations. Recording a real integration run helps you uncover those blind spots on the very first pass, so it sometimes has advantages over unit tests for external calls. A hand-mocking sketch follows below to make the effort argument concrete.
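Here is a rough sketch of what hand-mocking a single downstream integration looks like, using Moq and xUnit. All the names here (PaymentService, ICustomerHost, and friends) are hypothetical, invented purely for illustration; a real payment operation touches several such backends at once, and every edge case multiplies the setup:

```csharp
using System;
using Moq;   // hand-written mocks via Moq: one plausible choice, not what TestFlask uses
using Xunit;

// Hypothetical integration points a single payment operation might touch.
public interface ICustomerHost { Customer GetCustomer(string customerNo); }
public interface IAccountingHost { Balance GetBalance(string accountNo); }

public record Customer(string No, string Segment);
public record Balance(decimal Amount, string Currency);

public class InsufficientBalanceException : Exception { }

// A deliberately tiny stand-in for a real service operation.
public class PaymentService
{
    private readonly ICustomerHost customers;
    private readonly IAccountingHost accounting;

    public PaymentService(ICustomerHost customers, IAccountingHost accounting)
    {
        this.customers = customers;
        this.accounting = accounting;
    }

    public void Transfer(string from, string to, decimal amount, string customerNo)
    {
        var customer = customers.GetCustomer(customerNo); // in reality, drives fees/limits
        var balance = accounting.GetBalance(from);
        if (balance.Amount < amount) throw new InsufficientBalanceException();
        // ... post the transfer ...
    }
}

public class PaymentServiceTests
{
    [Fact]
    public void Transfer_Fails_When_Balance_Is_Insufficient()
    {
        // Every integration needs its own hand-built mock, and every edge case
        // needs yet another combination of canned answers.
        var customers = new Mock<ICustomerHost>();
        customers.Setup(c => c.GetCustomer("123")).Returns(new Customer("123", "Retail"));

        var accounting = new Mock<IAccountingHost>();
        accounting.Setup(a => a.GetBalance("TR-1")).Returns(new Balance(5m, "TRY"));

        var service = new PaymentService(customers.Object, accounting.Object);

        Assert.Throws<InsufficientBalanceException>(
            () => service.Transfer("TR-1", "TR-2", 100m, "123"));
    }
}
```

Multiply this setup by every integration and every data combination in a real operation, and the maintenance cost becomes obvious.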

TestFlask is a test recording/replaying approach somewhere between unit tests and integration tests. We record incoming data from external apps just as if we were running an integration test. After recording, however, we cut our ties with the main testing environment, keep the data to ourselves, and replay it in our isolated environment. We use TestFlask tests not just for regression testing but also when adding features to our business layer, since we can first record a base case for a scenario that works as-is.
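As a mental model, and only as a minimal sketch of the idea rather than TestFlask's actual API, imagine every external call going through a proxy that either records the real response or replays a previously stored one:

```csharp
using System;
using System.Collections.Generic;

// Illustrative only: in Record mode the proxy calls the real backend and
// stores the response keyed by the invocation; in Replay mode it answers
// from the store and never touches the backend.
public enum TestMode { NoMock, Record, Replay }

public class RecordingProxy
{
    private readonly TestMode mode;
    private readonly IDictionary<string, object> store; // in reality: a durable scenario store

    public RecordingProxy(TestMode mode, IDictionary<string, object> store)
    {
        this.mode = mode;
        this.store = store;
    }

    public T Invoke<T>(string operation, string requestKey, Func<T> realCall)
    {
        var key = $"{operation}:{requestKey}";
        switch (mode)
        {
            case TestMode.Record:
                var response = realCall(); // hit the shared test environment once
                store[key] = response;     // keep the data to ourselves
                return response;
            case TestMode.Replay:
                return (T)store[key];      // isolated: no external systems involved
            default:
                return realCall();
        }
    }
}
```

On the first run you record against the real environment; every run afterwards replays from the store, so other teams' changes to shared test data can no longer break the test. TestFlask itself is of course more involved than this toy; the sketch only captures the record-or-replay decision described above.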

I have developed, and am still developing, TestFlask in my spare time, and I still try to maintain it at home, but that's another story deserving its own blog post.

Currently we have hundreds of different scenarios recorded, and we replay those tests whenever we take on a new change request or migrate an existing feature from the legacy systems. It greatly reduces our regression work and keeps us confident going into the real integration, smoke, and user acceptance tests.

Of course, TestFlask is not a silver bullet; it has disadvantages of its own. When a service you integrate with changes rapidly, recorded scenarios become hard to maintain. We come up with different solutions for different situations, but patching existing scenarios is not always a smooth experience; sometimes re-recording a heavily modified scenario is easier. In another post, I will try to lay out the disadvantages of, and missing features in, TestFlask. Such brainstorming is always necessary to keep TestFlask up to date and competent, since it is built to sustain and remain useful in software systems with high entropy.

Finally, for those who are willing to play around with TestFlask and its bits, I highly encourage you to contribute to TestFlask, open up issues, or even update the TestFlask website with missing documentation. It's been a year since TestFlask appeared as an open source project. For the last three months I have slowed down a bit, but I am still considering and rethinking some new features, as well as maybe some redesign. Any help would be greatly appreciated.