[If you missed Part 1 of "A brief history of data quality or – the birth of OMrun", click here.]

Imagine a meeting room (pick the one without windows…) with 25+ application managers lined up around one big table. One after the other, they would report their project status, with focus on the next MDP. And as you can imagine, the projects were always on "green", especially when the MDP date was still far in the future.

But the closer the MDP date came, the harder it became for application managers to "hide" their challenges in delivering their tasks on time and with the expected quality. Since there were so many interfaces between all the applications, the fellow delivery managers had quite a good overview of who might be in trouble delivering and who wouldn't.

For quite some time, Frank himself was part of this meeting and observed it with a mixture of joy and frustration. Joy, because it was somewhat funny to watch project leaders stammer through explanations of their difficult project status while still managing to report "green". And frustration, because these meetings were long and tiring, and at times felt more like a theatrical performance than a serious business meeting. But most of all: these meetings were inefficient and extremely expensive, considering that 25+ senior delivery managers sat in one room for more than three hours every few weeks.

Frank had always been fascinated by the possibilities of data-driven decision-making, and it was no surprise that he took his chance to think a little further. After some reasoning, a business idea came to his mind: why rely on the assessments, wordy declarations, and "opinions" of application managers when all the evidence is right there in the data?! After all, whether or not an interface is working, or whether or not an application is doing what it should, can be derived directly from the data.
So how about using data as an early-detection system? Shouldn't it be possible to create a tool to automate this process? You would need a tool, Frank reasoned, that is able to at least do the following (a rough sketch of the idea follows the list):

  • compare data based on rules
  • cope with different data formats
  • measure and display rule violations to simplify further assessment
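
To make the idea concrete, here is a minimal sketch of what such a rule-based comparison could look like. It is written in Python with made-up sample data and a single hard-coded rule; OMrun itself is, of course, a configurable framework rather than a script like this.

```python
import csv
import io
import json

# Hypothetical sample data: the same account records delivered by two
# applications in different formats (CSV vs. JSON).
CSV_FEED = "id,balance\n1,100.00\n2,250.50\n3,99.90\n"
JSON_FEED = '[{"id": 1, "balance": 100.0}, {"id": 2, "balance": 250.5}, {"id": 3, "balance": 87.0}]'

def load_csv(text):
    # Cope with one data format: normalize CSV rows into {id: balance}.
    return {int(r["id"]): float(r["balance"])
            for r in csv.DictReader(io.StringIO(text))}

def load_json(text):
    # Cope with another data format: normalize JSON records the same way.
    return {int(r["id"]): float(r["balance"]) for r in json.loads(text)}

def compare(source, target, tolerance=0.01):
    # Compare data based on a rule: matching ids must carry the same balance.
    violations = []
    for key, expected in source.items():
        if key not in target:
            violations.append(f"id {key}: missing in target")
        elif abs(target[key] - expected) > tolerance:
            violations.append(f"id {key}: expected {expected}, got {target[key]}")
    return violations

# Measure and display the rule violations for further assessment.
violations = compare(load_csv(CSV_FEED), load_json(JSON_FEED))
print(f"{len(violations)} rule violation(s) found:")
for v in violations:
    print(" -", v)
```

Run against the sample feeds, the script reports one violation (the balance of id 3 differs between the two sources): exactly the kind of early, data-based signal Frank had in mind, instead of an application manager's verbal "green".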

This was the starting point, and there was no stopping Frank from developing an instrument that would later evolve into the powerful OMrun framework. So, thanks to a rather inefficient and tiring series of meetings, the world can now use a highly configurable standard framework not only to measure data quality in heterogeneous system environments, but also to perform data migrations and anonymize data.

The rest is history – and still evolving!