There are roughly 28,000 currency pairs, and exchange rates change throughout the day, but at a minimum most banks care about the daily close of each FX rate. Now imagine trying to write a rule for each currency pair. You'd have to know the relationship and adjust a static threshold for each of the 28K pairs every couple of days just to keep the rule intact. Our minds quickly jump to the conclusion that we might be able to solve this with simple math. We can get closer using averages or percent-change formulas, but those quickly come up short because some currencies routinely fluctuate more than others. Ah, I've got it: variance, that college class I thought I'd never use, will finally come in handy. Our minds graduate from stats 101 to 201, and we could consider the individual variance of every combination. But even this only gets us so far, because time is an important dimension, and the length of the lookback window is often tricky to choose.

The problem gets harder when you run your basic stat model and receive a pile of false-positive alerts. You quickly realize that the signal-to-noise ratio matters, confidence factors matter, and down-training individual currencies that don't fit your statistical model matters. Knowing whether you copied the data incorrectly, truncated nine digits of decimal precision, or whether the source provider sent the wrong information matters too. You need the ability to flag an exception in production on a single currency pair without flagging the other 27,445 pairs. You need a feedback loop so that data steward interactions are captured and learned from, rather than forcing the same corrective action over and over. And what happens when there is a typo in a currency pair, or a single pair goes missing entirely?

The answer is that rules don't scale, and we need much more than one-off statistical metrics to have a robust and trustworthy DQ program.
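To make the "stats 201" step above concrete, here is a minimal sketch of a per-pair rolling z-score check. The DataFrame layout (pair, date, close columns) and the 30-day window and threshold defaults are illustrative assumptions, not a production design or tuned values.

```python
import pandas as pd

def rolling_zscore_flags(rates: pd.DataFrame,
                         window: int = 30,
                         threshold: float = 4.0) -> pd.DataFrame:
    """Score each daily close against that pair's own recent history.

    Expects one row per (pair, date) with a numeric 'close' column.
    Window and threshold are illustrative defaults, not tuned values.
    """
    rates = rates.sort_values(["pair", "date"]).copy()
    grouped = rates.groupby("pair")["close"]
    # Each pair gets its own notion of "normal" volatility instead of one
    # static, hand-maintained threshold shared across ~28K pairs.
    rolling_mean = grouped.transform(lambda s: s.rolling(window, min_periods=window).mean())
    rolling_std = grouped.transform(lambda s: s.rolling(window, min_periods=window).std())
    rates["zscore"] = (rates["close"] - rolling_mean) / rolling_std
    # Rows with too little history get a NaN z-score and are simply not flagged.
    rates["flagged"] = rates["zscore"].abs() > threshold
    return rates
```

Even this sketch runs straight into the problems described above: the window and threshold still have to be hand-tuned, pairs with thin history never get scored, a typo in a pair name silently creates a "new" pair, and nothing here captures or learns from steward feedback.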