I find the explanation somewhat lacking in credibility. It's surely inconceivable that one engineer can simply kill the whole system as well as the failover, backup, redundancy (call it what you will), just like that.
I don't think the other system/DC was taken down by power. From what I've read it's more likely that the power was cut in one DC (they run the two DCs as active:active rather than primary:backup or DR style) after the UPS system was disabled, and then power was restored in a panic (possibly botched, e.g. turned on, subsequent surge not protected by the disabled UPS, some machines go down again, data corruption, etc.). When enough of these machines were back online they started to sync partially corrupted data between the two DCs, and now you're in a world of trouble.
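To make that last failure mode concrete: replication that doesn't validate what it ships will happily spread corruption to the healthy site. Here's a minimal sketch of the kind of integrity guard you'd want in the sync path. It's entirely hypothetical; the record layout, checksum scheme and send_to_peer callback are placeholders I've invented, nothing to do with whatever BA actually runs:

```python
import hashlib

def record_checksum(payload: bytes) -> str:
    # Content hash stored alongside each record at write time.
    return hashlib.sha256(payload).hexdigest()

def replicate(record: dict, send_to_peer) -> None:
    # Hypothetical guard: refuse to sync a record whose payload no longer
    # matches the checksum taken when it was written (e.g. mangled by an
    # unclean power loss), so corruption stays in one DC instead of being
    # synced across to the other.
    if record_checksum(record["payload"]) != record["checksum"]:
        raise ValueError(f"record {record['id']} failed integrity check; not replicating")
    send_to_peer(record)

if __name__ == "__main__":
    # A clean record replicates; a corrupted one is rejected.
    good = {"id": 1, "payload": b"booking-123", "checksum": record_checksum(b"booking-123")}
    bad = {"id": 2, "payload": b"book\x00ng-123", "checksum": record_checksum(b"booking-123")}
    replicate(good, lambda r: print("replicated", r["id"]))
    try:
        replicate(bad, lambda r: print("replicated", r["id"]))
    except ValueError as err:
        print(err)
```

Without something like that check (or quorum/versioning doing the equivalent job), the "healthiest" copy of the data is simply whichever one synced last.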
Having worked in DCs where there are strict procedures for doing things, I can see exactly how this kind of thing can happen, because there's rarely any proper enforcement of those strict procedures. A bunch of stuff written down in a playbook is useless if people ignore it. A "needs two people to do something" rule is nigh on useless if both people are complacent. Mistakes will always happen. It's the same everywhere; look at "never events" in the NHS, for example.
What BA lacked was a procedure/plan (one they test frequently) for dealing with corruption like this. You can bet they'll be working towards a plan now (and then the regular testing of that plan will eventually succumb to the same complacency that caused the problem in the first place).
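The only version of "test it frequently" that survives that complacency is one that runs itself and fails loudly. A rough sketch, assuming made-up restore and verification commands ("restore-tool", "integrity-check" and "scratch-cluster" are placeholders, not anything BA-specific), of a drill you'd hang off cron/CI rather than a playbook nobody opens:

```python
import datetime
import subprocess
import sys

def restore_drill(backup_id: str) -> bool:
    # Hypothetical scheduled drill: restore a recent backup into an isolated
    # scratch environment and verify it, so a broken recovery path is noticed
    # long before a real incident. Commands below are illustrative only.
    steps = [
        # Restore into a scratch environment, never production.
        ["restore-tool", "--backup", backup_id, "--target", "scratch-cluster"],
        # Verify the restored data: checksums, row counts, critical queries.
        ["integrity-check", "--target", "scratch-cluster"],
    ]
    for cmd in steps:
        try:
            result = subprocess.run(cmd)
        except FileNotFoundError:
            print(f"{datetime.date.today()}: drill FAILED, {cmd[0]} not available")
            return False
        if result.returncode != 0:
            print(f"{datetime.date.today()}: drill FAILED at: {' '.join(cmd)}")
            return False
    print(f"{datetime.date.today()}: drill passed for backup {backup_id}")
    return True

if __name__ == "__main__":
    sys.exit(0 if restore_drill("latest") else 1)
```

The point isn't the specific tooling; it's that a recovery plan which depends on people remembering to rehearse it will decay exactly the way the two-person rule did.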