Recently a company called OptoFidelity ran some tests on the "accuracy" of the iPhone's touch screen and came away less than impressed. So, either Apple's touch screens suddenly suck, or the test needs to be put into context. Luckily, that's why we have folks like Nick Arnott. From Neglected Potential:
This comes up in testing all of the time. In order to properly test something, you need to understand what it is that you’re testing. If you don’t understand what you’re testing, then it’s easy to misinterpret the results. Every tester out there has filed a bug, only to have it explained to them why it’s actually the expected behavior for an app (and not in the joking “it’s not a bug, it’s a feature” kind of way). A critical part of our jobs as testers is not just reporting what something does, but asking why it behaves that way.

Consider this real world example. You come across a light switch that when flipped down, the lights are on, and when flipped up, the lights are off. It could be that the switch was installed upside-down. Or it could be that it’s a three-way switch and there’s another switch elsewhere that controls the same lights. In the latter case, the behavior of the switch could not be considered a bug. Arriving at that conclusion requires an understanding of what you’re testing in order to know the expected result.
Nick's point - and it's an important one - is that testers and engineers need to ask questions. A lot. And test results need to be presented in context. Always.
For example, is the iPhone's touch screen optimized for angular input, like that of fingers projecting from a hand? Does it compensate for the loss of accuracy due to throw over distance? And if so, how would that compensation show up when robots test the screen in a manner alien to the way it was designed?
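To make that speculation concrete, here's a toy sketch of what such compensation might look like. Everything here is an assumption for illustration, not Apple's actual algorithm: the function names, the fixed-offset model, and the numbers are all hypothetical. The idea is simply that a finger approaching at an angle tends to register its contact centroid below the point the user is aiming at, so software that shifts raw touches "up" would make human taps more accurate while making a perpendicular robot stylus look *less* accurate.

```python
# Hypothetical model: the OS shifts each raw touch centroid up by a fixed
# offset to correct for angled finger contact. All names and values are
# illustrative assumptions, not Apple's real implementation.

def compensate_touch(raw_x, raw_y, offset_y=4.0):
    """Shift a raw touch centroid up by a fixed offset (in points)."""
    return raw_x, raw_y - offset_y

def error_to_target(touch, target):
    """Euclidean distance between a (compensated) touch and the target."""
    tx, ty = touch
    gx, gy = target
    return ((tx - gx) ** 2 + (ty - gy) ** 2) ** 0.5

target = (100.0, 100.0)

# A human aiming at the target, whose fingertip centroid lands low:
finger_raw = (101.0, 105.0)
print(error_to_target(compensate_touch(*finger_raw), target))  # smaller error

# A robot stylus that hits the target dead-on gets pushed *off* target:
print(error_to_target(compensate_touch(*target), target))      # larger error
```

Under this toy model, the human's tap error shrinks while the robot's grows, which is exactly why a robot test could report "inaccuracy" on a screen that is, for human hands, behaving as designed.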