Accuracy Ratio / Test Accuracy Ratio (TAR)
Calibration standards are high-accuracy devices that an Instrument Technician or Engineer uses to compare and check the measurements of lower-accuracy devices and evaluate an instrument's performance.
For further clarification, let's review a simple example. Suppose the accuracy of a Pressure Gauge (UUT) is +/- 4 psi at 100 psi. The calibration standard (a reference pressure gauge) used to test the UUT would then need an accuracy of +/- 1 psi at 100 psi to maintain a TAR of 4:1.
Note: +/- 4 psi at 100 psi means that, on applying a pressure of exactly 100 psi, the UUT (Unit Under Test) must read between 96 psi and 104 psi to remain in tolerance. The standard pressure gauge (calibration standard) must read between 99 psi and 101 psi on an input of 100 psi to be considered within tolerance.
Interesting Facts About the 10:1 and 4:1 Ratios
In 1955, the U.S. Navy recognized the need for improved measurement reliability in its guided missile program. Building upon Eagle's work, Jerry Hayes set out to establish a basis for accuracy ratios versus decision risks for application in the Navy's calibration program. The practice at the time was to use a 10:1 ratio, but that value was considered unsupportable by the nation's calibration support and measurement traceability infrastructure.
Hayes settled on 4:1, but others went with the more conventional and conservative 10:1. NASA used 10:1 for all calibration and article measurement requirements through the first moon landing in 1969. After that, calibration requirements were changed to 4:1, while test measurement requirements remained at 10:1.
The 4:1 ratio requirement was developed and established as Navy policy and subsequently adopted as a requirement in military procurement standards. This ratio became what is known today as the TAR, which later evolved into the TUR (Test Uncertainty Ratio).