
The facts
The basic idea of "validating" the SFST is easy. Have trained DUI officers do the SFST on a bunch of people and see how often the SFST gives the right answer. Tell everybody how often the SFST gives the right answer. End of study.

But that's not what NHTSA did. Details vary; here's how it worked for the project I've been able to get the raw data for. NHTSA had trained DUI officers do the SFST on a bunch of people. Officers assessed suspects in other ways too. They considered their driving, interviewed them, smelled their breath, sometimes found an open bottle, maybe even had them confess. Then the officer wrote down his own personal hunch about what the person's BAC was. And NHTSA told everybody how often the officer's hunch gave the right answer.

Did I mention officers in this study carried PBTs (preliminary breath testers)? And ran PBTs on every driver? They did.

Here's the thing. The officers (with PBTs) doing the study knew the SFST doesn't work, so they ignored it. They fixed its mistakes. When the SFST gave the wrong answer, the officers often changed the wrong answer to the correct answer. Every wrong answer? No. But lots of them.

Let me show you
Here is raw data from the most recent NHTSA SFST field validation study, San Diego 1998. The picture is from one of my working Excel files, so it's not purdy. This is the SFST validation data for one officer in the study, Officer 3661. Each row has the results for one driver tested by Officer 3661.

NHTSA keeps track of three things: (1) the SFST score, (2) the actual BAC, and (3) the officer's guess about what the BAC is. I've numbered those columns.

Remember, SFST scores are not supposed to predict a specific BAC level. All they do, supposedly, is predict whether BAC is high or low. To match that theory, this table simplifies each SFST score to "Hi" or "Lo" according to NHTSA's standardized FST interpretation criteria. BAC is simplified the same way: above or below the 0.08% limit SFSTs supposedly identify.
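If you'd rather see that simplification as code, here's a minimal Python sketch. The clue cutoffs in it (4 or more HGN clues, or 2 or more on WAT or OLS, counts as a "Hi" call) are the commonly cited NHTSA scoring rules; treat them as my assumption, not a quote from the study file.

```python
# A minimal sketch of the Hi/Lo simplification. The clue cutoffs below
# (4+ HGN clues, or 2+ clues on WAT or OLS, predicts a high BAC) are the
# commonly cited NHTSA scoring rules -- my assumption, not a quote from
# the study file.

def sfst_call(hgn: int, wat: int, ols: int) -> str:
    """The SFST's own answer: 'Hi' (BAC >= 0.08 predicted) or 'Lo'."""
    return "Hi" if (hgn >= 4 or wat >= 2 or ols >= 2) else "Lo"

def bac_call(bac: float, limit: float = 0.08) -> str:
    """The ground truth: 'Hi' if the measured BAC is at/over the limit."""
    return "Hi" if bac >= limit else "Lo"

# A driver with 4 HGN clues but a measured BAC of 0.03:
print(sfst_call(hgn=4, wat=0, ols=0))  # "Hi" -- the SFST says impaired
print(bac_call(0.03))                  # "Lo" -- the breath test says innocent
```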

NOTICE
The SFST coordination test does in fact work like a metal detector. Everyone with a high BAC (column 2, red "Hi") is uncoordinated (column 1, "Hi"). But most people with a low BAC (column 2, green "Lo") are also uncoordinated (column 1, white "Hi").

For the 7 innocent people the SFST gives the correct answer only 2 times. On innocent people the SFST is 29% accurate (2 of 7).

Officer 3661's predictions were perfect. When the SFST gave the correct answer, that's the answer the officer gave too. But every single time the SFST gave the wrong answer, Officer 3661 rejected it and corrected it to the right one. Officer 3661's high-or-low BAC guesses match the PBT high-or-low results exactly.

On innocent people Officer 3661 was 100% accurate.
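Here's how those two accuracy numbers fall out of the three columns, as a quick Python sketch. The rows in it are hypothetical stand-ins consistent with the counts above (7 innocent drivers, SFST right on 2, officer right on all 7); they are not the actual spreadsheet.

```python
# Sketch: score the SFST and the officer against the measured BAC.
# These rows are hypothetical stand-ins consistent with the counts in the
# text (7 innocent drivers; SFST right on 2; officer right on all 7) --
# they are NOT the actual spreadsheet rows.
rows = [
    # (SFST call, officer call, true BAC call)
    ("Hi", "Lo", "Lo"),  # SFST wrong, officer overrides it
    ("Hi", "Lo", "Lo"),
    ("Hi", "Lo", "Lo"),
    ("Hi", "Lo", "Lo"),
    ("Hi", "Lo", "Lo"),
    ("Lo", "Lo", "Lo"),  # SFST right, officer agrees
    ("Lo", "Lo", "Lo"),
]

innocent = [r for r in rows if r[2] == "Lo"]
sfst_acc = sum(r[0] == r[2] for r in innocent) / len(innocent)
officer_acc = sum(r[1] == r[2] for r in innocent) / len(innocent)
print(f"SFST accuracy on innocent drivers:    {sfst_acc:.0%}")    # 29%
print(f"Officer accuracy on innocent drivers: {officer_acc:.0%}") # 100%
```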

In court
NHTSA reports findings like this with crafty phrasing. "Using the SFST, Officer 3661 was able to classify subjects' BACs with 100% accuracy." Whether you believe this sentence is true depends on what you think "using" means. People of good will can disagree.

Whether or not the sentence is true, it leads to false convictions of innocent people. People think "using the SFST" means doing what the test says to do.

On account of which people think SFST validation studies proved that officers, doing what the SFST told them to do, were 91% accurate at classifying drivers' BACs. And the officer in this DUI case I'm the juror on right now, he also did exactly what the SFST told him to do. So he'll be 91% accurate too.

But it's not true. Officers in the validation studies did not do what the SFST told them to do. They were as accurate as they were only because they ignored the SFST. The accuracy numbers NHTSA advertises are as high as they are only because someone changed the answers.

If your defendant's jury believes NHTSA's false SFST accuracy claims, they will believe the SFST is more accurate than the science shows it really is.

Data prove validation study officers did SFSTs, but did not use them.
A quick look at the raw validation study data proves that officers did not (and could not have, even had they wanted to) base their BAC estimates on standardized SFST interpretation criteria. Here's how we know...

1. Officers' estimates more precise than SFST criteria allow.
In the San Diego study thirteen drivers failed the HGN test and passed both the OLS and WAT tests. These are their SFST results, and the officer's estimate of each driver's BAC.

Notice these thirteen drivers had identical SFST scores. According to the standardized FST interpretation criteria, each driver should have had a BAC estimate of ">=0.08". Instead, officers came up with nine different BAC estimates.

What's more, instead of the SFST's two standardized BAC estimates ("<0.08" or ">=0.08"), officers were somehow able to estimate BAC levels to 1 part in 100. There were then, and are now, no standardized FST interpretation criteria for estimating BAC to 1 part in 100.

Officers did not (and could not have, even had they wanted to) rely on these identical SFST scores to come up with their nuanced, 1-part-in-100 BAC estimates. What's more, the officers somehow knew almost exactly which SFST results to throw out. All thirteen of these drivers failed the SFST. Yet officers estimated that six of them had BACs in the legal range, flatly contradicting the SFST. Five of those six in-the-legal-range estimates were correct. How'd officers do that? How did officers know almost exactly which SFSTs to ignore?
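If you get your hands on the raw data, this check takes a few lines. Here's a sketch, with hypothetical rows and my guesses at column names standing in for the real file:

```python
# Sketch: flag BAC estimates the standardized criteria can't explain.
# The column names and rows are hypothetical stand-ins for the raw file's
# layout; the real check would load the San Diego data instead.
import pandas as pd

df = pd.DataFrame({
    "hgn": [4, 6, 4, 5],                      # HGN clues observed
    "wat": [0, 1, 0, 1],                      # Walk-and-Turn clues
    "ols": [0, 0, 1, 0],                      # One-Leg-Stand clues
    "officer_bac_est": [0.04, 0.12, 0.07, 0.10],
})

# Failed HGN (4+ clues), passed WAT and OLS (<2 clues each): identical
# SFST outcomes, so the criteria dictate one identical estimate, ">=0.08".
same_score = df[(df.hgn >= 4) & (df.wat < 2) & (df.ols < 2)]

print(len(same_score), "drivers with the identical SFST result")
print(same_score.officer_bac_est.nunique(), "distinct officer BAC estimates")
print((same_score.officer_bac_est < 0.08).sum(), "estimated in the legal range")
```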

2. Officer 3661 was not alone.
The truth is, officers systematically ignored SFST results. Correct results were accepted; incorrect results were rejected.

This graph shows when officer BAC estimates and SFST results agreed and disagreed.
Data is from the San Diego Field Sobriety Test validation study.

 

EXPLAINING THE GRAPH
This graph shows which drivers' SFST scores were ignored by police officers. Each point represents one driver: FST score on the x-axis, BAC on the y-axis. Drivers above the dark 0.08 line were impaired as a matter of law. Drivers below the dark line were innocent. Open dots and open squares represent drivers whose SFST result, pass or fail, agreed with the officer's BAC estimate.

Every dark square represents a driver whose SFST result was rejected by the officer. Dark squares below the 0.08 line are drivers who failed the SFST, but whom the officer correctly assessed as innocent. Dark squares above the line are impaired drivers who failed the SFST, but whom the officer incorrectly assessed as innocent. (Squares stack, so you can't count visible squares to get totals. Of 59 false-positive SFSTs, officers rejected 35, or 59%.)

Dark squares below the 0.08 line represent SFST mistakes corrected by the officer.
Dark squares above the 0.08 line represent correct SFST calls mistakenly rejected by the officer.
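
For the curious, here's a minimal sketch of how a plot like this gets built from the three columns. The points in it are made up for illustration, and I've simplified to one open marker for agreement instead of the study's separate dots and squares.

```python
# Sketch: how a plot like this is built from the three columns. Points are
# made up for illustration; one open marker stands in for agreement.
import matplotlib.pyplot as plt

# (combined FST score, measured BAC, officer called "Hi") -- hypothetical
rows = [(5, 0.03, False), (7, 0.12, True), (6, 0.04, True),
        (8, 0.15, False), (2, 0.02, False), (9, 0.19, True)]

LIMIT = 0.08
for score, bac, officer_hi in rows:
    sfst_hi = score >= 4                 # assumed pass/fail cutoff
    agreed = (sfst_hi == officer_hi)
    plt.scatter(score, bac,
                marker="o" if agreed else "s",
                facecolors="none" if agreed else "black",
                edgecolors="black")

plt.axhline(LIMIT, color="black", linewidth=2)  # the 0.08 legal limit
plt.xlabel("FST score")
plt.ylabel("BAC")
plt.show()
```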

WHAT THE GRAPH SHOWS
Officers ignored the SFST when it gave the wrong answer, but not when it gave the correct answer. When the SFST gave the wrong answer, officers rejected that wrong answer a whopping 59% of the time. When the SFST gave the correct answer, officers ignored that answer only 2% of the time. This distribution of rejections cannot have happened randomly. Officers systematically ignored the SFST.
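Don't take "cannot have happened randomly" on faith; it's a standard two-by-two test. Here's a sketch, using the 35-of-59 count from the graph above and back-of-envelope assumptions for the correct-SFST counts:

```python
# Sketch: check "cannot have happened randomly" with Fisher's exact test
# on the 2x2 table of rejections. The 35-of-59 figure is from the graph
# above; the correct-SFST counts are my back-of-envelope assumptions
# (roughly 238 correct calls, about 2% of them overridden).
from scipy.stats import fisher_exact

#                 rejected  accepted
wrong_sfsts   = [      35,       24]   # 59 false positives, 35 overridden
correct_sfsts = [       5,      233]   # assumed: 238 correct, ~2% overridden

odds_ratio, p_value = fisher_exact([wrong_sfsts, correct_sfsts])
print(f"p = {p_value:.1e}")  # vanishingly small -- not a chance pattern
```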

The only way officers could have known which SFSTs to ignore and which to accept was to use some other method to assess driver impairment in every case. The data prove FSTs are extremely inaccurate: so inaccurate that officers in NHTSA's own validation studies simply ignored the test's results.

The project reports reveal that answer switching certainly happened in all three NHTSA SFST field validation projects: Colorado 1995, Florida 1997, and San Diego 1998. Because NHTSA refuses to release the data for the first two studies, we can't be sure the answer switching there improved the SFST's "accuracy." To be sure, we would need the study data. The one study we do have data for proves the "accuracy" bump, as I've explained.

Again, this is all about science, not people. I'm not saying the police officers lied or cheated. They didn't. They were real officers doing real DUI stops. They did exactly what they were asked to do, and exactly what they should have done: not arrest innocent people. I'm also not saying NHTSA's contractors sneaked into the lab late at night and changed the data. They didn't. And I'm not saying the design of the study was deliberately deceptive. I do not know, and I do not have an opinion about that. I don't care whether the error was deliberate or accidental. Water under the bridge. I care about the science. The science is flawed.

Read this web site while you can. Best I can tell, I am under ongoing threat from NHTSA contract SFST scientist Dr. Jack Stuster for exposing the scientific errors you are reading about here.

This web site is about science: NHTSA's SFST validation science. I do not know, I do not care, and I do not have an opinion about Dr. Jack Stuster's knowledge or intentions at any time ever in his life. I'm not even saying he had knowledge or intentions. But if he did, this web site isn't about them. Or him.