Fancy term. Simple concept.
The operational definition is a cornerstone of the scientific method. Its purpose is to prevent spin.
It means this: if you run an experiment, or create a social program, you need some way to figure out whether it worked. You don’t want the answer to be subjective. You want it to be objective.
What you need is a metric, which means a measurement that can be expressed in numbers. That metric is the operational definition of your results. No scientific experimenter can publish results unless he has a well-defined metric based on a well-defined operational definition.
In a classic experiment designed to determine at what age a baby can recognize its mother’s face, "how happy the baby looks" was not an acceptable criterion. It’s too subjective.
The metric actually used was the number of smiles per minute. The number of smiles per minute was the operational definition of whether the baby recognized its mother.
In similar fashion, for the teenage pregnancy program discussed in the last chapter, volunteers’ perceptions of how well things were going are not a good metric. Neither is the number of lectures given or the number of brochures handed out, because those are measures of effort put in, not results achieved.
The only legitimate metric for that program was the change in the number of teen pregnancies.
The government should never initiate a program until it has figured out — in advance — how to measure and report on the results.
And you shouldn’t believe the results until you see them reported as per that metric. And until you’re sure the number wasn’t faked, as in the case of the immigration program.
If you say to your kid, "I’m cutting your allowance until you do better in school," you’re asking for trouble in the form of raging arguments over whether she in fact improved.
But if you say, "I’m cutting your allowance, but will restore it (and then some) if your grade point average is at least 2.75 for the next term," you might get an argument about whether that criterion is fair, but after that, the numbers speak for themselves.
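The allowance rule above is an operational definition in miniature: once the criterion is written down as a number, the verdict is mechanical. A minimal sketch (the grades here are made up for illustration, and the 2.75 threshold comes from the example):

```python
def allowance_restored(gpa: float, threshold: float = 2.75) -> bool:
    """Objective criterion: 'doing better in school' operationally
    defined as a term GPA at or above the agreed threshold."""
    return gpa >= threshold

# Hypothetical term grades on a 4.0 scale
grades = [3.0, 2.5, 3.5, 2.0]
gpa = sum(grades) / len(grades)  # 2.75
print(allowance_restored(gpa))   # prints True: 2.75 meets the 2.75 threshold
```

The point is that the only remaining argument is over the threshold itself; given the grades, the outcome is not open to interpretation.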
Take the so-called "War on Drugs." The administration’s official position on the results, as expressed in a recent State of the Union address, is this: "We are winning the war on drugs."
Maybe we are, and maybe we aren’t, but you can’t tell from that report. There’s no metric.
What we need to hear is something like this: "Emergency room visits for drug overdoses are down 11%. The number of new applications for methadone programs is down 8%. Arrests for crack cocaine are up 38%. New-employee positive drug tests were down 7%."
These metrics would tell you that we are winning the war on drugs.
Fortunately, we do have real metrics for these and other indicators of drug usage.
Unfortunately, they all show that we are losing the war on drugs.
Which is why you don’t hear about them in speeches by the present administration.
This is not to say that honest numbers cannot be open to interpretation. For example, is it a good sign that arrests for sales of crack cocaine are up? Or is it a good sign when they’re down?
If they’re up, that can be interpreted as either a) The police are working harder, which is good news, or b) More people are using crack, which is bad news.
That’s a valid argument, and there are some other statistics that can help us get a handle on what it all means.
But without an operational definition, and results reported as metrics, you can’t even start the argument.
Anything else is just spin.