On the Reliability of Scent Detection K9 Teams

A few weeks ago, a customer challenged me on the reliability of bed bug dogs, citing what she described as an “EPA study”.  When I asked her for a reference to the information she was basing her ideas on, she said she would email me a link but never did.

Yesterday, I heard a colleague describe canine scent detection as unreliable. When I challenged him on it, he brought up the same tired studies that K9 detractors like to use to justify their positions. So I thought it would be worthwhile to explore these studies and describe how useful they are, and in some cases are not, and why. That way my customers can be better informed on the subject, and my fellow detection dog handlers will have an article they can share with anyone who attempts to use one of these studies to justify their conclusions about the reliability and effectiveness of scent detection K9 teams.
If you are a professional K9 handler, if you are in charge of a professional K9 team, or if you are a consumer of the services of a professional K9 handler, it is important that you understand what these studies really say, so that you’ll recognize when someone is trying to misrepresent their results.
The first study I want to discuss is what most of us recognize as the 98% study.  This study, conducted at Florida State University, is why many dog handlers cite numbers like 97%, 97.5%, and 98% when describing the effectiveness of their dogs.  The truth is they shouldn’t be doing that.  That study was conducted under laboratory conditions, and it is a great answer to the question “How effective can scent detection dogs be in controlled conditions?”  But it does a poor job of indicating how effective our dogs can be in the field, because the conditions we encounter when working in the field are far from controlled, or even controllable.  You won’t hear me quoting those numbers to customers, and you shouldn’t hear anyone quoting those numbers to customers, because that study does not give an accurate picture of the reliability of K9 teams working in the field.
The Rutgers Study:
This study is used to try to prove Scent Detection K9 teams are not reliable.  The colleague I mentioned earlier attempted to use this study to back up his position that “while K9 teams can be a powerful tool in some circumstances, overall they are generally unreliable.”  The problem with using this study to prove that position is that the K9 teams involved were not properly selected.  Please allow me to explain…
The study seems to try to answer the question “How accurate (in terms of percentage) can scent detection K9 teams be in the field?”  There are several problems with the way the researchers went about attempting to answer that question.
1.  Participant selection:  Imagine for a moment you wanted to conduct a test to see how fast a human can run.  What criteria would you use to select the participants?  Certainly the results of the experiment would vary widely depending on that selection.  If the goal was to test how fast a person could run, would it make more sense to use me as a participant, or Jesse Owens?
If the goal of your study is to determine how reliable/accurate a dog team can be, would you select some random unknown dog team to participate in the study, or would you select the “Jesse Owens” of dog teams?
I don’t know why Mr. Cooper and Mr. Wang didn’t create a system for selecting participating K9 teams known for excellent results, and I will not speculate on their motives.  What I do know is that the criteria they used for selecting the participating teams were unsatisfactory; therefore, the study is unable to answer the question that most people attempt to use it to answer.
2.  Very small sample size:  If Mr. Cooper and Mr. Wang wanted to get a good idea of the reliability of whatever dog team might show up at a customer’s door, they would have had to use a much larger sample than just seven dog teams.  https://explorable.com/type-i-error
3.  Faulty method of confirming K9 results:  Simply looking for bed bugs and not finding them does not mean they are not there.  They are hard to find; that’s why we use dogs to help us find them.  Using a passive monitor for 14 days also does not mean there are no bed bugs present.  It simply means no bed bugs have crawled into the monitor.  This problem is why the Florida State study was conducted in controlled conditions.  When a dog alerts and you aren’t able to visually confirm the alert, that doesn’t mean the dog is wrong.  There could be bed bug odor there that you aren’t aware of.
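The sample-size point can be made concrete with a little statistics.  Here is a quick sketch in Python (the 6-of-7 success figure is hypothetical, chosen only to illustrate the math): even if nearly every team in a seven-team study performed well, a standard 95% confidence interval for the underlying accuracy rate is far too wide to support any precise reliability claim.

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - margin, center + margin

# Hypothetical result: 6 of 7 teams perform accurately.
# The plausible range still spans roughly 50 percentage points.
lo, hi = wilson_interval(6, 7)
print(f"95% CI for accuracy: {lo:.2f} to {hi:.2f}")
```

With only seven teams, the data are consistent with anything from a coin-flip to near-perfect accuracy, which is exactly why the study cannot answer the question people try to use it to answer.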
The great thing about this study is that it demonstrates the need for peer review.  If you wish, you can engineer a study to suggest anything you want it to.  The peer review process isn’t perfect, but it helps prevent studies and experiments from being published when they can’t withstand the scrutiny of fellow scientists.  When attempting to use a study to affect public policy, to impact company policy, or to help protect consumers, you would do well to stick to peer-reviewed studies.
The UC Davis Study:
Finally, a study that is useful!  This study confirmed what skilled dog trainers and handlers have always known:  if you aren’t extremely careful, you can unknowingly cue the dog to the location of the alert.  That’s why we train and certify using a “double blind” protocol.  If no one in the room where the dog is working knows the location of the find, or whether there is even something to find, there is no way anyone can inadvertently cue the dog where to alert.  Lisa Lit’s study says nothing about how reliable a properly selected and properly trained K9 team can be.  It does highlight the importance of “double blind” testing in certifications, and of training using double blind exercises.
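To see why the double blind protocol matters, here is a toy Monte Carlo simulation.  All of the rates in it are made up for illustration; the point is only the mechanism: when the handler knows where the hide is, inadvertent cues can rescue some of the dog’s misses, inflating the team’s apparent accuracy above what the dog’s nose actually delivers.

```python
import random

random.seed(42)

def run_trials(n, dog_accuracy, cue_rate, handler_knows):
    """Simulate n searches, each with one hide. The dog finds it on its
    own with probability dog_accuracy; if the handler knows the hide
    location, an inadvertent cue rescues a miss with probability cue_rate."""
    correct = 0
    for _ in range(n):
        if random.random() < dog_accuracy:
            correct += 1                 # found by the dog's nose
        elif handler_knows and random.random() < cue_rate:
            correct += 1                 # "found" via handler cue
    return correct / n

# Hypothetical rates, for illustration only
single_blind = run_trials(10_000, dog_accuracy=0.80, cue_rate=0.60, handler_knows=True)
double_blind = run_trials(10_000, dog_accuracy=0.80, cue_rate=0.60, handler_knows=False)
print(f"handler knows hide location: {single_blind:.0%}")
print(f"double blind:                {double_blind:.0%}")
```

Only the double blind number reflects the dog’s real ability, which is why certifications run that way.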
I heard David Lattimer suggest that the next time someone attempts to engineer a study or experiment to answer the question “How reliable and effective are scent detection K9 teams, or how reliable and effective can they be?”, they should have an expert in animal behavior participate in designing it.  I hope they do that someday, but until then, my customers still have bed bugs feeding on them at night.  I would put a well-trained K9 team, certified through a double blind test, up against any other bed bug detection method available, and I would put my money on it.  A properly selected, well-trained K9 team is the best tool we have for finding bed bugs.  No other tool available to us even comes close.  The next time someone says to you, “Well, I’ve seen technicians follow behind K9 teams and find bugs the dogs missed,” tell them you aren’t swayed by anecdotal evidence and that they should put their money where their mouth is.  Let’s find a way to fund a study that will withstand peer review and finally answer the question definitively.
Until that happens, I’m pretty comfortable relying on the 150+ years of police officers and military service members trusting scent detection K9s with their lives, and on the double blind certification test my dog and I have passed, to protect my customers from blood-sucking parasitic insects.
If you know of a relevant study not listed here, or other relevant information that should be presented here, please email it to me and I’ll add it to this list.