Our piece, Auditing and Food Safety, brought this note from a longtime Pundit contributor:
In your Pundit 1/4/12 article, ‘Auditing and Food Safety,’ the word ‘transparency’ is used to refer to the rapid communication of producer audit failures, low scores, and comments to the buyer.
Shouldn’t the word ‘transparency’ also include the rationale behind the components of the audit itself? If it is limited to communication of audit results, then this suggests that the audit is entirely a verification tool: the audit says this is what you should do, or the producer says this is what I’m doing, and the purpose of the audit is to make sure the producer is doing what the audit says, or what the producer says, should be done. What is the basis for inclusion of the item being verified? In other words, what constitutes adequate validation of the effectiveness of the procedure?
The answer one most often hears is that it must be ‘science-based.’ The criteria for being science-based can range from what a recognized scientist says is true to a statistical risk analysis, which can get terribly complicated and, since few of us are statisticians, may tend to loop back to what the recognized scientist says. Or, more commonly, to what is traditionally done, on the assumption that if it has been done in the past, it must make sense.
We muddle through.
— Bob Sanderson
Jonathan’s Sprouts
Rochester, Massachusetts
Bob raises a very good point. Very often, people require audits because they want their products to be safe, because they wish to mitigate liability, and because audits may be required by law, regulation or clientele. Very frequently, they really don’t know what is being audited or why. They just know they need an audit.
Audit-standard developers and auditors themselves sometimes feel the need to justify their product by producing large and impressive checklists and reports, without much evidence that the individual items have much efficacy in preventing foodborne illness.
Harmonization efforts often resolve differences by simply doing more. So if audit-standard developer Smith thinks Question A is vital, and audit-standard developer Jones thinks that doesn’t matter, but Question B is essential, the resolution is often to ask both A and B and thus harmonize the standard.
The temptation is also always there to audit those things that are easy to audit and ignore those things that may be more important but are difficult or impossible to audit.
As Bob implies, the notion that every question asked on an audit is “science-based” is a bit of a stretch. There are very few controlled studies that have ever been done, and our knowledge of pathogens and their behavior is very limited, so the notion that “science” compels a 25-foot buffer zone and not a 20-foot or 30-foot buffer zone is a very hard case to make.
Of course, following the spinach crisis of 2006, there was a conscious decision not to let the limitations of knowledge be an excuse for inaction. The thought was that the status quo was unacceptable, and so we would, as an industry, have to do the best we could to improve food safety even while launching initiatives such as the Center for Produce Safety to improve the science and increase our knowledge.
Still, audits are burdensome, and it wouldn’t be a bad idea for audit-standard developers to have to publish a justification next to each question or line item. Why is this here? What will this prove? How will it enhance food safety?
The Rocky Ford cantaloupe outbreak led many to learn that audits are typically not done to determine whether every possible thing that can enhance food safety has been done, or even to ascertain whether a facility is world-class. Audits are typically done just to confirm that a facility operates to industry standards, although the definitions here can be slippery.
Making someone explain how those standards make food safer isn’t the worst idea one could proffer.
Many thanks to Bob Sanderson of Jonathan’s Sprouts for contributing to the industry discussion on this important issue.