The Scientific Working Group on Digital Evidence (SWGDE), in response to a series of articles by John Barbara that appeared in Forensic Magazine, has taken a very interesting stance. In a published document, the SWGDE claims that computer forensics is different from other forensic sciences because in computer forensics "false positives are non-existent", and that controls are therefore not applicable to the field.
I am deeply troubled by what I consider to be a false belief system: that computer forensics and its tools are infallible. This position is not supported by the larger scientific community, and numerous counterexamples exist (e.g., orphaned files and folders in NTFS, misinterpretation of carved data).
Equally disturbing is the notion that has been proffered that using a hashing algorithm to verify the integrity of a forensic copy of the original somehow serves as a control against false positives at the data abstraction and presentation layers during the analysis and examination phases.
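To make the distinction concrete, here is a minimal sketch (in Python, with hypothetical file paths) of what hash verification actually establishes. It confirms that the copy is bit-for-bit identical to the original; it says nothing about whether a tool later interprets those bits correctly.

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file and return its SHA-256 hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical paths, for illustration only.
original = sha256_of("/dev/sdb")             # the seized media
image    = sha256_of("evidence/disk.dd")     # the forensic copy

# A match proves acquisition integrity -- nothing more. Errors in how a
# tool parses file systems, carves data, or presents timestamps are not
# detected by this check.
print("verified" if original == image else "MISMATCH")
```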
Most examples of false positives stem from errors in the data abstraction layer. Because we rely on software tools to abstract the data for us (we cannot read the raw ones and zeroes directly), an error in the tool is problematic: we trust the tool's output. To date, no commercial computer forensic tool vendor has been willing to share the error rates of its tools, so we are left to experiment and try to determine these for ourselves.
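One rough form such an experiment can take, sketched below in Python with hypothetical export files, is to run two independent tools over the same image, export a (path, hash) listing from each, and compare them. Any disagreement means at least one tool abstracted the same bits differently.

```python
import csv

def load_listing(path):
    """Load a {file_path: md5} map from a tool's CSV export.
    Assumes columns named 'path' and 'md5'; adjust to the real export format."""
    with open(path, newline="") as f:
        return {row["path"]: row["md5"].lower() for row in csv.DictReader(f)}

tool_a = load_listing("tool_a_listing.csv")   # hypothetical export from tool A
tool_b = load_listing("tool_b_listing.csv")   # hypothetical export from tool B

only_a = tool_a.keys() - tool_b.keys()
only_b = tool_b.keys() - tool_a.keys()
hash_mismatch = {p for p in tool_a.keys() & tool_b.keys() if tool_a[p] != tool_b[p]}

# Non-empty sets below are candidate abstraction-layer errors to investigate.
print(f"files seen only by tool A: {len(only_a)}")
print(f"files seen only by tool B: {len(only_b)}")
print(f"files with differing hashes: {len(hash_mismatch)}")
```

This does not yield a true error rate, but it at least surfaces discrepancies that vendors' published documentation does not.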
I have weighed in on this issue with the SWGDE (full disclosure: I am a non-voting academic associate member). Since the SWGDE has publicly released its position paper, I think that, in the spirit of open discussion and debate, the rest of the digital forensics community needs to weigh in as well. I believe this is a watershed issue and it needs to be addressed.
Here is the link to the SWGDE position paper: