Legal Ramifications of Digital Imaging in Law Enforcement, by Berg (Forensic Science Communications, October 2000)
October 2000 - Volume 2 - Number 4
Erik C. Berg
Forensic Services Supervisor
City of Tacoma Police Department
Some of the information in this article was presented in 1997 at the International Association for Identification’s (IAI) 82nd Annual Training Conference in Boston, Massachusetts. At the time, only a handful of law enforcement agencies were using digital imaging for casework. Of those, the majority were using it only for specialized work, such as the enhancement of latent fingerprints. Interest in digital imaging was growing rapidly. The IAI passed Resolution 97-9, which recognized digital imaging as a scientifically valid and proven technology for the first time. This offered legitimacy to the technology and encouraged its adoption among the members of the IAI. Since that time, digital-imaging technology has spread to nearly every major law enforcement agency in the United States. The capability of the technology has continued to expand at an amazing rate. The use of digital cameras is no longer restricted to capturing latent fingerprints or other specialized purposes. More and more agencies are choosing digital capture systems to eliminate the need for film-based imaging systems. The potential for budgetary savings and the ability to deliver images faster continue to drive more law enforcement agencies toward a conversion to digital imaging. Rapid moves toward any technology, without adequate research and planning, or the training needed to demonstrate competency and proficiency, can leave an agency vulnerable to misinformation and legal challenges.
An article entitled Digital Imagery in the Courtroom, published by Kodak (1999) on the Internet, states that “Imagery is not evidence.” Most experienced forensic photographers would challenge the accuracy of this statement, because the author fails to recognize the difference between an image or chart introduced for illustrative purposes and an image submitted as evidence. According to Blond’s Evidence (Blond et al. 1994), photographic evidence can be authenticated by two methods, depending on the type of imagery. The traditional method is to consider images as “illustrative of a witness’ testimony.” Given the advances in imaging technology, many jurisdictions have adopted an alternative method on the basis of the silent witness theory, which states that photographic evidence “speaks for itself” and is thus admissible through testimony that establishes how it was produced. It is ironic that one of the first sentences in Digital Imagery in the Courtroom states that there is “much confusion” about the admissibility of digital images in court. Most of that confusion stems from a lack of knowledge about both what the technology can do and how the legal system scrutinizes new methods.
Any agency contemplating digital imaging technology would be wise to seek out as much information on the subject as possible. Particularly relevant is how the local magistrate is going to react the first time a digital image is introduced in court. Every state and federal court in the United States has a published set of court rules that, among other things, outlines the process of how various types of evidence and testimony can be received by the jurisdiction considering the case. Black’s Law Dictionary (Nolan and Nolan-Haley 1990) devotes two pages to the subject of evidence and includes a brief discussion of how evidence is authenticated. Prosecuting attorneys as well as defense attorneys are excellent sources of information about how a new technology might be received if it were introduced in a case they were involved in. Some attorneys will even tell you how they might attack the evidence.
In general, attorneys treat anything new and lacking in previous case law with suspicion. Scientific evidence is admissible only if the principle upon which it is based is “sufficiently established to have gained general acceptance in the field to which it belongs” (United States v. Kilgus 1978). From 1923 until 1993, the process for determining whether a scientific process had gained general acceptance was determined in a Frye hearing (Frye v. United States 1923). In 1993, the federal courts adopted a more restrictive standard known as Daubert (Daubert v. Merrell Dow Pharmaceuticals, Inc. 1993). Since that time, a number of state courts have also adopted Daubert as the standard they use for determining the admissibility of scientific testimony. How this will affect digital imaging remains to be seen, but it is likely that we will see more scrutiny by the court rather than less.
Cost is always a consideration when making a decision to replace one tool with another, but the real questions that need to be answered are: What is the most effective tool for the job at hand? What are the consequences of that decision? An unidentifiable latent fingerprint lifted from a crime scene is of virtually no value. A photograph of that same latent fingerprint is equally useless; it just costs more to recover. A digital image of that fingerprint costs very little to capture, but if it is subsequently enhanced and then identified as a result of the enhancement process, any material costs associated with the capture of that fingerprint become irrelevant.
A lack of understanding or misinformation about the use of digital imaging can contribute to poor budget decisions and wholly inadequate statements about policy and procedure. This can leave an agency or potential witness in a precarious position when it comes time to testify in court. The basis of any expert testimony is predicated upon superior knowledge of or possession of specialized experience with a particular subject.
[Figure: As the analog, or continuously variable, rain falls across the glasses in a random pattern, the water is collected and measured. Light falling upon a CCD produces a pattern of electrical charges, which are converted to numbers and then stored.]
The process of creating a digital image is a simple principle to understand. Light falling upon a grid of detectors known as a charge coupled device, or CCD, produces a pattern of electrical charges that are measured, converted to numbers, and then stored. The process can be analogous to a group of water glasses set out in a grid pattern to collect rainwater. The amount of water collected in each glass will vary according to the pattern of the rainfall at any given point above the glasses. Once the rain has stopped, the glasses are collected and carried one at a time, much like a bucket brigade, to a metering device. The metering device measures the quantity of water collected in each glass, converts the amount to a number, such as six ounces, and then records the amount before moving on to the next glass.
In a digital image, the quantity of light collected by each sensor, or glass, is measured by the metering system and recorded as a number. This number is stored in the same grid position as the sensor that collected the sample. As each subsequent sample is measured, its value is stored according to its position in the original grid of sensors, or glasses. Each value in the grid corresponds to a picture element (also known as a pixel) in the digital image. Each pixel is displayed on a computer monitor, beginning in the upper left corner, by a tiny light bulb that takes on a particular color or shade of gray according to the number recorded for the grid point it represents. The computer reads the number assigned, checks an index of colors to determine its color equivalent, and then adjusts the corresponding light bulb (pixel) on the computer screen to match the color value. The computer then moves on to the next pixel in the grid, repeating the process until all the pixels in the image are displayed on the computer screen. The end result is a mosaic of dots that fit together and represent the original scene.
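The grid-of-numbers model above can be sketched in a few lines of code. The following is an illustrative toy, not any vendor's display routine: a small grid of 8-bit sample values (the grid, the values, and the grayscale color index are all invented for this example) is walked from the upper left, and each stored number is looked up in a color index to produce the color painted at that pixel.

```python
# Toy sketch of displaying a stored grid of samples as pixels.
# The 3x3 grid and the grayscale "index of colors" are illustrative only.

grid = [
    [0, 128, 255],   # samples, left to right, starting at the upper-left corner
    [64, 192, 32],
    [255, 0, 16],
]

def index_color(value):
    """Look up the display color for a stored 8-bit value.
    Here the index is simple grayscale: all three channels equal."""
    return (value, value, value)

def render(grid):
    """Walk the grid row by row, converting each stored number to the
    color its on-screen pixel should take."""
    return [[index_color(value) for value in row] for value_row_unused, row in enumerate(grid) and [] or [(None, r) for r in grid]]

pixels = [[index_color(value) for value in row] for row in grid]
# pixels[0][0] is the upper-left pixel: (0, 0, 0), displayed as black.
```

The end result, as the text says, is a mosaic: each tuple in `pixels` is one dot of the reconstructed scene.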
[Figure: Pixel code values from a typical digital image. Code values shown are in hexadecimal. The equivalent gray tone is shown for selected pixel values.]
The illustration at right shows a portion of a digital image in numerical form with the gray value of six different pixels represented by an enlarged circle. The accuracy and detail of a digital image are dependent upon both the resolution of the optical system used in the capture device and the density of the sensors on the CCD. Generally, more sensors means a more accurate and detailed image representation.
Photographic film records an image using light-sensitive collectors made of silver halide crystals. As light strikes the surface of the film, individual particles of silver halide change chemically. The longer the exposure, the greater the change. When the film is processed, exposed silver halide is reduced to fine silver particles in proportion to its exposure to light. The final negative represents the original image as a mosaic of fine, irregular particles of silver. The accuracy and detail of the image are dependent upon the resolution of the optical system used in the camera, as well as the density and size of the individual silver halide crystals in the film. By populating a piece of film with finer particles of silver halide, it becomes possible to record more individual samples of light within the same area of film. In general, just as a CCD with more sensors can record a more accurate and detailed image, so too can a piece of film that is populated with fine rather than coarse silver halide particles. The actual process of capturing an image on photographic film is a bit more complex than the simplified explanation provided here, but for the purposes of this article, it should be adequate to show the similarities between a typical CCD and photographic film. The similarities should be enough to dissuade anyone who has used a camera from believing a digital image is the result of some novel scientific process. Few would dispute the ability of photographic film to record an image accurately. The world has taken photographs for granted since the first daguerreotype was presented on August 19, 1839 (Crawford 1979). Digital images, on the other hand, still carry a certain mystique when shown to the average citizen, and their origin and validity are looked upon with some suspicion. This is especially true when an image is introduced into a legal proceeding. Defense attorneys and judges tend to be distrustful of technology, and jurors are easily confused by technical explanations, placing more credibility upon what they can see and touch.
The general population watches satellite views of the changing weather patterns above their neighborhoods, displayed every night on the evening news, without realizing they are looking at digital images captured by digital cameras mounted on a platform floating in space. Few would question the accuracy of the images depicted, and most take them for granted as they plan their day around the information the images provide.
Early adopters of imaging technology include forensic professionals Alan McRoberts of the Los Angeles County Sheriff’s Department, Ed German of the U.S. Army, William Watling of the Internal Revenue Service, and Brian Dalrymple of the Ontario Provincial Police Department. Since early 1987, when Alan McRoberts presented imaging as a way to enhance latent fingerprints at an FBI symposium, digital imaging has gained widespread acceptance among the many professionals working within the forensic sciences. As the use of digital imaging becomes more common among law enforcement agencies throughout the United States and Canada, acceptance by the legal community will be the result of proper procedure and conformity to established rules of evidence. One of the more important rules that need to be considered when using digital imaging technology to collect physical evidence is chain of custody.
An early and defining case, Albert Lopez Gallego v. United States of America, decided in the Ninth Circuit Court of Appeals on March 23, 1960, helped set the standard for admission of physical evidence. The defendant, Albert Lopez Gallego, was convicted of unlawful importation of marijuana. At issue was a can and bag containing marijuana. The defense asserted the marijuana evidence was unaccounted for between the confiscation of evidence from the defendant and its subsequent introduction at trial. The Court of Appeals applied the following litmus test:
Before a physical object connected with the commission of a crime may be properly admitted in evidence, there must be a showing that such object is substantially in the same condition as when the crime was committed. Factors to be considered in making the determination of whether a physical object connected with the commission of a crime is in substantially the same condition as when the crime was committed, so that it may be admitted in evidence, are nature of article, circumstances surrounding its preservation and custody, and likelihood of intermeddlers tampering with it; if upon consideration of such factors the trial judge is satisfied that, in reasonable probability, the article has not been changed in important respects, he may permit its introduction as evidence.
Why this particular test is relevant to digital imaging will be shown in the following discussion.
On August 21, 1986, the Texas Court of Appeals affirmed the conviction of defendant Gary Wayne McEntyre (Gary Wayne McEntyre, Appellant, v. The State of Texas, Appellee) for solicitation of capital murder. Several issues were addressed in this case, but the issue most relevant to digital imaging was the admissibility of audio recordings of McEntyre’s conversations with an undercover informant. McEntyre claimed that a seven-minute gap in the audio recording was proof that the tape had been altered. The State of Texas claimed that the gap in the recording was a result of interference and that the recording had not been changed or altered in any way.
The appellate court applied several tests to determine the admissibility of the tape recording:
The particular case in which a recording is offered is part of the circumstances to be considered in determining whether chain of custody has indisputable fundamental trustworthiness necessary for the recording’s admission into evidence. The burden is on the proponent to establish the necessary predicate for admission of the tape recordings. If alteration of a tape recording offered in evidence is accidental and is sufficiently explained so that its presence does not affect the reliability and trustworthiness, recordings can be admitted. When the defendant makes an attack on the chain of custody of a tape recording offered in evidence, the burden of disproving authenticity is not shifted to the defendant.
The court in this case used a much narrower criterion for determining proper chain of custody. Gallego stated that evidence must be in substantially the same condition. McEntyre defined chain of custody, when applied to tape recordings, as “demonstrating to the court that nothing occurred that would affect the trustworthiness or reliability of the tapes.”
The finding in this case was that the tape recordings of conversations between the party equipped with a police transmitter and the defendant regarding solicitation of murder were not sufficiently altered to make the recordings inadmissible. This was supported by the recorder operator’s testimony that unclear portions in the recording were due to interference, that the recording had not been changed or altered in any way, and that the seven-minute gap in the tape was due to a loss of transmission from the microphone. Police testimony established that the tapes were kept in a locked steel cabinet in the police department’s evidence room and that when the tapes were signed out by district attorney staff, no one else had access to them. The court found that the state established proper chain of custody of the recordings of conversations between the informant and McEntyre and that the recordings were admissible in the defendant’s trial for solicitation of murder.
Chain of custody can be one of the most difficult issues faced by the forensic professional trying to introduce a digital image as evidence in a criminal case. If a defendant alleges an image has been altered, or could have been altered, the burden of proof falls upon the state to prove otherwise. If the image is a fingerprint linking the defendant to a crime scene, it is inevitable that the defense attorney will raise a question about the integrity of the image. In many cases, the success of the argument will hinge upon the procedures used to safeguard the security of the images.
Until a digital image is either printed or displayed on a computer screen, it has no visual form. It is completely dependent upon a host computer for its existence as a visual record. The potential for alteration or corruption of a digital image is great. Electrical power surges can scramble the binary bits that define the image. Hardware failure can destroy the media upon which the image is recorded. Computer viruses can seek out and destroy the image. Coworkers who say “I know all about computers” can be a serious threat to digital images. One or two errant commands could be enough to destroy precious image data.
Controlling access to a personal computer is important, and so is tracking and preserving the images. The original image should receive the same consideration a latent lift card or a photographic negative does. Any enhancement applied to an image must take place on a copy of the original. If the original image is enhanced, there will be no way to reproduce the results. The original image serves the function of control, much the same as any control used in scientific analysis. Without effective controls, any conclusions drawn from the evidence will be suspect.
It is essential to standardize and secure image-handling procedures. If even one case comes to trial, and the imaging evidence is found to be defective or questionable due to sloppy procedures, the damage will be felt by every other jurisdiction using or considering the use of digital imaging to capture and enhance evidence. Once the damage is done, it could take years to repair.
One of the first cases tried in the United States involving the use of DNA evidence to connect the suspect to the scene of the crime was the murder of a woman and her two-year-old daughter in the Bronx, New York, on February 5, 1987 (Levy 1996). A suspect was developed shortly after the murders and was interviewed by the lead detective on the case. During the interview, the detective noticed a stain on the suspect’s watchband. The watch was taken as evidence and later analyzed by Lifecodes Laboratory in Westchester, New York, which concluded that the stain was blood and matched that of the adult victim with a one in 100 million probability among the Hispanic community. Armed with this information and the testimony of several witnesses, Deputy District Attorney Risa Sugarman felt she had an open-and-shut case. The public defender assigned to the case enlisted the help of attorneys Peter Neufeld and Barry Scheck, who were beginning to build a reputation for their knowledge of DNA and forensic evidence. With the assistance of Dr. Eric Lander, an MIT scientist, they set out to prove that Lifecodes’ procedures were faulty and, as a result, that their conclusions were overstated. Dr. Lander examined Lifecodes’ DNA test films and discovered two unexplained differences between the DNA found on the suspect’s watchband and the DNA of the adult victim. Dr. Lander questioned Lifecodes’ laboratory procedures and found there were inadequate controls in place to guarantee that tests would not be influenced by bacterial contamination. In the end, Lifecodes admitted their controls were insufficient to ensure the reliability of the laboratory’s test results. In a deal designed to minimize the damage to Lifecodes’ reputation, they agreed to change their conclusion from one of almost certainty to “The DNA results were inconclusive.” The defendant in this case later agreed to a plea bargain in which he admitted to both murders, but at a greatly reduced sentence. 
Interestingly, the validity of DNA as a means for identifying a suspect in a crime was not drawn into question; only the procedures used by Lifecodes to form a conclusion were attacked. As a result, damage was limited in this case.
The American justice system has documented hundreds of criminal cases that have been lost due to sloppy or inadequate evidence handling procedures. Digital imaging is a science and, as such, must conform to accepted standards and the applicable rules of evidence. Ignoring this basic requirement risks damaging the acceptability of this powerful tool in future prosecutions in every state in America.
Rule 1001 from the Federal Rules of Evidence serves to define the originality of evidence. Variations of this rule have been adopted by nearly every state in the United States. The Texas Rule of Evidence 1001 is a typical example of how most states have chosen to adopt the federal rule:
An original of a writing or recording is the writing or recording itself or any counterpart intended to have the same effect by a person executing or issuing it. An original of a photograph includes the negative or any print therefrom. If data are stored in a computer or similar device, any printout or other output readable by sight, shown to reflect the data accurately, is an original.
One of the most restrictive variations of ER 1001 comes from California. Section 1500 of the California Evidence Rules states:
Except as otherwise provided by statute, no evidence other than the original of a writing is admissible to prove the content of a writing. This section shall be known and may be cited as the best evidence rule.
Section 1550.6, added in August of 1996, defines originality of video and digital images by stating:
Images stored on video or digital media, or copies of images stored on video or digital media, shall not be rendered inadmissible by the best evidence rule. Printed representations of images stored on video or digital media shall be presumed accurate representations of the images they purport to represent.
The last sentence in 1550.6 defines the burden of proof when an issue is raised regarding the authenticity of an image. It appears to be a subtle refinement of Gary Wayne McEntyre v. the State of Texas, discussed earlier:
If any party to a judicial proceeding introduces evidence that such a printed representation is inaccurate or unreliable, the party introducing it into evidence shall have the burden of proving, by a preponderance of evidence, that the printed representation is the best available evidence of the existence and content of the images that it purports to represent.
A defensible procedure that reasonably protects an original image will minimize the risk of losing an image during a suppression hearing. An open and consistent procedure designed to minimize access to the original image will help to eliminate any question of impropriety that might be raised and presents an appearance of fairness and professionalism.
Any process used to enhance a digital image must be reproducible by a third party who has similar training and experience. Enhancement processes should be documented sufficiently to allow for the recreation of enhancement results. Any software used to enhance an image that is subsequently used as proof in a criminal matter could become the subject of either a Frye or Daubert hearing—a type of suppression hearing designed to determine the validity of any new scientific methods (Frye v. United States 1923; Daubert v. Merrell Dow Pharmaceuticals, Inc. 1993). Prior to a Frye or Daubert hearing, a defense attorney could demand access to the source code for any software that was used during the image capture and enhancement process.
Source code is the basis of all computer programs and consists of a structured set of instructions that tells a computer how to perform specific operations, such as saving an image or increasing image contrast. The source code is written in an English-like programming language and is proprietary to the company that produces it. The last step in the process of creating a computer program is the conversion of the source code from English to binary machine code, which is made up of ones and zeroes. Once converted, a computer can execute the program’s instructions, but anyone attempting to inspect the binary code will see only undecipherable gibberish. It is normally impossible to reconstruct a program’s source code from the binary machine code, so proprietary programming techniques and methods are safe from disclosure to competing companies.
Defense attorneys trying to gain access to a computer program’s source code might argue:
Given the myriad of algorithms used to filter and process digital images, it is conceivable that an error in one or more of the algorithms used to process this image could have created a defect. It is also conceivable that the resulting defect in the image leads to an erroneous conclusion by the witness in this case. Before we can properly cross examine this witness about the enhancement process used, and in order to establish whether or not such an error could have occurred, it will be necessary for our expert to examine the source code for all the software used to “alter” this image.
A number of other, similar arguments could also be raised to cast doubt upon the image and the process used to create it.
To gain access to a manufacturer’s source code, the defense would normally have to agree to the terms of a nondisclosure agreement to prevent the disclosure of any proprietary information. In most cases, the manufacturers’ attorneys are going to be presenting their own arguments against disclosure. If the manufacturer of the software is successful in denying access, the defense could move to suppress any results that were derived through the use of that software. The source code of any software program is generally considered a trade secret by its owner and is protected with great vigor. Agencies considering the use of any software for the purposes of image enhancement should obtain assurances from the manufacturer that if it should become necessary, they will allow a defense attorney’s expert to inspect their code.
Modern computer software is extremely complex. Any alleged defects would take an expert unfamiliar with the source code months or even years to find. Once found, it would likely take months more to show the errors created a defect sufficient to lead to an erroneous conclusion. A defense attorney who files a motion designed to gain access to the source code of a software product, such as Photoshop®, is gambling that the manufacturer is going to successfully deny access. Preplanning can help deflect this kind of argument and save considerable grief.
A defensible procedure for documenting a digital image can mean many things to many people. Any procedure designed to protect the chain of custody must, at a minimum, be able to address the following questions:
- Who captured the image and when?
- Who had access to the image between the time it was captured and the time it was introduced in court?
- Has the original image been altered in any way since it was captured?
- Who enhanced the image and when?
- What was done to enhance the image and is it repeatable?
- Has the enhanced image been altered in any way since it was first created?
How these questions are addressed depends on many factors, including department policies and the policies of the local prosecutor. The important thing to remember is that a digital image used in a legal context is evidence and must be treated as such. The goal of any effective image-tracking procedure should be to eliminate the opportunity for unauthorized persons to access images, thus avoiding the argument that someone could have altered or substituted an image. Procedures range from simple to sophisticated.
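One minimal way to make those questions answerable is to keep a structured audit record alongside each image. The sketch below is a hypothetical data structure, not any agency's actual system; the field names, the case number, and the analyst names are all invented for illustration.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ImageAuditRecord:
    """Hypothetical chain-of-custody record for one captured image."""
    image_id: str
    captured_by: str
    captured_at: datetime
    accesses: list = field(default_factory=list)      # (who, when, purpose)
    enhancements: list = field(default_factory=list)  # (who, when, operation)

    def log_access(self, who, purpose):
        # Answers: who had access to the image, and when?
        self.accesses.append((who, datetime.now(timezone.utc), purpose))

    def log_enhancement(self, who, operation):
        # Enhancements are performed on a working copy; recording each
        # operation lets a third party repeat the process later.
        self.enhancements.append((who, datetime.now(timezone.utc), operation))

# Illustrative entries only; the identifiers below are invented.
record = ImageAuditRecord("case-0001-img-07", "analyst_a",
                          datetime.now(timezone.utc))
record.log_access("analyst_b", "latent fingerprint comparison")
record.log_enhancement("analyst_b", "contrast stretch on working copy")
```

Whether such a record lives in a database, a signed log file, or a commercial tracking product is a policy decision; the point is that each question in the list maps to a field that is filled in at the time the event occurs, not reconstructed later.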
Various methods have been devised to authenticate the integrity of digital images once they have been captured. Many rely on some type of software system to validate the images. Some systems document the chain of custody by tracking image capture and enhancement processes and restricting access to authorized personnel. One system, in particular, stores a mathematically unique value for each image as part of the tracking process. Any changes made to the image can be detected immediately. The process is designed to answer questions about the integrity of an image without altering the actual image information in any way. Any changes that occur during the enhancement process are restricted to a copy of the original image. All changes can be recorded as part of the tracking process.
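The "mathematically unique value" described above is essentially a cryptographic hash. The sketch below uses SHA-256 as one common choice; the article does not name the algorithm the actual system uses, and the image bytes here are a stand-in for real image data.

```python
import hashlib

def image_digest(image_bytes):
    """Compute a fixed-length value that changes if any bit of the image changes."""
    return hashlib.sha256(image_bytes).hexdigest()

original = bytes([0, 128, 255, 64, 192, 32])  # stand-in for raw image data
stored_digest = image_digest(original)        # recorded at capture time

# Later, recompute and compare: any alteration is detected immediately,
# and the image data itself is never modified by the check.
tampered = original[:-1] + bytes([33])
assert image_digest(original) == stored_digest
assert image_digest(tampered) != stored_digest
```

Note that the digest is stored alongside the image, not inside it, which is why this approach can answer integrity questions "without altering the actual image information in any way."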
Other methods use an encryption scheme to scramble the image information, making it impossible to view the image without knowing the encryption key. This might be sufficient to prevent any meaningful tampering, but the encryption process itself alters the original image data. In order to view an encrypted image, it must first be reconstructed. Once encrypted, the original unencrypted image is not retained. The encryption process alone does nothing to document the chain of custody. In addition, encrypting large images can take a significant amount of time, especially when more than a few images are being encrypted.
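The trade-off is easy to see in code. The toy cipher below is a simple XOR keystream used only to illustrate the point; it is NOT a secure cipher, and real systems would use a vetted algorithm. What it shows is that the stored bytes are no longer the original image data, and the image must be reconstructed with the key before it can be viewed at all.

```python
def xor_stream(data, key):
    """Toy XOR 'encryption' for illustration only -- not cryptographically secure."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

image = bytes(range(16))      # stand-in for raw image data
key = b"illustrative-key"     # hypothetical key, invented for this example
encrypted = xor_stream(image, key)

assert encrypted != image                    # the stored data has been altered
assert xor_stream(encrypted, key) == image   # must reconstruct before viewing
```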
A third approach is based upon an ancient art called steganography, which means covered writing. Steganography was originally used to hide secret messages so they could not be seen. Spies used the technique to hide secret information within innocent documents, such as books or letters, in order to move information past an enemy without detection. Invisible ink is one example of a steganographic process.
When applied to a digital image, the process used is known as watermarking. Watermarking takes its name from the world of paper and ink. Currency has a watermark to prevent counterfeiting. Fine writing papers use watermarks to identify the manufacturer and build brand loyalty. A watermark is synonymous with quality and integrity. Digital watermarks are generally used for one of three purposes: data monitoring, copyright protection, and data authentication. Digital watermarking can be either visible or invisible to the viewer. A visible watermark is primarily designed to display a copyright notice, telling the viewer who owns the image rights. An invisible watermark can also be used to assert a copyright, but it is designed to catch those who would infringe on the copyright by using the image without the owner’s permission. Watermarks can also be used to authenticate the integrity of an image. When used for this purpose, a software program calculates a unique number from the image data. To establish specific ownership of the image, an encryption key assigned to the owner can be used to encrypt the unique number generated from the image. The encrypted number is then inserted, by various methods, into the image itself. If ownership is not important, then only the unencrypted value is used. Though the actual techniques are complex and vary from one manufacturer to another, the process of inserting this value into the image alters the image data. Some techniques can introduce frequency distortions or other types of degradation that are visible to the human eye (Kutter and Petitcolas 1999). Several published papers describe techniques that can be used to remove or disable digital watermarks (Petitcolas, Anderson, and Markus 1998; Johnson 1999). Software tools such as Stirmark™ (Version 3.1) or unZign™ (Version 1) are specifically designed to scan images for watermarks and remove them.
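One of the simpler embedding techniques, and only one of many (commercial products use more sophisticated, proprietary methods), is least-significant-bit insertion: each bit of the watermark value replaces the low-order bit of a pixel. The pixel values and watermark bits below are invented for illustration. The sketch shows both the embedding and the legally significant side effect: the stamped pixel values are no longer identical to the originals.

```python
def embed_lsb(pixels, bits):
    """Replace the low-order bit of each of the first len(bits) pixels."""
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit
    return out

def extract_lsb(pixels, n):
    """Read back the first n embedded bits."""
    return [p & 1 for p in pixels[:n]]

pixels = [200, 201, 202, 203]   # illustrative 8-bit gray values
mark = [1, 0, 1, 1]             # illustrative watermark bits
stamped = embed_lsb(pixels, mark)

assert extract_lsb(stamped, 4) == mark   # the watermark can be read back
assert stamped != pixels                 # but the image data has been altered
```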
Watermarking and encryption may seem like good answers for those who would question the integrity of an image. When these processes change the original image data, however, answers to questions such as "Is this image in the same condition as when it was originally captured?" become clouded with doubt. Visible or not, the image was altered, and, at the very least, every piece of software that touched the image will be subject to examination. A claim could be made that the enhancement process is valid but that, because the image was contaminated by the watermarking or encryption process, any conclusions based upon a "contaminated" image are suspect. An affirmative defense against such a claim would have to show that the watermarking or encryption process did not affect the results obtained. To do that, it would be necessary to repeat the enhancement process on a "clean" version of the same image and then compare the two results to determine whether the encryption or watermarking process influenced the outcome. Should it be determined that there is little or no influence, the next logical questions would be: Is this result unique to this image, or does it hold true for any image? What is the error rate? Can it be calculated? This scenario also assumes the availability of a clean version of the image in question. Where would it come from? It is unlikely that any system using encryption or watermarking would maintain a second copy. Not only would this double the storage space requirements, it would make authenticating and tracking the second, unmarked copy difficult if not impossible.
Agencies contemplating the use of any system that will alter the original image, whether for authentication purposes or to save storage space, would be well advised to run tests. At a minimum, testing should be done to determine any potential effects watermarking or any other alteration might have on enhancement processes. A sufficient number of tests should be run to determine whether there is a measurable error rate as a result of the alteration. In other words, can the same image be consistently captured, processed by the system in question, and then enhanced, with the result being the same as those obtained from a “clean” image? If not, how often do the results differ? Why? What happens when the same series of tests are run on different images? Is the error rate, if there is one, the same? In the case of images that are used for analysis purposes, such as fingerprints, are there any artifacts or false minutiae in the altered image that did not appear in the clean image? Answers to these questions will go a long way toward building an affirmative defense in court, should the issues be raised.
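The testing regimen described above can be organized as a simple simulation loop: enhance a clean capture and a marked copy of the same capture, compare the results, and tally how often they disagree. Everything in this sketch is hypothetical, including the toy contrast-stretch "enhancement," the LSB-flipping stand-in for a watermark, and the randomly generated "captures"; an actual study would use real capture hardware and the vendor's own marking process.

```python
import random

def enhance(pixels):
    """Toy 'enhancement': a simple linear contrast stretch."""
    lo, hi = min(pixels), max(pixels)
    span = max(hi - lo, 1)
    return [round((p - lo) * 255 / span) for p in pixels]

def add_mark(pixels, rng):
    """Toy stand-in for a watermark: flip the LSB of a few random pixels."""
    marked = pixels[:]
    for i in rng.sample(range(len(marked)), 16):
        marked[i] ^= 1
    return marked

rng = random.Random(42)                      # fixed seed for repeatability
trials, disagreements = 200, 0
for _ in range(trials):
    image = [rng.randrange(256) for _ in range(512)]   # simulated capture
    if enhance(image) != enhance(add_mark(image, rng)):
        disagreements += 1

print(f"enhancement disagreed on {disagreements} of {trials} trials")
```

A real protocol would go further, as the text suggests: repeating the series on different images, and, for analytical images such as fingerprints, inspecting the disagreeing pixels for artifacts or false minutiae.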
An effective image-handling system should be able to track the chain of custody and ensure image integrity without having to alter original image data. Systems that rely on a watermark or similar process that alters image data as part of the authentication process open the door to a myriad of questions that will only confound and confuse the average juror.
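One way to meet that requirement, sketched here under assumed details (SHA-256 as the hash, a dictionary-style log entry, and hypothetical function names), is to store a cryptographic hash of the image in a separate audit record rather than inside the image itself: the original bytes are never touched, yet any later alteration is detectable.

```python
import hashlib
import time

def record_custody(image_bytes, operator, log):
    """Append a custody entry; the image itself is hashed, never modified."""
    log.append({
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
        "operator": operator,
        "timestamp": time.time(),
    })

def verify(image_bytes, log):
    """Check the image against the most recently logged hash."""
    return hashlib.sha256(image_bytes).hexdigest() == log[-1]["sha256"]

log = []
image = bytes(range(256))                  # toy image data
record_custody(image, "examiner_1", log)

print(verify(image, log))                  # True: image is unchanged
print(verify(image + b"\x00", log))        # False: any alteration is detected
```

Because the hash lives outside the image, the "Is this image in the same condition as when it was originally captured?" question can be answered without the original data ever having been altered.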
An issue that bears mentioning at this point is the potential for claims that an image was intentionally altered to implicate an otherwise innocent person. The O. J. Simpson murder case (State of California v. O. J. Simpson: acquitted of murder charges, 1995; found liable in a subsequent civil trial for wrongful death, 1997) should serve as a warning to those who forget just how effective claims of evidence tampering and the planting of false evidence can be. Because digital imaging is still relatively new to the legal system in the United States, and because most juries have been exposed to digital imaging only in the form of television entertainment, a claim of evidence tampering has the potential to be quite effective in cases in which the chain of custody is poorly documented.
In December 1995, this issue was raised briefly during a murder trial in Seattle, Washington, in which a digitally enhanced palm print was the only conclusive evidence that tied the defendant, Eric Hayden, to the crime scene. The defense raised questions about the ability of the computer to make arbitrary changes to an image without the knowledge or consent of the operator and implied that the operator could have made deliberate changes to the image, resulting in a false identification. The defendant also claimed that the use of digital image processing violated the best evidence rule and questioned the acceptance of the technology by the scientific community. The best evidence rule is more accurately called the "original document" rule, because it expresses a preference for originals; it is the basis of Evidence Rule 1001.
Many in the photographic community have established rules for the use of digital imaging within their disciplines. News organizations in particular do not allow any images that are used for documentary purposes to be altered. Even images used for editorial purposes are strictly regulated. The credibility and integrity of documentary images must be protected to ensure their acceptance. Once the integrity of an image becomes suspect, so too is the subject portrayed.
A bloody print (left) found on a mattress pad was processed with Amido Black to darken the fingerprint ridge details and improve their visibility against the fabric background. A computer-enhanced image of the same print (right) sharpened the contrast of the fingerprint and removed the fabric pattern from the background (Warrick 1999).
Several magazine and newspaper articles have been written over the past few years on the subject of image integrity and reliability. One, written by a legal photographer in Boston, Massachusetts, states: “It’ll be very difficult with digital photography to prove beyond a reasonable doubt the authenticity of a photograph admitted into evidence” (Silverman 1993). Another, published in the Wall Street Journal, describes how O. J. Simpson claimed, as part of a civil trial for wrongful death, that photographs of him wearing a pair of Bruno Magli™ shoes were fakes (Nelson 1997). The author, Emily Nelson, also highlighted several criminal cases in which the defendant questioned the integrity of photographs.
Law enforcement has only just begun to grapple with the issue of image integrity. There are no rules or standards to regulate the application of digital imaging. Agencies such as the Federal Bureau of Investigation have formed study groups in an attempt to establish minimum guidelines, but so far these groups have concentrated on image quality and the associated hardware used to capture images. Those who would use digital imaging to record objects or events that will later end up in court would be wise to consider these issues carefully. When the defense claims an image was altered, it is the prosecution that must prove otherwise. Adequate documentation is a good place to start, but specific knowledge about digital images, how they are created, and how they can be changed must be learned through training and practice. Policies and procedures establish how and when digital imaging will be applied. They also demonstrate to the defense that an agency stands behind the use of digital imaging as a tool. When followed, policies and procedures can reinforce the testimony of those who must defend the technology's use in court. When they are not followed, or when none exist, the witness is left to fend for him- or herself against a group of people who are paid to find fault, no matter how small or insignificant.
There is little doubt that, over time, digital imaging will replace film as the preferred method for recording crime scenes and the evidence found in them. How the American legal system adjusts to this technology is yet to be seen. History has demonstrated time and again that American jurisprudence adjusts to new methods slowly and deliberately. The American courtroom was modeled after early nineteenth-century English courtrooms that were designed to cast a defendant’s accusers as actors in a stage production that frequently resembled the latest plot in a modern soap opera. The attorneys produce the show, and a judge ensures everyone plays his or her part according to the rules. This description might seem trite to some, but recognizing the similarities between a legal battle and a Broadway play can help an expert witness to better understand his or her role and how to play that role effectively.
The American system of justice is not known for its expedience. American legal tradition demands that any deviation from historical precedent be considered carefully. New case law is not something most judges seek to create. How the courts will ultimately address the issues raised here must, for now, remain an educated guess. In the end, when an image is introduced in court, the first question that will need to be answered remains, “Is this image a fair and accurate representation of the scene or object as it was found?” Questions about chain of custody or the validity of specific computer enhancement techniques will ultimately be answered in accordance with the recognized scientific principles of the day.
Blond, N., Bahn, M., Loring, S., and Meyers, W. Blond’s Evidence. Sulzburger and Graham, New York, 1994.
Crawford, W. The Keepers of Light. Morgan and Morgan, New York, 1979.
Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 US, 579 (1993).
Frye v. United States, 293 F. 1013, 1014, 34 A.L.R. 145 (D.C. Cir. 1923).
Gary Wayne McEntyre, Appellant, v. The State of Texas, Appellee, 717 SW 2d 140, 147 (1986).
Johnson, N. An introduction to watermark recovery from images. In: Proceedings of the SANS Intrusion Detection and Response Conference, San Diego, California, February 1999.
Kodak. Digital imagery in the courtroom [Online]. (October 5, 1999). Available: http://www.kodak.com/global/en/professional/hub/law/filmdig/imagery.shtml/
Kutter, M. and Petitcolas, F. A. A fair benchmark for image watermarking systems. Presented at Electronic Imaging ‘99, Security and Watermarking of Multimedia Contents, San Jose, California, January 1999.
Levy, H. And the Blood Cried Out. Basic Books, New York, 1996, p. 36.
Nelson, E. Fake-photo claims get lots of exposure, Wall Street Journal, February 7, 1997.
Nolan, J. and Nolan-Haley, J. Black’s Law Dictionary. West, St. Paul, Minnesota, 1990.
Petitcolas, F. A., Anderson, R. J., and Kuhn, M. G. Attacks on copyright marking systems. Presented at Information Hiding: Second International Workshop, Portland, Oregon, April 1998.
Silverman, D. A. Changing times: How will computers impact forensic photography?, Photo Electronic Imaging (1993) 36(2):12–14.
Stirmark Version 3.1 [Computer software]. (November 3, 1999). Available: http://www.cl.cam.ac.uk/~fapp2/watermarking/image_watermarking/stirmark/
Tiller, N. and Tiller, T. Case report: The power of physical evidence: A capital murder case, Journal of Forensic Identification (1992) 42(2):79.
United States v. Kilgus, 571 F. 2d 508, 510 (CA9 1978).
unZign Version 1.2 [Computer software]. (November 3, 1999). Available: http://altern.org/watermark/
Warrick, P. King County Sheriff’s Latent Lab assist in Akron PD homicide investigation, Pacific NW IAI Examiner, July-December 1999, pp. 12–13.