
Presentations from the International Symposium on Setting Quality Standards for the Forensic Community (Part 4; Forensic Science Communications, July 1999)


July 1999 - Volume 1 - Number 2


Presentations at the
International Symposium on Setting Quality Standards for the Forensic Community
San Antonio, Texas

May 3-7, 1999

Part 4

The following abstracts of the presentations are ordered alphabetically by authors’ last names.

Impact of Quality Assurance on
Discovery in Criminal Cases
R. P. Harmon

Alameda County District Attorney’s Office
Oakland, California

During the past several years the forensic science community has developed numerous groups and programs designed to improve the quality of forensic science and the qualifications of forensic practitioners. Each of these commendable efforts has produced volumes of information and documentation.

In criminal prosecutions each defendant is entitled to certain information in order to afford that person a fair trial. The process that provides this information is known as discovery. It is unquestioned that defendants are entitled to case-specific materials about the forensic examinations in their cases, as well as information about the qualifications of the forensic experts. It is as yet an open question whether a defendant is entitled to non-case-specific materials concerning organizations such as SWGDAM and SWGMAT. The legal process of discovery recognizes a balance between the defendant’s right to a fair trial and the specific demonstrable need for the information.

A recent Arizona Supreme Court decision, State v. Tankersley (Az.1998) 916 P.2d 486, involving DNA-PCR typing demonstrates this point. The defendant sought voluminous materials in discovery that were not related to the case-specific work. The forensic scientist declared that it would take 500 hours to prepare the materials and that he would have to close his lab to provide the materials. The trial court declined to order the materials to be turned over to the defense. The Arizona Supreme Court upheld this decision because the defendant had not demonstrated a substantial need for the materials and had failed to show that there was no substantial equivalent for these materials elsewhere.

The justification for the decision is important for forensic scientists to appreciate. In DNA cases in the PCR era, there is almost always evidence remaining for defense reanalysis. A defendant would be hard-pressed to meet the burden enunciated in Tankersley if evidence remained for retesting. Even more important, many forensic analyses are nondestructive or nonconsumptive. In those areas a defendant may have the same forensic evaluation performed.

In summary, a defendant has a right to a fair trial, not a perfect trial. Applying the rationale of Tankersley, prosecutors should seek to limit the scope of criminal discovery to the case-specific work done and the qualifications of the forensic expert.



A Global Approach to Standards, Training, and Quality
M. M. Houck

Federal Bureau of Investigation
Washington, DC

“You ought to see that bird from here,” said Rabbit. “Unless it’s a fish.”
“It isn’t a fish, it’s a bird,” said Piglet.
“So it is,” said Rabbit.
“Is it a starling or a blackbird?” said Pooh.
“That’s the whole question,” said Rabbit.

from Winnie-the-Pooh, by A. A. Milne

In The Ontogeny of Criminalistics, Paul Kirk wrote more than 35 years ago that “for the most part, progress [in forensic science] has been technical rather than fundamental, practical rather than theoretical, transient rather than permanent” (Kirk 1963:235). Regrettably, today, the same statement could be made about the progress of forensic science in the last 30-odd years. In many of the fields of science that are applied to legal or public issues, a deficiency in the fundamental theories and principles exists, unlike most, if not all, other sciences. Some may argue otherwise, but as a response, the following is offered: Why, then, if these theoretical underpinnings are fully developed, are admissibility challenges still being heard on such disciplines as hairs, fingerprints, and documents? Kirk would ascribe this “what, me worry?” mentality to the “misconception that science consists merely of an orderly presentation of facts or methods, rather than an elucidation of basic laws and principles” (Kirk 1963:235). Simply because a full roster of proven methods exists does not make a science complete. Additionally, some methods are being lost because they are not perceived to be objective or efficient (McCrone 1999). It is imperative that the practitioner not only know how to do something but also understand why it is being done. Otherwise, like Rabbit in A. A. Milne’s books, one argues in circles using facts without understanding.

Recently, scientific working groups (SWGs) have stepped in to attempt to fill this theoretical gap, as well as catalog and refine current methodologies. One of these, the Scientific Working Group for Materials Analysis (SWGMAT), addresses the analysis of trace evidentiary materials, such as paint, glass, hairs, fibers, and adhesive tapes. The approach SWGMAT has taken, in conjunction with working groups in Europe and Australia, is to produce voluntary consensus guidelines for analysis, training, and quality issues in the trace evidence disciplines. The analytical guidelines (see SWGMAT 1999, for example) are aimed at the qualified bench analyst, who must have a working knowledge of a wide variety of potential methods and their benefits and pitfalls. Each subgroup in SWGMAT is drafting such analytical guidelines for the previously mentioned disciplines. Because these guidelines are voluntary, no enforcement is necessary or even possible by SWGMAT. Each laboratory can use all, part, or none of the published guidelines as it sees fit, on the basis of resources, needs, and personnel. For a forensic laboratory facing accreditation, the SWGMAT guidelines could be a very useful reference. But supporting the current generation of bench analysts is only part of the process.

In turn, each subgroup will draft a training workbook, which will be a competency-based, self-contained trace evidence laboratory training program. Each workbook will be designed to mimic a graduate-level laboratory course notebook. The sections will be organized into the following headings: Introduction, Theory, Required Reading, Materials, Exercises, Review Questions, and a Competency Checklist for the trainee and the training manager. Thus, a new employee could be handed the Fiber Training Workbook and, under the supervision of the training manager, be able to complete a standardized course of practical learning with objective criteria for successful completion. Without a common, uniform basis, trace evidence, as a discipline, will never achieve two of Kirk’s (1963:237) three basic criteria of professional acceptance:

  • A profession is based upon an extensive period of training at a high education level, and
  • A profession requires established competence.

Once the basic criteria for competency are standardized, then the analytical guidelines fall easily into their place in the laboratory.

SWGMAT’s analytical guidelines are already being adopted, with some changes, by the European Network of Forensic Science Institutes’ (ENFSI) working groups, starting with the European Fibers Group (EFG). Because of differences in equipment, funding, and legislation, SWGMAT’s analytical guidelines are being altered to fit within the framework of ENFSI’s best practice for forensic laboratories. The resulting guidelines will retain much of what SWGMAT drafted, with notable differences. The same will be true in time with the Australian Special Advisory Groups (SAGs). This will leave us with a core of methods and techniques that are recognized worldwide by all forensic laboratories (Figure 1). As a model, this is the first step toward a series of global standards in trace evidence analysis. The benefits of global standardization are numerous and include the following: refinement of methodology, uniform practices, more efficient analyses, improvement of equipment, increased laboratory resources, and a firmer foundation for courtroom acceptance and interpretation.

Additionally, uniform standards improve quality and communication and lead to more forensic laboratories that meet accreditation standards. Rather than a top-down approach, in which change comes only from upper management (the dominant paradigm since the Industrial Revolution), the private sector is shifting to a bottom-up view of change (Senge 1990). In this model, each individual is an agent for improvement, learning, and growth. The institution cultivates this change by educating and investing in each worker so that workers better understand and execute their duties. This feeds the workers’ knowledge and creativity back into the institution and yields better services and a higher quality product. So, too, with a forensic science laboratory (Figure 2). This produces a complex adaptive system that induces constructive change within a scheme (Gell-Mann 1994). If the education, training, and creativity of the bench analyst are stinted, then the entire laboratory suffers. SWGMAT is working to provide the resources to assist forensic science analysts and management in the attainment of higher quality trace evidence analysis.


Gell-Mann, M. The Quark and the Jaguar. W. H. Freeman and Company, New York, 1994.

Kirk, P. L. The ontogeny of criminalistics, Journal of Criminal Law, Criminology, and Police Science (1963) 54:235-238.

McCrone, W. C. Attention laboratory directors, American Laboratory (1999) April:37-44.

Scientific Working Group for Materials Analysis (SWGMAT). Forensic Fiber Examination Guidelines, Forensic Science Communications [Online] (1999). Available at http://www.fbi.gov/programs/lab/fsc/backissu/april1999/houcktoc.htm.

Senge, P. M. The Fifth Discipline: The Art & Practice of the Learning Organization. Doubleday/Currency, New York, 1990.



Bayesian Approach to Case Assessment
and Interpretation of Scientific Evidence
G. Jackson

Forensic Science Service
Birmingham, United Kingdom

The Forensic Science Service (FSS) in the United Kingdom has been reviewing and developing its services for a number of years with the aim of providing better value for money to its customers. There is a continuing need to make effective use of limited resources and to provide a robust, reliable, and consistent service. How does a major supplier of forensic science services ensure it meets these requirements? How do individual forensic scientists make sensible decisions when selecting items for examination and when choosing from a battery of analytical techniques?

Working with experienced colleagues in the FSS, a small lead team has developed a model, on the basis of a Bayesian framework, for assessing the needs of a case, devising a case strategy, and interpreting findings in a logical, robust way (Cook et al. 1998a, 1998b; Cook et al. 1999; Evett et al. 1999).

This work has highlighted and given fresh insight into the

  • difference between addressing issues and answering questions,
  • type and level of issue a scientist can address,
  • consistency in interpretation between scientists,
  • use and construction of databases,
  • communication with customers, and
  • role of a forensic scientist.

It has helped to expose, define, and clarify those areas in which a scientist can add value to an investigation and to court proceedings.

Through workshops with operational colleagues, a set of case studies is being developed to demonstrate applications of the approach across a wide range of casework.
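The Bayesian framework described above rests on updating the odds on competing propositions with a likelihood ratio. The following is a minimal illustrative sketch, not FSS code, and every number in it is hypothetical:

```python
# Illustrative sketch of Bayesian case assessment (hypothetical numbers).

def likelihood_ratio(p_e_given_hp, p_e_given_hd):
    """LR = P(evidence | prosecution proposition) / P(evidence | defense proposition)."""
    return p_e_given_hp / p_e_given_hd

def posterior_odds(prior_odds, lr):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * lr

# Hypothetical example: the scientist judges the findings 50 times more
# probable if the suspect is the source (Hp) than if an unknown person is (Hd).
lr = likelihood_ratio(0.5, 0.01)        # LR = 50
odds = posterior_odds(1 / 100, lr)      # prior odds of 1:100 become 1:2
print(odds)
```

A likelihood ratio above 1 supports the prosecution proposition and below 1 supports the defense proposition; the framework leaves the prior odds to the court, which is one reason the model clarifies the role of the forensic scientist.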


Cook, R., Evett, I. W., Jackson, G., Jones, P. J., and Lambert, J. A hierarchy of propositions: Deciding which level to address in casework, Science and Justice (1998a) 38:231-239.

Cook, R., Evett, I. W., Jackson, G., Jones, P. J., and Lambert, J. A model for case assessment and interpretation, Science and Justice (1998b) 38:151-156.

Cook, R., Evett, I. W., Jackson, G., Jones, P. J., and Lambert, J. Case preassessment and review in a two-way transfer case, Science and Justice (1999) 39:103-111.

Evett, I. W., Jackson, G., and Lambert, J. More on the hierarchy of propositions: Exploring the distinction between explanations and propositions, Science and Justice (in press).



Analysis, Design, Construction, and Curation
of Scientific Databases
L. Kerschberg
George Mason University
Fairfax, Virginia

Scientific databases are data collections that are acquired, organized, stored, accessed, and disseminated to support users in their scientific endeavors, including forensic science. They can be used to search for and visualize complex relationships among data items, to discover patterns and knowledge in the data, and to support the scientific discovery process.

In this presentation, a scientific database is defined, and several examples are provided of scientific databases that offer access via the World Wide Web to multimedia data, including text, images, graphs, and temporal and spatial data. In addition, I review elements of object-oriented conceptual modeling of a scientific database application domain, and I show how that model can be used to integrate information obtained from multiple data sources of differing degrees of data quality and source reliability.

Finally, I address issues of data quality and quality assurance within a scientific database process model that addresses activities such as requirements analysis, database conceptual modeling, database design, database deployment, and database curation and evolution.
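To make the integration idea above concrete, here is a small hypothetical sketch (the class names, the 0-to-1 reliability scale, and the weighted-mean rule are all illustrative assumptions, not the presenter's model) of an object-oriented conceptual model in which observations carry a reference to their source and its reliability:

```python
# Hypothetical sketch: integrating observations from sources of
# differing reliability. All names and scales are illustrative.
from dataclasses import dataclass

@dataclass
class DataSource:
    name: str
    reliability: float  # assumed scale: 0.0 (untrusted) .. 1.0 (fully trusted)

@dataclass
class Observation:
    attribute: str
    value: float
    source: DataSource

def integrate(observations):
    """Reliability-weighted mean of one attribute across several sources."""
    total_weight = sum(o.source.reliability for o in observations)
    return sum(o.value * o.source.reliability for o in observations) / total_weight

lab = DataSource("reference laboratory", 0.9)
field_kit = DataSource("field kit", 0.5)
obs = [Observation("refractive_index", 1.5181, lab),
       Observation("refractive_index", 1.5179, field_kit)]
print(integrate(obs))  # weighted toward the more reliable source
```

The point of modeling source quality explicitly is that integration rules like this one can then be changed without touching the data itself, which matters during database curation and evolution.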

This presentation may be downloaded as a Microsoft PowerPoint presentation or as an archive at the following URL: http://ise.gmu.edu/~kersch/FBI/index.html.



Use of Statistics in Trace Evidence
R. D. Koons
Federal Bureau of Investigation
Quantico, Virginia

For the purposes of this talk, the term trace evidence will be defined as those items of transfer evidence whose population distributions are not genetically controlled. Examples of materials commonly encountered as trace evidence are manufactured items such as glass fragments, fibers, or paint chips and natural items such as soil. Rulings in several recent court cases have shown an interest on the part of judges and attorneys in placing a probability measure on the statement that two items of evidence are consistent with a single source. This view, echoed by other speakers at this symposium, is generally put forth by statisticians and biologists accustomed to dealing with evidentiary items whose measured parameters have genetically governed distributions. The purpose of this talk is to discuss the differences between trace evidence and the forms of evidence presented by the next three speakers (fingerprints, toolmarks, and biological materials), differences that make it much more difficult, if not impossible, to calculate frequency-of-occurrence statistics.

There are three possible opinions that can be reached in comparison of a transfer item with a possible source item, the typical trace evidence examination. In rare instances, a positive identification can be made: A jigsaw match of a piece of glass with a broken window is a unique occurrence. At the other extreme, an exclusion can be made: The questioned item is so different from the comparison source that it could not possibly have come from that source. In between these extremes lies the conclusion that the questioned and source items have no significant differences that would exclude their once having been part of a single object. The difficulty arises when the examiner attempts to place a measure of significance on this indistinguishability. If the number of independent comparative parameters is increased or more discriminating comparison methods are used and the samples remain indistinguishable, then the significance of that indistinguishability increases. However, the conclusion of this examination must still be something like “the glass fragment is consistent with coming from the broken window at the crime scene, or from another window that is indistinguishable in the measured parameters.” As the discrimination capability of the analytical methods used improves, the opinion does not change (unless, of course, there is an exclusion). What does change is that the potential of an accidental match with another, unrelated window diminishes.
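The effect of adding independent comparison parameters can be illustrated with simple arithmetic. This is purely a hypothetical sketch: the frequencies below are invented, and, as the rest of this talk argues, real trace evidence parameters are generally neither independent nor characterized well enough to support such a calculation in casework.

```python
# Illustrative arithmetic only: if comparison parameters were independent,
# the chance of an accidental match would be the product of the individual
# match frequencies. All frequencies here are hypothetical.
import math

match_frequencies = [0.10, 0.05, 0.02]   # e.g., color, refractive index, elemental profile
p_accidental = math.prod(match_frequencies)
print(p_accidental)   # each added independent parameter shrinks the product
```

Under these invented numbers the accidental-match probability drops to one in ten thousand, which is the qualitative point: more discriminating, independent measurements make a coincidental match less likely even when no probability can responsibly be quoted in court.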

The most important factor making trace evidence different from other forms of evidence is that values of each measured parameter vary for subsamples of a given object. Unlike nuclear DNA, which is invariant among samples from a given person, fragments of glass from the same window may not all have the same refractive index. The relative magnitudes of the within-object variability and the variability across objects of the same product class define the discrimination capability of the analytical technique used to measure the parameters in question. The heterogeneity of a source object, such as a carpet or a broken window, affects four aspects of comparison between this source and comparison fibers or fragments: method of comparison, sample selection, definition of analytical indistinguishability, and data requirements.

  • Method of Comparison: Selection of an appropriate analytical method and decisions as to which parameters to measure depend upon both the within-object and across-object variations. The observed variability in a measured parameter for multiple samples taken from an object is a combination of the analytical imprecision and the true sample heterogeneity for that parameter. Instrumental errors affecting precision and bias are important because they can be controlled by the analyst and should be kept low enough to measure the sample heterogeneity. Another concern regarding trace evidence is the lack of well-characterized, appropriate reference samples in the size ranges of evidentiary samples as needed for preparation of analytical standards and for proficiency testing.
  • Sample Selection: The variability of a potential source object determines the number of analytical samples required for comparison with questioned samples. Correct comparison of items of trace evidence cannot be done with one-sample-to-one-sample comparisons as may be appropriate for biological specimens, fingerprints, and toolmarks. Further, the number of samples and their sources required for a given analytical parameter may be different from those required for another parameter.
  • Definition of Analytical Indistinguishability: The decision whether two samples are analytically distinguishable is made on an implicit or explicit statistical basis. Simple comparisons of means can be made using well-established statistical tests if the distribution of the measured parameters over the potential source object is known (or assumed). The number of false associations and false exclusions can be limited somewhat by judicious selection of the hypotheses being tested and the match criteria. Trace evidence is somewhat different from other forms of evidence in that a false positive (that is, failure to exclude a single source for two items from different sources) is not a serious error, provided the conclusion is considered in the context of the examination performed. Rather, failure to discriminate between two sources is a result of the limited discrimination capability of the examination method being used. Improvements in discrimination capability, such as by measuring additional independent points of comparison, may limit the number of false positive results at the risk of increasing the number of false exclusions. For examinations involving many measurements, multivariate methods to limit the number of false exclusions are available, but they often require a great number of analytical samples.
  • Data Requirements: When discriminating analytical methods are used, calculation of the probability of accidental matches is not generally feasible. The amount of data required to model probability distributions for highly discriminating multivariate analytical methods is extremely large. Differences in the within-object heterogeneity of a given measured parameter vary from one manufacturer to another and from one object to another within a given manufacturer. Therefore, conclusions made concerning one object or analytical parameter cannot be generalized to other objects or parameters. As a result, a great many measurements are required prior to making a probability statement concerning the results of a particular examination.
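The comparison-of-means test mentioned in the list above can be sketched with Welch's two-sample t statistic, which does not assume equal variances in the two groups. The refractive-index values below are invented for illustration; real casework would also need the sampling considerations discussed above.

```python
# Sketch of a comparison of means for trace evidence, using Welch's
# two-sample t statistic on hypothetical glass refractive-index values.
from statistics import mean, variance

def welch_t(a, b):
    """Return Welch's t statistic and approximate degrees of freedom."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / (va + vb) ** 0.5
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

questioned = [1.5181, 1.5183, 1.5182]        # fragments recovered from clothing (hypothetical)
known = [1.5182, 1.5184, 1.5181, 1.5183]     # fragments from the broken window (hypothetical)
t, df = welch_t(questioned, known)
print(t, df)  # a small |t| means the samples are analytically indistinguishable
```

Note that even a small |t| supports only the guarded conclusion discussed earlier: the fragments are indistinguishable in the measured parameter, not proven to share a source.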

The second important factor limiting attempts at making probability statements about trace evidence is the temporal variability of manufactured products. Changes in production, delivery, use, and disposal of manufactured products over time effectively negate attempts at databasing and associated probability statements. A database of measured characteristics for a given product may be applicable only at one point in time and at one location. Combining the time and location changes with the large number of samples needed for samples measured using highly discriminating analytical methods makes it at best impractical and at worst impossible to collect appropriate databases for making probability statements. The most appropriate uses of databases in trace evidence examinations are for classification of an unknown object into a product use or manufacturer class and for testing the discrimination potential of an analytical method.

Examiners of trace evidence must be cognizant of the analytical requirements of comparing small items of evidence and the constraints they place on opinions formed. Although it is appealing to assist the trier of fact with some numerical evaluation of the significance of an opinion of indistinguishability, the drawbacks of the various methods for doing this must be considered. In particular, calculation of likelihood ratios, as discussed by Graham Jackson, provides an excellent perspective for evaluating the various factors that must be considered in forming an opinion as to significance. However, for items of trace evidence, some of these factors can be evaluated only in a qualitative sense. Given the current state of knowledge and the spatial and temporal variability of trace evidence, calculation of exact probability statistics is impractical and, perhaps, impossible.

