Title: National Conference on Science and the Law Proceedings
Series: Research Forum
Author: National Institute of Justice
Published: NIJ, July 2000
Subject: Criminal justice system
251 pages
630,000 bytes

------------------------------

Figures, charts, forms, and tables are not included in this ASCII plain-text file. To view this document in its entirety, download the Adobe Acrobat graphic file available from this Web site or order a print copy from NCJRS at 800-851-3420 (877-712-9279 for TTY users).

------------------------------

NATIONAL CONFERENCE ON SCIENCE AND THE LAW

Proceedings

San Diego, California
April 15-16, 1999

Sponsored by:
National Institute of Justice
American Academy of Forensic Sciences
American Bar Association
National Center for State Courts

In Collaboration With:
Federal Judicial Center
National Academy of Sciences

July 2000
NCJ 179630

------------------------------

Julie E. Samuels
Acting Director
National Institute of Justice

David Boyd, Ph.D.
Deputy Director
National Institute of Justice

Richard M. Rau, Ph.D.
Project Monitor

Opinions or points of view expressed in this document are those of the authors and do not necessarily reflect the official position of the U.S. Department of Justice.

The National Institute of Justice is a component of the Office of Justice Programs, which also includes the Bureau of Justice Assistance, the Bureau of Justice Statistics, the Office of Juvenile Justice and Delinquency Prevention, and the Office for Victims of Crime.

------------------------------

Preface

The intersections of science and law occur from crime scene to crime lab to criminal prosecution and defense. Although detectives, forensic scientists, and attorneys may have different vocabularies and perspectives, they share, cognitively, a way of thinking that is essential to scientific knowledge. A good detective, a well-trained forensic analyst, and a seasoned attorney all exhibit "what-if" thinking.

This kind of thinking in hypotheticals keeps a detective open-minded: it prevents a detective from ignoring, or failing to collect, data that may yield exculpatory evidence. This kind of thinking in hypotheticals keeps a forensic analyst honest: it prevents an analyst from ignoring or downplaying analytical results that may be interpreted as ambiguous or exculpatory evidence. This kind of thinking in hypotheticals keeps attorneys thoroughly prepared: it prevents a prosecutor from ignoring alternative theories of the crime that will surely arise in the defense, and it keeps the defense open to raising alternative theories. Our adversarial system of justice relies on thinking in hypotheticals, examining each possibility, looking at all the angles, because we expect proof beyond a reasonable doubt.

We have already seen too many times what happens when "what-if" thinking breaks down. Consider what happens when a detective refuses "what-if" thinking. Exculpatory evidence is not collected at the crime scene; an innocent person may be convicted. Evidence is collected in such a sloppy manner that it cannot be processed by the crime lab; a guilty person may be set free. Consider what happens when a forensic analyst refuses "what-if" thinking. A crime lab technique has been accepted for the last 50 years; no one has questioned its validity or reliability because everyone simply believes that it works; people may be wrongfully convicted or exculpated by a scientifically unsound technique that is presented as scientific evidence.
Or consider what happens when "what-if" thinking breaks down in the courtroom. Judges naively accept whatever scientists with a particular set of credentials tell them, the scientist-witness is allowed to present both the opinions of the entire scientific discipline and specific opinions with regard to the case, and the expert witness industry thrives.

Currently, the criminal justice profession has several mechanisms for ensuring that "what-if" thinking does not break down. Daubert--and now Kumho--hearings can highlight serious deficiencies in traditionally accepted forensic sciences. Training for judges and lawyers can upgrade their ability to determine the value of scientific evidence and to distinguish between good investigative leads, which may result from prescientific techniques, and solid scientific evidence, which derives from the scientific method. Research by academics or scientific organizations such as the National Academy of Sciences can provide answers to methodological dilemmas that face any science moving from the laboratory to the crime scene. Law enforcement training can provide detectives and departments with best practices for investigation and evidence collection, such as the National Institute of Justice's recent publication on crime scene investigation. Technical working groups that are discipline based, such as the National Institute of Justice's Technical Working Group on Eyewitness Evidence, can provide checks on scientific and investigative procedures and interpretation of results.

But even with such homologous ways of thinking, judicial decisions, and educational safeguards in place, science and law continue to be uneasy partners. Questions about this partnership form the basis for the following papers, from scientists, attorneys, and judges, which all address, from differing perspectives, the relationship between science and law. It is hoped that by facing these questions directly we shall find answers that enable us to use science and law in the service of truth and justice.

Carole E. Chaski, Ph.D.
Executive Director
Institute for Linguistic Evidence, Inc.
Georgetown, Delaware

------------------------------

Table of Contents

Preface

Executive Summary

Welcoming Remarks
--David G. Boyd

Keynote
--C. Thomas Caskey

Panel I. Conceptions of Science: Defining the Disconnect
--Joshua Lederberg
--Margaret Berger
--William Gardner

Panel II. Admissibility: The Judge as Gatekeeper
--Sam C. Pointer, Jr.
--Edward J. Imwinkelried
--Myrna S. Raeder

Luncheon Address
--Thomas D. Pollard

Panel III. "Junk" Science, Pre-Science, and Developing Science
--Andre A. Moenssens
--Michael J. Saks
--Carole E. Chaski
--James E. Starrs

Panel IV. Scientific and Demonstrative Evidence: Is Seeing Believing?
--Mark Garcia
--Robert J. Humphreys
--Samuel A. Guiberson
--Ronald Reinstein

Panel V. Jury's Comprehension of Scientific Evidence: A Jury of Peers?
--David G. Boyd
--Neil Vidmar
--Lawrence M. Solan
--Arthur H. Patterson
--Shari Seidman Diamond

Panel VI. Science, Technical Knowledge, and Skill: Who Is an "Expert"?
--Vaughn R. Walker
--Paul C. Giannelli
--Lawrence M. McKenna

Panel VII. Expert Witnesses: Is Justice Ruined by Expertism?
--Barry A.J. Fisher
--Bert Black
--E. Michael McCann

Summary Discussion
--David G. Boyd

------------------------------

Executive Summary

The National Conference on Science and the Law brought together scientists, jurists, lawyers, and academics to foster understanding of science among legal professionals and of the legal system among scientists.
The conference, held April 15-16, 1999, in San Diego, California, provided a forum to examine issues of concern to legal professionals and scientists and to improve communication between the two groups. The meeting was sponsored by the National Institute of Justice, the American Academy of Forensic Sciences, the American Bar Association, and the National Center for State Courts, in collaboration with the Federal Judicial Center and the National Academy of Sciences. Conference speakers explored how conceptions of science work in a judicial environment; the role of judges as gatekeepers for scientific evidence; how to distinguish among junk science, prescience, and science that is currently under development; using technology in the courtroom; juries and how they relate to scientific evidence; and how experts are defined and the effect they have, especially in the scientific arena, as providers of evidence in court. This summary provides a few highlights of the conference. Transcripts of the proceedings follow the summary.

Participants discussed the perceived "disconnect" between science and the law, problems that can arise when the two converge in the courtroom, and ways to promote greater understanding and appreciation of what both disciplines seek to achieve.

One speaker explained that one of the major conflicts between law and science is that "lawyers would like to see science, when it is used in the courtroom, if not infallible, at least mostly accurate, mostly immutable, and certain. That is the very factor that, in the legal mind, makes the evidence also 'reliable.'" "In the scientific community, by contrast, knowledge is forever changing," he continued. "It is adapting; it is sometimes reversing direction, and thereby also advancing. In the process of advancing scientific knowledge, science may also be correcting erroneous conclusions of the past, despite the fact that these now out-of-date conclusions may already have become embedded in our case law as legal principles that are due great deference, if not controlling effect. It's very hard for courts to abandon holdings, rules based on scientific tests--whatever 'scientific' may have meant to a particular judge--that were adopted many years ago, in many jurisdictions, and by some eminent jurists."

Another speaker said that the "source of the disconnect between science and criminal law is that we have not made a sufficient effort as a society to develop rigorously evaluated forensic methods." He said that while there are differences between science and the law, they are not "unbridgeable." "There is important conceptual work to be done to construct these bridges," he said. "But what really needs to be done to connect the fields is empirical research to develop reliable forensic procedures."

Another speaker said he does not see the "clash" between science and law that other conference participants mentioned, and that the criminal justice system should use science more. He said that if the overall quality of expert testimony in criminal cases is to be improved, the focus should be on crime laboratories and ensuring that the laboratories are fully funded and provided with the resources to be run as scientific laboratories. He also said he favors scientific evidence because of problems with other types of evidence such as eyewitness identifications and confessions.

One speaker suggested that a national forensic commission or board be created that could mandate policies and procedures for forensic practitioners.
He added that judges and lawyers also have a responsibility to improve their level of science and technical knowledge as it relates to their professions.

Conference participants discussed at length and repeatedly referred to three U.S. Supreme Court cases that cover admissibility of expert witness testimony--cases one conference participant called the "expert trilogy": Daubert v. Merrell Dow Pharmaceuticals, Inc.; General Electric Co. v. Joiner; and Kumho Tire Co. v. Carmichael. Daubert requires judges to determine if expert scientific testimony is based on sound science before allowing it into evidence. Kumho Tire expanded the scope of the Daubert decision, requiring that any expert, scientific or otherwise, be scrutinized before testifying. In Joiner, the Court ruled that trial judges can specify the kind of scientific testimony that juries can hear.

Regarding using technology in the courtroom, speakers said that during a trial, visuals can enhance the witness and his or her credibility. Diagrams, photographs, and physical evidence can be very powerful and in some ways can overshadow a witness. In their everyday life, jurors are accustomed to visuals. Experts who work well with juries are the ones who can break down their information to make it as understandable as possible to a nonexpert. While computer simulation can be helpful, one speaker cautioned that in some cases visuals can be far more effective if they are simple illustrations of witnesses' testimony (e.g., a crime scene diagram illustrating where the parties were positioned and what route they took, based on their testimony).

On juries' comprehension of expert testimony, one speaker said that while some studies have shown that jurors have difficulty responding to "probabilistic, complex statistical evidence," the literature on the subject "tends to paint the jury as a competent decisionmaker. If the jury is communicated to properly by the lawyers and experts and instructed properly by the judge, it performs reasonably well most of the time."

Another speaker said, "I promise you that I am not going to answer any questions about jurors' comprehension of scientific evidence, and the reason I'm not going to answer any questions about it is because I don't think there's a question. I don't think there's an issue. It's really very simple, which is, sometimes human beings understand things and sometimes they don't. And when they understand it, it's usually because somebody made it clear to them, and when they don't, it's usually because someone didn't make it clear to them."

The tone of the conference was largely hopeful and positive. As one scientist said at the meeting, "When it comes to the law . . . scientists are generally pretty mystified about what you all do. Thus, I think that we have a lot to learn from you and you from us. Scientists wonder particularly about the way the courts handle technical matters. Thanks to meetings like this, these concerns are rapidly being transformed into thoughtful discussion and engagement and, hopefully, action on some fronts."

------------------------------

Welcoming Remarks

David G. Boyd
Director
Office of Science and Technology
National Institute of Justice
Washington, D.C.

Mr. David G. Boyd: The National Institute of Justice is probably best known for its work in police soft body armor, the body armor that you see most police wear. And we're particularly proud of the development of that technology, because it's credited now with saving well over 2,000 police officers' lives.
But we've also had a very long and distinguished history, as the forensic community knows, in the forensic sciences, and in fact, even with a very tiny budget, we've been the primary funders of research and development in the forensic sciences in the law enforcement community over the last several years. Earlier, we did little projects: trace evidence; we funded the initiation of the laboratory accreditation programs and the development of programs to certify the proficiency of lab technicians, and such. And for years, we had a very large forensic laboratory handbook that was in wide use. Coming into the modern era, we still have a forensic laboratory handbook, but it's now on a single CD-ROM.

Our greatest contribution, however, I think, was in DNA, where beginning years ago, back in the 1980s, shortly after British researchers first established its effectiveness as an identification tool, we funded the projects to bring the first of the technology to the States. And over the next few years we were fortunate enough to be in a position to be able to fund all of the initial developmental work in each of the major areas that contributed to today's success of DNA identification technology. And what we've done now, now that we have more money--because Congress finally has provided a significant enough funding base so we can actually begin to fund some serious things--is that we have begun, for the first time, to look, at the request of the forensic field itself, at the scientific foundation of a number of forensic techniques that have been used for a long time. Now I'd like to tell you we did that because we were farsighted, and we really knew this was a serious thing, and we ought to be on top of it. But there was a thing called Daubert, which got our attention in a very big way and caused us to begin to make some serious investments in the very expensive and very painful work of looking at the foundation for each of these.

Now, your work here in this conference today is a critical part of that. And in fact, the breadth of that responsibility, I think, is clear just from the topics that you're going to be covering as you look through the conference agenda and at those who are currently sponsoring it. In fact, it's a very impressive cover: the American Academy of Forensic Sciences, the American Bar Association, and the National Center for State Courts, who are cosponsoring the conference with us in collaboration with the Federal Judicial Center and the National Academy of Sciences.

Today, you're going to look at conceptions of science--that's an interesting starting point. How do conceptions of science work in a judicial environment? At the role of judges as gatekeepers for scientific evidence; at how we might usefully distinguish among junk science, prescience, and science that's currently under development; at the reliability of eyewitness evidence; at how juries look at evidence; and finally, at how we go about defining an expert and what the impact is of experts, especially in the scientific arena, as providers of evidence in court. Now, we have a very impressive list of speakers, beginning with our keynoter, Dr. Caskey, whom you'll hear from in just a moment.
But I think it's appropriate that I start by thanking a number of folks who have done all of the hard work to pull this together, and I'm not going to try to talk about all of the things they've done, but let me suggest that they've been meeting on a fairly regular basis in person and telephonically for some time, fighting through all of the nitty-gritty little details of how to put together a conference like this. Joe Cecil, from the Federal Judicial Center; Carole Chaski, who helped structure the agenda, who is the executive director of the Institute for Linguistic Evidence and has been a fellow at NIJ for some time; Barry Fisher, who is an old friend of the program--not old, but a friend of the program for a long time--who is the past president of the American Academy of Forensic Sciences along with past president of any number of other things, including the American Society of Crime Laboratory Directors, and director of the Los Angeles County Sheriff's Department Crime Laboratory; Anne-Marie Mazza, of the National Academy of Sciences; Tom C. Smith of the American Bar Association; Anjali Swienton, in my own office, who has done a lot of the odd jobs running around to pull things together; and CSR, who actually did the nitty-gritty of getting things printed, getting it put in the right place, setting up the reservations, and all the rest. And so, I'd like to thank all of those people for their hard work, and I'd like to thank you for being here to help us begin to look very, very carefully at this very real issue of introducing science.

We're going to see more and more hard science evidence, because it's increasingly possible for us to detect things. One of the questions you may at some point want to ask yourselves is, How do we determine when we've arrived at a point that we can detect too much in too small quantities, and what does that mean for us?

And so with that, I'd like to turn it over to Dr. Caskey, who is the senior vice president of Human Genetics and Vaccines Discovery at Merck Research Laboratories, and if you haven't read his biography, you really should. He has a very impressive list of awards, and he's a really busy fellow. He is an adjunct professor in the Department of Molecular and Human Genetics, Medicine, Biochemistry, and Cell Biology at Baylor College of Medicine. He's board certified in internal medicine, clinical genetics, and biochemical and molecular genetics. He's received a distinguished faculty award and distinguished service professor award. He's an adjunct professor in the Department of Molecular Genetics and Microbiology at the University of Medicine and Dentistry of New Jersey. How do you do your job as the vice president? At any rate, let me turn it over to Dr. Caskey.

------------------------------

Keynote

C. Thomas Caskey
Senior Vice President
Human Genetics & Vaccines Discovery
Merck & Co., Inc.
West Point, Pennsylvania

Thank you very much. Well, I couldn't turn down the opportunity to participate in this meeting. This is an area of great interest to me but has not been an area of my immediate research activities over the last 4 to 5 years. I have transitioned into the pharmaceutical industry and have focused on drug development. I now realize this amazingly liberal organization has invited a Philadelphia drug dealer to come and address you. [Laughter] I thank you for that opportunity. Today it's fashionable to be sure that you give disclosures from the start of your talk. I work for Merck, a legitimate pharmaceutical corporation.
We're not involved in diagnostics, and we're not affiliated with activities that would relate to the legal or forensic area. We do use genetic markers in our discovery research. There is one area where Merck has made a rather significant contribution to your forensic program. Merck has developed, over the last 5 years, the Merck Gene Index. It is an effort to characterize all the genes of man. The Merck policy has been to make it available to the public in an unencumbered way. That has been achieved through the Merck Genome Research Institute. Thus we have provided the database which is the largest resource for your genetic STRs and SNPs. The second area is my past genetic research, which focused on the STR genetic markers. Our involvement was early in the development of the STRs for forensic application and, at a later time, focused on SNPs. As I look back on this history, I feel Baylor College of Medicine made a mistake, because I applied all of the patent royalties derived from these to the M.D./Ph.D. program in the medical school. If we had thought ahead to the future, we would have applied the income to the law schools and not the medical schools; that's where we needed persuasive power.

I have an intense interest in this application of DNA technology. I now state my interests, not conflict disclosures. I have an intense interest in the public acceptance of DNA technology. I personally feel that it is one of the most important DNA applications. In the court, where (slide 1) participants are not always blessed with truthfulness, DNA provides truth in evidence, which empowers the courts for just decisions. These are my DNA areas of interest.

As one prepares for such a lecture, there are sometimes flashes of your past experimentation and experiences that suddenly come to light out of many years of dormancy. As I reflected, there was a flash of four scientists working in the laboratory at Baylor College of Medicine about 10 years ago. We had been asked to assist in the resolution of what are now referred to as enemy and blue-on-blue casualties that had occurred in the Gulf War. We worked with Robin Cotton in these studies. We worked intensively for no more than 10 days to examine, applying the new STR DNA technology, all the casualty cases that had been submitted and characterized by standard forensic analysis. We resolved all cases submitted to us in this 10-day period and found an error rate of approximately 35 percent in the assignment of these cases by standard forensic methodology. The DNA technology had made its contribution in this situation and pointed the way to the future. It was absolutely clear from this early experimentation effort that we had a handle on a technology that was fast and simple and had a precision for diagnosis that we had not experienced in the area of forensic science. As I look back on my scientific experimentation days and discovery days, I will remember this one with special favor.

(Slide 2.) These are the areas that I would like to cover in my comments today. The first will be detection technologies, which you may hear more about than you wish throughout the meeting, but I feel obligated to cover this with you. I wish to emphasize the areas for wider applications of the technologies and introduce to you some of the issues we are considering in medicine concerning identification of traits which influence individual behavior. I want to bring to reality issues we consider carefully in medicine.
A major consideration is, when do we apply these newly gained diagnostic tools for the benefit of health and, in your case, the benefit of the safety of the population and individual care? These issues condense to the concept of risk-benefit--always a debatable issue and one which this organization needs to consider. Presently I'll make points that illustrate why you need to be worried about it.

(Slide 3.) There are a variety of DNA strategies that have been used in the past for disease gene discovery. They represent a progression of increasing discovery that has occurred in our knowledge of the human genome. Complex repeats were the repeats that were identified by restriction fragment analysis and shepherded by Alec Jeffreys. If you'll remember, he used restriction fragment cleavage of complex repeats to develop a spectrum of changes which could be highly informative for personal identification. The second DNA marker was the simple tandem repeats (STRs), which we've already commented on. The new category of single nucleotide polymorphisms (SNPs), as well as STRs, has been accepted by the courts. An interesting spin on this technology recently emerged from sequence information of whole genomes. Such analysis was applied most recently in the malicious infection of a victim with HIV. In this case, the HIV sequencing identified the origin of the infecting agent, associating a perpetrator with the pathogenic disease that occurred in the victim. I bring this case to your attention because it shows that entire genome sequencing is now accepted in the courts. There may well be other applications that we're not aware of at the present time that would identify malicious agents that are used in crime situations. Certainly the public health spread of TB and terrorist use of germ warfare are additional possibilities.

(Slide 4.) Let's start with complex repeats, the technology of Alec Jeffreys. I cannot pass up the opportunity to remind you of what an incredible advancement was provided to us by Alec's insight into the utility of these complex repeats. These were satellite sequences that he had been studying as a research tool when the Leicester rape/murder cases came to his attention. He was located at the University of Leicester. And there were two conclusions that emerged from those studies which impact not only the technology's acceptance but the application of the technology. It's remarkable that both occurred in the very first application. You'll remember an individual in ill health came forward and volunteered himself as the perpetrator of the two crimes. While there were no witnesses to the crimes, the confession to both crimes could have closed the case. Alec relates that the officers were satisfied that a resolution of the case had occurred. Alec insisted on applying his new technology to confirm the confession. His studies quickly proved this was not the person who committed the crimes. Thus the very first DNA test that was applied excluded a person from the crime. The true perpetrator of the crime was concerned that this technology might, in fact, identify him as the murderer/rapist! It was a small village from which the scientist proposed searching for the perpetrator. The perpetrator persuaded one of his drinking buddies to submit his DNA in place of his own. You can call that sample switching or you can call it confusion. The point is, there was going to be a sample analyzed wrongly in the case, and it would clear the individual who committed the crime.
Good detective work actually revealed the plot; DNA diagnosis was correctly applied in the case, identifying the murderer. This one case said so much about the future of the field.

What are some of the features of the current technology? It is highly informative; it is gel based; it's semi-automated for pattern recognition and matching; and you can develop mathematical algorithms to determine its characteristics. It is not very adaptable to PCR technology. Some people may argue that you can take elements of this and apply it, but basically, it's a gel-based pattern matching system and, therefore, has complexity for presentation in the courts. It also has a limitation in many crime scenes because of the amount of DNA material that is available for analysis. This is beautiful early technology.

(Slide 5.) Let's examine the simple tandem repeats. Simple tandem repeats are highly informative. They are PCR requisite and therefore extremely sensitive. We were able to show these STRs could be multiplexed with quite good fidelity to increase the power of informativeness. The analysis of multiplexed STRs provided a powerful informative (match) number from a limited number of reactions. STR analysis is automatable. With the application of the Perkin-Elmer automated DNA sequencing instrument, one can, with software packages, quickly obtain the matching information or mismatching information with high automation. So it's taking more and more technicians out of the process and, therefore, reducing the likelihood of human errors. STRs are degradation insensitive. The amplified targets are very small, only 300 to 500 base pairs, so a substantial amount of DNA degradation can be tolerated, since only trace amounts of intact DNA need be amplified. STRs add a sensitivity and power to detection (matching) that is very impressive and currently superior to all other genetic matching techniques.

We became involved in STR development quite fortuitously. It's fun to reflect on the discovery. We had carried out the very first automated DNA sequencing on a human disease gene (Lesch-Nyhan, HPRT) with the group at the University of Heidelberg. We identified a tetrameric repeat and found it to be polymorphic. This stimulated our database searching to explore how common these genetic markers were in the genome. One of the first to be discovered was a CAG repeat. It was found in the androgen receptor. And you can see that when we began to characterize the population distribution of the number of repeat units of the CAG triplet repeat that it was really broad. Furthermore, as we looked from population group to population group--and we just did simple analysis in this illustration of individuals declaring themselves to be Caucasian, Black, or Hispanic--you could see population variation in the frequency. We established that the marker was highly informative and that one would need sufficient databases to be able to draw conclusions with regard to the significance of any match between a crime scene specimen and a suspect. There were many others discovered by this approach. I wish to illustrate the growth of the STR database. There were more than 1,500 STRs readily found at the time this slide was prepared 4 years ago. This is the contribution of the Merck Genome Institute to this objective. The repeat spectrum of simple triplet repeats is both frequent and varied. They're spaced at probably every 200,000 to 500,000 base pairs.
If you take into account the 3.0 x 10^9 base pairs of the human genome, they are abundant and highly informative.

I wish to make a point about STR stability. This was an issue early in the discussion of STRs. We had the opportunity to gain some experience from the study of human heritable diseases. There are now eight human heritable diseases that are the consequence of expansion of triplet repeats (STRs). They're all neurologic diseases. Furthermore, these diseases have the feature of anticipation. Anticipation describes the disease progressing in severity and frequency within a family, generation to generation. The basis for anticipation was discovered by association of the expansion of the triplet repeat, generation to generation, in fragile X and in myotonic dystrophy. The more severe the disease, the larger the triplet repeat. We documented the polymorphic variation of STRs and their cause of anticipation in the human heritable diseases. STRs can be unstable genetic elements. They are not unstable to the point that it limits utility. They are extremely useful if STRs are used within a defined range of triplet repeat. Generally, a repeat below 36 to 40 is stable. Once you exceed that number, instability becomes evident. This technology has been applied so extensively that we know that even when you apply the size criterion, mutation in STRs will rarely be observed.

This is a dramatic representation of the difference in the STRs of a patient who had virtually no symptoms of myotonic dystrophy but gave rise to a son who had extensive myotonic dystrophy at the clinical level and had a tremendously expanded triplet repeat. If you look in the blue at the size of the triplet repeat found in his bloodstream, you can see some evidence of the instability. These are all STRs just from his blood. But look at the STRs from his sperm. Thus his progeny had a tremendous expansion of the triplet repeat, and he could bear virtually no offspring that were not affected with disease. So why do I tell this story? One, STRs are highly stable if you choose the right ones; even the ones you choose properly will occasionally have expansions and mutations. These are DNA replication errors that occur most commonly in germ cells, not somatic cells. Thus identity matching has tremendous accuracy.

I don't expect you to look at all the details of this slide. This is the application of the STR technology on the Perkin-Elmer 377 instrument. It can detect an STR that has a unit length difference of one or two base pairs within a triplet repeat and is therefore a unique marker for that repeat over a range of 500 to 700 base pairs. In addition to the STR polymorphisms, it can detect unique variations in an STR repeat that occur for a single individual. Such variation provides incredible precision. With automation, the cost has decreased, the throughput has increased, and analysis is automated. STRs are definitely the technology that will dominate the field for years to come. It's simpler, faster, more precise, and more easily controlled with internal controls.

Let's now discuss single nucleotide polymorphisms. They can be highly informative. (Slide 6.) What is meant by single nucleotide polymorphisms? Single nucleotide polymorphisms are single base pair alterations differing from individual to individual, and they are generally bi-allelic. The informative power of each SNP is limited compared with the informative power of the triplet repeat (STR), which has multiple alleles (repeats). What is the advantage of the SNP? The SNP variation is frequent.
I've given you estimates on the STR. Every 500 base pairs will have a single nucleotide polymorphism. It might be at a frequency of 0.1, 0.001, or, ideally, at 0.5. They will not exceed 0.5. By judicious selection of an inventory of single nucleotide polymorphisms and multiplex amplification of PCRs, the informative character can be made very powerful (the sketch below illustrates the arithmetic). It is a critical factor in this application that automated analysis be developed. The preferred method at present is the DNA chip. It will be the automated DNA chip analysis that will win the day. Degradation of DNA can be overcome by PCR since these are small, 300 to 500 base pair elements. This technology has already been accepted in the courts, with a very early entry, the DQ-alpha kit assay. It will be possible to have single nucleotide polymorphisms for every gene. There's no difficulty in this objective since genes are generally between 5,000 and 50,000 base pairs and the frequency of SNPs is 1 in every 500 base pairs--that is, roughly 10 to 100 SNPs per gene. This is a powerful genetic reality for medicine which allows detection of disease. Such a risk association is shown in the chip data related to cystic fibrosis carrier detection. Apex technology uses oligonucleotides corresponding to the gene of interest, fixed to the chip. The patient sample is hybridized to the chip target. All four bases are read by color, and diagnosis of the SNP is made.

I would now like to share my thoughts on broader applications of the technology. I can remember in the early days there was a reluctance to apply the technology unless you had identified a perpetrator. The perpetrator was typically incarcerated when we carried out the DNA analysis. DNA analysis is now being used as an investigative tool. The first occasion it was used in the Houston area as an investigative tool was in the bandana rapist cases. These events were occurring on the west side of Houston, where 23 rapes were committed in a small area by a rapist who wore a facial bandana. The investigators were searching for a single perpetrator. Our DNA analysis on the cases indicated five perpetrators. The investigative groups changed their focus from a single individual to five rapists. The DNA investigative tool gave investigative guidance.

There is ample data to suggest a small number of individuals commit the majority of crimes--an ideal application for DNA investigations. The striking example would be terrorists, who are small in number, create great havoc, and frequently operate at distant locations. Thus we may be looking for a very, very small number of individuals within that society who have a high repeat rate. Aggressive use of DNA as an investigative tool would be helpful to such investigations. Toward this utility it would be convenient and extremely productive to improve the efficiency of the DNA investigation at the Federal, State, and urban levels. Let me expand. (Slide 8.) There are two types of databases you could share. The first would be databases on DNA variation in populations. For example, the population in California may differ from the population in Texas, and both differ from the population of the Northeast. The ability to compare databases develops a greater confidence in identification accuracy. The second database is more sensitive and involves sharing the DNA databases for convicted perpetrators and unsolved crime events. Such would allow identification of a single perpetrator who commits crimes throughout the United States.
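To give a rough sense of the informativeness arithmetic mentioned above, here is a minimal sketch in Python (not part of the talk). The allele frequencies are illustrative assumptions, not measured values: an ideal bi-allelic SNP at frequency 0.5 and a hypothetical STR with eight equally common alleles, with loci treated as independent under Hardy-Weinberg equilibrium.

from itertools import combinations
import math

def locus_match_prob(allele_freqs):
    # Probability that two unrelated individuals share a genotype at one
    # locus under Hardy-Weinberg equilibrium: the sum of squared genotype
    # frequencies (p^2 for each homozygote, 2pq for each heterozygote).
    hom = sum((p * p) ** 2 for p in allele_freqs)
    het = sum((2 * p * q) ** 2 for p, q in combinations(allele_freqs, 2))
    return hom + het

snp = locus_match_prob([0.5, 0.5])         # ideal bi-allelic SNP: 0.375
str_locus = locus_match_prob([1 / 8] * 8)  # 8 equifrequent alleles: ~0.029

panel = str_locus ** 13                    # a 13-locus STR multiplex
print(f"per-SNP match probability: {snp:.3f}")
print(f"per-STR match probability: {str_locus:.3f}")
print(f"13-locus STR panel:        {panel:.1e}")
# Ideal SNPs needed for comparable power (about 47 on these assumptions):
print(math.ceil(math.log(panel) / math.log(snp)))

On these assumptions a single such STR carries roughly the information of three to four ideal SNPs, which is why each SNP is individually weak but a judiciously selected, multiplexed panel of SNPs can still be made very powerful.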
The computer can be used as an investigative tool nationwide to be able to associate a crime with, or detach a crime from, other similar crimes in different jurisdictions. There are two types of databases that I think would be extremely useful in this research: 1) convicted criminals and 2) unsolved crimes.

I wish to make a comment on multiple DNA methods. I recognize there is an effort to standardize the methodology for the crime lab, thus permitting sharing of databases. We have found in the Human Genome Initiative and also in medicine that the software writers are, in fact, quite agile in their ability to handle different databases. Such would drive various technologies toward a single high-utility database. What occurred in the Human Genome Initiative is that scientists adopt the best, simplest, cheapest, and most accurate methods. Thus in the early days, rather than have arguments about which DNA method has the most advantages, the software writers can provide data for comparisons. The scientists quickly start hitting the most useful databases. Thus we have seen genome science databases closed down because they were replaced by more highly useful databases. Thus, in your debate, do not be stymied by the competition. The software writers will allow you the opportunity to test those programs that are most useful.

I wish to put forward for your consideration wider applications of the technology for convicted criminals. I think about this application frequently since my home State of Texas has significant numbers of convicted individuals whose trials preceded DNA testing. These cases frequently are based on circumstantial evidence. There should be an initiative from this forensic community to demand retrospective DNA analysis on these cases. Capital cases would have high priority.

I'll relate one story out of the Houston area that speaks to such a need. There was a rape case where the victim was considered very intelligent and quite observant and thus a reliable witness. However, having failed to identify a suspect from several lineups, she was taken home by the investigating officer. As they were driving out of the police station they passed a used car lot. With great assurance, she identified a salesman standing out front as the rapist. The court moved forward with the charges against the suspect based on the eyewitness identification. The investigating officer suggested that DNA technology be used to determine the association of this individual to the rape. We rapidly excluded this individual and obtained his release. Thus, rather than being incarcerated for several months, he was cleared of the crime within days and returned to his family. That's a successful case. You can be assured there are wrongly accused individuals incarcerated. There is a need to apply this technology not only prospectively but retrospectively to ensure fairness in the law. I'm constantly reminded of this gentleman from Texas because DNA analysis led to his freedom. The incarcerated deserve the same technology application.

I feel we should supplement the databases. Let me expand. I would first suggest the military Desert Storm analysis be carried out retrospectively, because the military had not collected DNA samples on the troops. Today DNA samples are stored but not analyzed. Such databases would enhance criminal investigative accuracy and utility. Let's examine other individuals who might be in high-risk categories: auto license owners, pilots, police officers, and firefighters.
These are high-risk individuals where DNA identification would have utility. Let me fully expand. We need this technology for a database that permits searching on all individuals, much as we now use fingerprints. The fear that such a database could be tested is ridiculous. We should persuade law enforcement agencies to embrace this technology to the point where there is comfort that their data is used as a source of investigations on every crime committed in the United States, using DNA methods. Medical licensees are a second category to test. Also lawyers' bar licenses, government employees' Social Security identification, and newborns could all contribute to the database. We should move toward this objective of a universal technology and a complete DNA database in the United States. (Slide 12.)

At present we collect DNA on all newborns in the United States for health reasons. The risk/benefit is obvious for the newborn since it alerts physicians to the need for early intervention therapy. We are willing to submit newborns' blood samples for the DNA database and investigative purposes. Thus DNA testing for many treatable diseases is standard practice. Can forensics be justified in the same manner--benefit to society?

The Human Genome Initiative will expand these opportunities. We are now able to undertake complex traits such as hypertension, diabetes, and depression, to name a few. But one of the areas this group needs to consider is behavioral traits. These include alcoholism, drug abuse, depression, and schizophrenia. And for each of these four diseases there are identified genetic loci which serve as genetic risk markers. In the future we will know which genes affect behavior. At a conference this last week (slide 13) devoted to manic depression (bipolar) disease, it was stated that the criminal incarceration rate was elevated by as much as a factor of 5 for individuals who carry this diagnosis. Thus, there is no doubt that some genes we are investigating for medical purposes could, in fact, impact the area of criminal behavior. At present it is acceptable to develop drugs for bipolar disease, for example, that intervene in its pathology. Imagine you are able to establish that there is a clear association in incarcerated populations--that depression is a risk factor for abnormal behavior, that is, a risk factor for incarceration. This could lead to modification of criminal therapy. (Slide 14.) Thus the ongoing research on human heritable diseases is relevant to criminal risk traits. The future for identification of criminal behavior traits is promising. Such identifications allow tremendous options. There are social and behavioral traits that now are being investigated. These include child abuse and rape. We could all probably agree on identifying such genetic risk factors. Others are nonmedical, but societal, such as parking ticket offenders, software copiers, and computer virus designers. I add a few of these newer asocial behaviors for your consideration.

I wish now to make some recommendations that relate not to the science but to trial and fairness issues. (Slide 15/16.) I really feel strongly that we should provide incarcerated persons state-of-the-art DNA studies. This would be particularly important for capital crimes where the death penalty applies. A forensic review could be conducted by an expert panel to determine whether there was any evidence available that would allow application of the new DNA technology retrospectively.
I feel we have an obligation to provide DNA evidence not only in new trials but also for individuals who are already incarcerated.

We need to evaluate the DNA methods for wide acceptance of a uniform method. I'm suggesting here the equivalent of the Guthrie method of newborn analysis. We have achieved this in medicine. It can be achieved in forensic science. The STR technology will satisfy the forensic demand for the next 10 years. Over the next 10 years, the SNP technology will improve. I predict you'll see SNP technology come into its own on the basis of economy and precision.

There is a need to establish rules on use of technology. I've used two examples there. One is serious and the second frivolous (the airline multiple booker). The limits need to be set. To implement genetic research studies on antisocial behavior is a controversial item. I remind you of the medical efforts presently ongoing in bipolar disease and schizophrenia. The purpose of this point is to illustrate that in the future individuals might benefit from FDA-proven therapeutic agents which could alter asocial criminal behavior.

I'm frequently reminded in this research of the research of Jasper Rine. Jasper studies (slide 17) dogs. The genetic variation in behavior of dogs is large. All dogs derive from the wolf. We recognize that a cocker spaniel, a pit bull, and a retriever differ. Jasper would argue that these dogs have been bred for their behavioral traits. Man is outbred. But his point is, there are, in fact, genetically determined traits that determine the characteristics of derivatives of the wolf. Some behaviors are extreme--I think frequently of the famous line of Flip Wilson's Geraldine, "The devil made me do it." Behavior patterns are driven by endogenous elements--genes--that we do not fully understand at this time. We will have this understanding in time, however, and we must be wise in its application to forensic science. Thank you very much.

Slide 1. DNA TRUTH

Slide 2. DNA Variation
--Detection Technologies
--Wider Applications
--Disease (Behavioral) DNA Markers
--Rules for Presymptomatic Disease (Behavioral Diagnosis)

Slide 3. DNA Variation
--Complex Repeats
--Simple Tandem Repeats
--Single Nucleotide Polymorphisms
--Genome Sequence

Slide 4. Complex Repeats
--Highly Informative
--Gel Based
--Semi-Automated
--Pattern Matching
--PCR Incompatible

Slide 5. Simple Tandem Repeats
--Highly Informative
--Highly Automated Gel Analysis
--Multiplex Analysis
--PCR Requisite/Sensitive
--Degradation Insensitive

Slide 6. Single Nucleotide Polymorphisms
--Highly Informative
--PCR Requisite/Sensitive
--Multiplex Analysis
--Multiple Detection Methods
o Automated DNA Chip Analysis
o Automated Gel Based Analysis
--Degradation Insensitive
--Accepted in Court

Slide 7. Genome Sequence
--Mitochondrial DNA
--Nuclear Genes
--Infectious Agent DNA

Slide 8. Prospective Wider Applications
--Crimes Associated with High Recidivism
o Rape
o Child Abuse/Molestation
o Assault
o Murder
o Terrorist

Slide 9. Prospective Wider Applications
--Shared Databases
o Federal
o State
o Urban

Slide 10. Prospective Wider Applications
--Databases
o Convictions
o Unsolved
o Multiple Methods

Slide 11. Retrospective Wider Applications
--Convictions Based on Circumstantial Evidence
o Rape
o Child Abuse/Molestation
o Assault
o Murder

Slide 12.
Vision for Implementation
--Supplement Fingerprint Files
o Military*/Auto, Pilot, and Gun Licensees
o Law Enforcement Agency Employees
o Medical Licensees
o Lawyers Bar Licensees (Prosecutors Only)
o Government Employees
o Social Security Identification
o Newborns
*In Place

Slide 13. Disease DNA Markers
--Transplantation Database
--Newborn Screening-Inborn Error of Metabolism
--Hypercholesterolemia
--Colon Cancer
--Breast Cancer
--Sickle Cell Anemia/Thalassemia
--Cystic Fibrosis
--Triplet Repeat Diseases, Neurodegenerative

Slide 14. Behavioral/Neurological/Affective Disease Research
--Schizophrenia
--Manic Depressive
--Ethanol/Drug Addiction
--Autism
--Migraine

Slide 15. Anti-Social Behavior Traits, Genetic and/or Acquired
--Rape
--Child Abuse
--Violent Behavior
--Parking Ticket Offenses
--Software Copying
--Designing Computer Virus

Slide 16. Recommendations
--Apply DNA Methods to Prisoners Incarcerated on Circumstantial Evidence Where Forensic Materials Are Available.
--Apply DNA Methods to All Rape/Child Abuse/Assault/Murder Cases as an Investigative Method.
--Expand Database Sharing Regardless of Method of DNA Analysis, i.e., Access to DNA Information on Convicted Individuals and Active Cases.

Slide 17. Recommendations
--Evaluate DNA Methods for Acceptance of a Uniform Method, i.e., the Guthrie Test for Identification.
--Establish Rules of Usage Which Are Acceptable to the U.S. Public, i.e., Missing Children Identification or Airline Multiple Booker.
--Implement Genetic Research Studies to Investigate Genetic Predisposition to "Anti-Social" Behavioral Traits.

------------------------------

Panel I. Conceptions of Science: Defining the Disconnect

Moderator:
William Gardner
Associate Professor of Medicine and Psychiatry
University of Pittsburgh School of Medicine, Montefiore University Hospital
Pittsburgh, Pennsylvania

Panelists:
Joshua Lederberg
Sackler Foundation Scholar
The Rockefeller University
New York, New York

Margaret A. Berger
Suzanne J. and Norman Miles Professor of Law
Brooklyn Law School
Brooklyn, New York

Mr. David G. Boyd: Now, we turn to our first panel, and I love the title of this one. Professor Gardner is going to lead a panel that includes Joshua Lederberg and Margaret Berger in "defining the disconnect." Professor Gardner?

Dr. William Gardner: Thank you very much. We will proceed by each speaking 20 minutes on this topic. Then we will have a round of rejoinders and comments on each other's talks. Our first speaker will be Professor Joshua Lederberg, past president of The Rockefeller University, research geneticist, and a Nobel Prize winner. Dr. Lederberg?

Dr. Joshua Lederberg: I appreciate the opportunity for this presentation. I have an opportunity to learn a lot. I've already learned a good deal from my discourse with Dr. Gardner and Margaret Berger, and we may have refined out a lot of the areas of incomprehension. You may be seeing less of a quarrel than I would have guessed before we got started. My comments are going to be much more abstract than Dr. Caskey's enormously informative presentation, but I will have some prescriptions at the end, and so, bear with me on that. But let me also say that I encountered a book just 24 hours ago; I had the opportunity to read it on the airplane. It's by Kenneth Foster and Peter Huber. It's called Judging Science. I read it through on the airplane coming over here, and I found that, in fact, it embodied, in considerable confirmatory detail, almost all of the perspectives and remarks that I'm about to make.
I assure you I did not intentionally plagiarize it, but it's going to sound that way as I go through my remarks, and I do commend it to you very strongly. The culture of law and the culture of science do converge in seeking truth, but for science, this is an end in itself. For the law, this quest is but part of a machinery that aspires to social harmony; to consensual acceptance of pragmatic justice. The adjectives are all important. Science often invites protracted controversy. Justice is often quietistic and, for example, by negotiated settlements, may even enforce the nondisclosure of truths. Truth is then an interest subordinate to that of quieting conflict. This may be particularly troublesome when third-party and public interests may even be excluded from such disclosures. We are also acquainted with the concept of legal fictions. The Encyclopedia Britannica describes this as "a rule, assuming as true, something that is clearly false. A fiction is often used to get around the provisions of constitutions and legal codes that legislatures are hesitant to change or to encumber with specific limitations. Thus, for a legislature, it is easier to turn back the official clock from time to time than to change the law or the constitution." In my community, we would be quite troubled about scientific fictions of that ilk, like changing the clock. But not to belabor the point too far, it is my understanding that established legal fictions are beyond questioning by scientific experts or jurors in the courtroom. Now, I have read a little about the history of fictions, and I understand their constructive role, to put it boldly, in empowering judicial legislation. But I also have to remark how the prevalence of such fictions mystifies the lay and the scientific onlooker and may be one of the most important reasons that it is hard for outsiders to understand what the law really means when it is, in fact, so pervasively penetrated by these fictional constructions. If there was ever an example of the social construction of reality, this would be it. Science's principal role is the discovery of generalizable truths: the laws of nature, of greatest use in predicting the consequences of future acts or in postdicting complicated paleological or historical or cosmological data. The law is most often concerned with establishing the facts or inferences about concrete, historic events: who done it? But that ascertainment may depend on the application of scientific laws, which is why we are here. Scientists might find themselves relating to law in any of several roles, principally as expert witnesses, the commonest zone of intersections; as jurors, tasked with the weighing of evidence; or as an aberration, even as defendants, possibly charged with fraudulent manufacture or concealment of data or of meretricious interpretation or plagiarism. These do not especially concern us here, but I only bring it up to point out that, in fact, the code of science is generally far stricter than the criminal law, and questions have been raised about the appropriateness of due process against alleged infractions on the part of offices of research integrity of Federal granting agencies that go far beyond what would be permitted in conventional judicial proceedings. By the way, I applaud that strictness of the code; don't misunderstand me. 
To go back to my list of roles: as an expert witness, a scientist is uncomfortable playing the hired gun, with the understanding that one could probably find an expert with formally acceptable qualifications who will deliver whatever opinion anyone wishes--anyone, that is, who is in a position to pay enough for it. And the public interest may suffer from the courts having limited access to disinterested expertise, in contrast to what is brought forward by the adversarial process. For my part, I have refused involvement as an expert when I was cautioned that I could only respond to questions raised by counsel; that it was not my responsibility to volunteer information no matter how relevant I felt it was, especially that which might be adverse to the interests of my employer. My scientific code enjoins me to reveal all the evidence, and especially that which might diminish the claims I was trying to assert. This is my expectation in my scientific discourse with every one of my scientific colleagues and competitors: that they will go out of their way to inform me of what might be potential flaws in their argument; they will eventually be found out anyhow, and we have no sensible exchange unless we operate by that shared ground rule--not perfectly enforced, but enforced pretty well. In fact, retrospectively, great historic figures have been criticized for publishing only the experiments that seem to have worked according to their prior expectations, and that goes back even to Gregor Mendel and to Robert Millikan's historic experiments, at a time when our criteria of statistical balance and insight were less finely refined than they are today.

As advisors to public regulatory, procurement, or quasi-judicial bodies, scientists are subject to criminal penalties if they do not disclose their conflicts of interest. Adversaries are expected to be thoroughly interested--evidence that in most settings, scientists are expected to play a disinterested role. This now applies even in many journals: as a condition of acceptance of research papers, we must disclose what might be financial or other interests. Besides disinterestedness, as pointed out by Robert K. Merton, our great sociologist, universalism, the sharing and publication of data, and a system of organized skepticism are cardinal features of the scientific process and the key to its efficiency and authenticity. Of these, universalism--equal standing, regardless of personal origins--is one further norm shared between the scientific code and the U.S. Constitution. The others are hard to achieve in the judicial context. Expert testimony is, in principle, not secret, but in practice, it is rarely subjected to the same scrutiny as published articles that are part of scientific discourse. Its authenticity, then, depends solely on the competence and integrity of the witness and should not be confused with the weight of scientific authority, which can only be properly invoked when it is subject to the process of critical discourse throughout the entire scientific community. Skilled adversarial counsel, as an alternative, could impeach the testimony of a specific witness, but this continues to give advantage to those who can afford to buy those skills. These imperfections apply to scientific judgments by any expert body outside the courtroom or in it, left to the judgment of the expert without the validation of critical external discourse. New technological tools like the Internet may make feasible new forms of publication that enhance that organized skepticism.
I do not know of any disinterested retrospection on the quality of expert testimony in run-of-the-mill cases. I don't know how much there is, really, to worry about. There has not been that kind of analysis, as far as I am aware, which might help us decide how much to invest in ameliorating the system. There have been many suggestions directed to giving courts access to independent expertise, and I'm sure there will be much discussion about this at the conference. To turn to another role, not much talked about: as jurors, in fact or in prospect, scientists may be in the sharpest confrontation with judicial workings. Here, I find the rules of evidence the most troubling, however indispensable they are for justice. The exclusion of evidence obtained by unlawful search has plainly obstructed truth-finding in many celebrated cases. It also undoubtedly has encouraged reform in police practice. Less comprehensible to me is the exclusion of ancillary information, like prior arrests and convictions, regarded as prejudicial to the defendant, as if the juror is unable to exercise his own critical judgment about matters that are not directly probative. Admonitions to jurors--you see this in TV presentations of courtroom trials--to disregard testimony inappropriately conveyed, although they have heard it, lead to a kind of internal hypocrisy, a mental gymnastics most would find impossible to perform. Above all, on the part of the juror, to be barred from asking questions directly would cut against the grain of career-long investigative experience in cutting to the chase and solving complex problems, a skill in which scientists can claim some established expertise. But no sensible attorney on either side is likely to find the scientific temperament acceptable to their conception of what they seek in a juror. Remedies: I've already discussed one; I want to repeat it: peer discourse. And by this, I mean far beyond peer review, far beyond the initial gatekeeping that's involved in getting papers published in a journal. That's only the first step in the process. The important function of peer review is publication: the work is out there. It is there for your friends, your critics, your adversaries, the whole world to examine the texture of your argument and its feasibility, and there is an ample basis for rebuttal, and that is how scientific progress is, in fact, made. We've seen an outstanding example of this phenomenon, probably unprecedented in forensic history, in the way that the quality of DNA evidence and the necessity for precautions and so on have been very thoroughly debated in the scientific and technical community. We would be much less troubled if the same degree of attention were given to other complex litigation in which scientific issues are raised. But my other recommendation is the redefinition of expertise. I would say the expert is the person who could be reasonably regarded as possessed of the integrity and the wit to understand and articulate the current state of knowledge on a given topic and particularly to give a balanced account of current controversy. Thank you very much. Dr. William Gardner: Our next speaker will be Professor Margaret Berger, who is a professor at Brooklyn Law School. She is a noted expert on the law of evidence. Dr. Margaret Berger: Thank you very much. I greatly appreciate the invitation to be on a program with such eminent authorities, and I will try to pick up on comments that Dr.
Lederberg made with my take on the differences between science and the law. First of all, as Dr. Lederberg also said at the beginning, science and the law, of course, have very different goals. He spoke of the goal of the law as quieting conflict, and I would not agree with that completely. I would say that the essence of the law is doing justice, and I think that doing justice and science become incompatible, or at least have problems with each other, at various points. The insistence on justice means that in deciding how to handle a scientific issue, the law at times will take account of factors that are simply irrelevant to a scientist, and the resulting determination may be viewed by the scientist as antithetical to good science. And that is really, I think, in many instances, because the scientist does not realize that there are extraneous policy objectives that dictate that decision and that the determination does not rest on scientific grounds at all. The best available scientific solution is not always compatible with policy concerns grounded in achieving justice. Let me start, since we just heard so much about it, with DNA evidence. Certainly, we all know, after listening to Dr. Caskey and what we can see as the developments in the field, that it's only a question of time before experts in cases involving DNA used for identification purposes are not going to speak any longer about the probability of a random match. There is going to be so much evidence available that the law will clearly decide at some point that everyone's DNA profile is unique, and the expert is going to be able to speak, once the law decides on what that definition of uniqueness is, not about the probabilities of a match but about the fact that if there is a match, the two samples being tested come from the same source. I think that we all know that that moment will probably come in the fairly near future and that it is going to be a triumph of scientific endeavor. From the standpoint of the law, however, that scientific achievement might possibly have undesirable results. One possibility is that the police efforts directed to solving a crime will focus more and more exclusively on simply finding crime scene biological samples that can be tested for DNA and less and less on finding traditional kinds of evidence, an endeavor that may be boring, dangerous, and time-consuming. Why would this be problematic, given the strength of DNA evidence? One reason is that we know that deliberate and inadvertent error will creep into any human endeavor, and other kinds of evidence would act as a check on the reliability of DNA results and keep erroneous results from creeping in. Furthermore, we probably don't want the police to lose skills they need in cases in which DNA evidence is not going to be available. If, ultimately, lawmakers see a link between inadequate police work and ever-increasing reliance on DNA typing, they might react by finding that the loci at which testing will be done should be limited, so that the probability associated with a match would not alone suffice for a conviction. Now, I'm not saying that this will happen or that this should happen; I'm just suggesting that science can lead to results in the nonscientific world, in the real world, that the law at times will confront by making a policy determination that is not going to be compatible with the best scientific solution, simply because the spheres of the two disciplines are so very different.
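[Illustration: the probability language in this passage can be made concrete with a minimal sketch, in Python, of the forensic "product rule": the random match probability is the product of the genotype frequencies at each independently inherited locus tested. The frequencies below are invented for illustration and are not drawn from any real population database.]

    # Minimal sketch of the forensic "product rule." The random match
    # probability (RMP) is the product of the genotype frequencies at
    # each independently inherited locus in the profile.
    # NOTE: these frequencies are invented for illustration only.

    genotype_freqs = [0.10, 0.05, 0.08, 0.12, 0.07]  # one entry per locus

    rmp = 1.0
    for freq in genotype_freqs:
        rmp *= freq

    print(f"RMP across {len(genotype_freqs)} loci: {rmp:.2e}")  # ~3.4e-06

[Each added locus shrinks the match probability geometrically; once it falls far below one over the population of the earth, the temptation is to call the profile "unique." Where to fix that threshold is, as the speaker notes, a legal decision rather than a scientific one, and limiting the loci tested would simply truncate this product, leaving a match probability too large to carry a conviction by itself.]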
The same kind of thing can happen with toxic tort cases. Let me give you a hypothetical. Imagine that epidemiological studies indicate that persons who are exposed to the defendant's product at work are at a greatly increased risk of developing a particular disease that is not, however, a signature disease. Let's say that the product was made in a number of different formulations, and additional epidemiological studies plus animal studies plus any other kind of study that can be done--in vitro studies, whatever--strongly point to one particular formulation. Let's call it formula X. That's the culprit. Workers exposed to defendant's formulas A, B, and C don't seem to be at any more of an increased risk for the disease than the population at large. Let's say the latency period for this disease after exposure to formula X is over 20 years. Consequently, with regard to most workers, records are no longer in existence showing which formula they were exposed to. All they can prove is that they worked in a place where defendant's product was in use, but they have no evidence to prove which formula was the one to which they were exposed. If plaintiffs who are now trying to bring actions against the defendant must prove as part of their case that they were exposed to formula X, then a large number of them will not be able to meet this burden. Even though, from a scientific standpoint, only defendant's formula X is implicated in causing excess disease, the law might, perhaps, be concerned about defendants getting a windfall with regard to all these people with the disease who cannot prove their exposure to formula X, and these are, after all, persons who never had the relevant records in their possession. Any records that existed were probably in the possession of defendants or third parties. Under these circumstances, who should bear the risk? Again, this is really not a scientific issue but a policy question. If the courts relieve plaintiffs of proving which formula they were exposed to, scientists may read these decisions as another instance of the law ignoring science. They know that defendant's formulas A, B, and C don't cause harm, but again, this would simply be another instance of the law deciding that justice requires something other than the best scientific result. A second difference between law and science, which, again, is a difference that Dr. Lederberg alluded to, is that science is interested in generalizable truths, and the law is interested in the specific fact. The conflict between the two that arouses the most controversy undoubtedly occurs when science and technology are used to recreate the truth of an event that occurred in the past. In such a situation, the person trained in the law immediately begins to pick away at the generalizations of science in light of the particular facts of the case. The lawyer sees every case as potentially an exception to the general scientific rule. The lawyer's nitpicking questions may be of little interest to the scientist, who is interested in the big picture and who deals with case-specific contingencies by means of an error rate, but to a criminal defense lawyer, the possibility of error signals reasonable doubt. Furthermore, scientists may misconstrue the nature of the lawyer's attack. For instance, some scientists undoubtedly thought, in the early days of DNA evidence in the courts, that ignorant lawyers simply did not understand the basic theory of DNA evidence.
But if you look at some of those challenges--there may have been some lawyers falling into that category as well--you also had lawyers who were raising issues about DNA that simply had not been of any significance when work was being done in the laboratory. For instance, the accuracy of DNA testing when the evidence had been degraded by having been buried underground for 2 years is simply not something that the scientist doing research projects in the laboratory had to worry about. Now eventually, of course, when certain kinds of questions keep coming up in court and are used to attack an expert, additional research may be done. But case-specific kinds of questions have a habit of arising about issues about which there simply has been no research. Now, the law's insistence on not ignoring the facts of the particular case being litigated can be seen in the Supreme Court's latest opinion on the admissibility of expert proof. One advantage of being early on the program is that I get to be the first person to mention the Supreme Court's latest decision in the Kumho Tire case, which I'm sure we will be hearing a lot more about before this program ends. It was decided on March 23, 1999, and in Kumho Tire Co. v. Carmichael, the Court considered the admissibility of an engineer's expert testimony that he could tell that the tire on plaintiff's minivan had blown out as a result of a manufacturing or design defect. The trial court initially rejected the expert's intended testimony because his theory had not been assessed pursuant to the four factors the Supreme Court had identified as bearing on reliability in its 1993 opinion in Daubert, a case which purported to deal with the admissibility of scientific expert testimony. The four factors which the Court had discussed in Daubert were 1) whether the theory had been tested; 2) whether it had been subjected to peer review or publication; 3) the error rate connected with the theory; and 4) the degree of acceptance of the theory in the relevant scientific community. In the Kumho case, the plaintiff asked for reargument, and it was granted by the trial court, which reconsidered and held that the Daubert factors should be applied flexibly and that the four factors were simply illustrative. The Eleventh Circuit, however, reversed, finding that the Daubert test was applicable only when the case involved the application of scientific principles rather than skill or experience-based observation. Engineering, according to the Eleventh Circuit, did not necessarily involve science, and therefore, the trial court had used too stringent a test. The Supreme Court reversed again, finding that the trial court had not abused its discretion in excluding the testimony as unreliable. And it made a number of statements in the course of its opinion which I am sure we will be talking about. It concluded that Daubert's general principles applied to all "the expert matters described in Rule 702," which is the applicable rule of evidence. The bottom line of Kumho is that all expert testimony must be reliable. The point that I'm concerned with here is not whether Kumho leads to the greater exclusion of expert evidence or even what the factors are that must be applied. The point I'm interested in is the light that Kumho sheds on the law's preoccupation with the specific facts of the case being litigated.
Justice Breyer's opinion for the Court considers how the trial judge should go about determining the reliability of proposed expert testimony. The Court declined to set out general hallmarks of reliability that every scientific theory would have to satisfy and pointed out that the listed factors in Daubert were meant to be helpful, not definitive. "Indeed," said the Court in Kumho, "those factors do not all necessarily apply even in every instance in which the reliability of scientific testimony is challenged." The Court explained why it declined to set forth a general rule for assessing reliability. I'd like to read that: "The conclusion, in our view, is that we can neither rule out, nor rule in, for all cases and for all time the applicability of the factors mentioned in Daubert, nor can we now do so for subsets of cases categorized by category of expert or by kind of evidence. Too much depends upon the particular circumstances of the particular case at issue." In other words, the Supreme Court recognized that in assessing the reliability of an expert's theory, the theory must be assessed in the context of the facts of the individual case. In the legal system, the general principles that purport to establish what happened must be tied closely to the case-specific facts of the matter being litigated. The meaning of Justice Breyer's statement that "too much depends upon the particular circumstances of the particular case at issue" emerges in part III of the Kumho opinion, where the Court applies its approach to determine whether the trial judge was justified in excluding the engineer's proposed testimony. The intensity of this case-specific inquiry is immediately apparent. It is really quite extraordinary that an opinion by the Supreme Court of the United States should contain such a detailed analysis of the facts and that the subject of the Court's exhaustive scrutiny should be one worn, old, repaired automobile tire, a picture of which accompanies the opinion. The Court states that the specific issue is not "the reasonableness in general of a tire expert's visual and tactile inspection to determine whether over-deflection caused the tire's tread to separate from its steel-belted carcass. Rather, it was the reasonableness of using such an approach, along with [the expert's] particular method of analyzing the data thereby obtained, to draw a conclusion regarding the particular matter in which the expert testimony was directly relevant." But I don't want to bore you with all of the detailed facts about this tire which the opinion relates, including rim flange impressions, tread depth, discolored sidewalls, bead groove patterns, and much more. The Court's message, I think, is clear: Abstract theories are inadequate unless they are anchored to the facts of the case. But this insistence of the law on the facts may cause considerable tension when an expert seeks to offer an opinion, whether during the pretrial stage, at deposition, or at the actual trial. Experts who are professional witnesses, of course, know what to expect, and forensic scientists certainly fall into this category. But the scientist who has little experience in the law may feel that this insistence on facts rather than on the general validity of the theory on which his or her opinion is based is simply badgering about insignificant details, especially since the vehicle for bringing out these facts is cross-examination by the other side.
Cross-examination and the adversary system are not the way the scientific community goes about reaching consensus with regard to a dispute. The resulting distaste that many scientists feel for the way the law delves into the reliability of a proffered expert opinion has at least two unfortunate ramifications. In the first place, many qualified persons, such as Dr. Lederberg, who would be of great value to the legal system, particularly as the importance and prevalence of scientific and technological issues in our courts continue to grow, want nothing to do with litigation. This is so not only because they view the legal process as distinctly unpleasant and unscientific and therefore a waste of time, but also because they know most of their peers agree with this assessment, so that participating in judicial proceedings will not enhance their professional standing in their chosen disciplines. The second unfortunate result is that for some who do appear as experts, the perceived defects of the legal enterprise produce a mindset that somehow justifies making claims in court that these persons would not dream of making in the context of their professional fields. I have long thought that some of the professional societies might consider codes of ethics for their members who testify in court or might have columns in their publications in which they publish some excerpts from testimony given in court. I think a little peer review would be very helpful. Some expert witnesses seem to conclude that almost anything goes in judicial proceedings, because everything is an adversarial game rather than a search for the truth. I don't believe that that is so. Even though scientific conclusions may not be the sole factors that control a court's determination, and even though the law seeks to ascertain the truth by procedures that vary tremendously from the scientific approach to achieving a consensus, and even though the law's concern with particular past events produces a focus on the specific rather than on the general, this does not mean that, when the law is seeking to ascertain the truth, it will tolerate a double standard of truth-telling by experts. In Kumho, the Supreme Court expressed this clearly when it stated that in order to ensure the reliability and relevancy of expert testimony, the trial judge must "make certain that an expert, whether basing testimony upon professional studies or personal experience, employs in the courtroom the same level of intellectual rigor that characterizes the practice of an expert in the relevant field." The bottom line of the Court's decision in Kumho was: "No one has argued that the expert himself, were he still working for Michelin [where he had worked for years], would have concluded in a report to his employer that a similar tire was similarly defective on grounds identical to those upon which he based his conclusion here." At this point, law and science converge. The expert cannot offer judgments in court that he or she is incapable of making in his or her professional life outside the courtroom. Achieving this objective is not easy, given some of the differences just chronicled that separate scientists from lawyers and judges. Greater understanding of these differences and more appreciation of what the other discipline is seeking to achieve might produce a better utilization of scientific and technological expertise in courtrooms.
An occasion such as this meeting is a wonderful starting point, because it offers the opportunity for constructive dialogue among those who play many different important roles when science and technology enter the legal system. I look forward to a stimulating and educational 2 days. Thank you. Dr. William Gardner: One difference between the cultures is that if you're a professor in a medical school, you can't talk without a slide. [Set up slides.] I'm in the Departments of Medicine and Psychiatry at the University of Pittsburgh School of Medicine; I'm a working scientist in the area of health services research and also a coinvestigator in several studies of the relationship between mental illness and violence, the topic that Dr. Caskey commented on. In my comments, I want to be a bit of a devil's advocate. There's a premise in this discussion that there are fundamental conceptual differences between law and science. Clearly, there are major cultural differences. And each field has totally impenetrable jargon and so forth. There are different styles of writing. I can't believe what an enormous challenge it must be for law professors to put 20 pages of thought into 150 pages of text. So I agree that there are differences. The second premise is that these barriers are a principal obstacle to the use of science in the courtroom, and that's the serious thing that we are here to address. Professor Berger gave an excellent account of this point of view. I'm going to argue the other point of view: that the significance of Kumho was to increase the connection between law and science. So, what does Kumho require? The bottom line of Kumho is that judges must determine whether expert testimony--all expert testimony, not just scientific testimony--has a reliable basis. Moreover, the following answer to "Why does this have a reliable basis?" won't cut it. You can't just say, "I am a professor of this, that, or the other from Harvard and Oxford and the Sorbonne all at once"; that's not good enough. You have to give specific reasons and justification for your testimony, which is entirely within the scientific spirit. But what are the criteria for reliability? There, as Professor Berger pointed out, the gate has been opened. Kumho says that the four tests that Daubert put forward aren't always necessary criteria. They are meant to be illustrative. I am confident that they are sufficient criteria: if a given piece of expert testimony met all those tests, it would be admitted to the courtroom. But the four Daubert tests are not always required. In fact, Kumho said that it is the judge's task to determine the criteria for reliability of expert testimony. We're now putting judges in the role of metascientists: deciding not only whether a piece of scientific or expert testimony is reliable, but also what the criteria are for judging that reliability. It's quite a burden. Nevertheless, I still think that the thrust of Kumho increases the integration between science and the law. You can see this in Justice Breyer's discussion of Carlson, the engineer in the Kumho case. Justice Breyer's writing on this is a cogent scientific criticism of the methodological basis of Carlson's testimony. As in any scientific criticism of an empirical study, it hews closely to the facts about the procedures actually used, rather than discussing abstractions.
I must say that I was surprised that Professor Berger believes that science differs from law because science is overly concerned with abstraction and less with particular facts. If Professor Berger has the view that scientists are not concerned with the specific facts surrounding a piece of evidence, then she needs to come visit my lab, because the process of science involves an exacting tearing apart of the specific circumstances that surround a given experimental finding. There are, of course, sciences with large bodies of abstract law. I suspect that if quantum mechanics is ever an issue in a trial that you are part of, you will be subjected to testimony about abstract law. However, many sciences consist primarily of empirical generalizations, as opposed to abstract laws. Almost all of clinical medicine, for example: the process of diagnosis is precisely finding out, by some reliable determination, the facts of the particular case at hand. So a focus on abstract law as opposed to specific facts is not a difference between science and law. In the Kumho case, Carlson testified that a tire blowout was due to a faulty design. He considered four tire features. He used the rule that if two of these features were absent, then the design was faulty. Such a rule is not different in kind from any number of medical procedures you can find in diagnostic manuals, most strikingly in psychiatry. Now, what did Breyer say? Breyer was not concerned with whether tire failure analysis was a science. I am critical of the supposed distinctions among prescience, postscience, junk science, real science, and the like. Such distinctions seem to imply that nonscientific disciplines can't have reliable procedures, and might be taken to imply that a procedure within a science is reliable just because it is associated with that science. In fact, within well-established sciences, whenever you want to introduce something new, you've got to go through the same process of empirical validation. Furthermore, as Breyer mentioned, you have examples such as a person who can reliably distinguish between different scents of perfume. Perfume discrimination is not a scientific procedure, but you can validate it using an empirical, scientific method. So if Breyer was not concerned about the scientific status of tire failure analysis, what was he concerned about? His concern was that there was no basis for believing that the particular criteria and the specific cutoff that Carlson proposed to a court reliably indicated the design fault. (This is, again, an example of how science can focus on the specific as well as the abstract.) If Carlson wanted to provide such testimony, he needed to validate his rule. I'm currently working on a paper that reports the validation of a screening test for certain childhood psychosocial problems. I can't just assert that the test is valid based on my expertise and experience dealing with children with these problems; I have to provide data on the error rates of the screening test. So why is there a perception of a disconnect between law and science? In my view, Breyer is not asking for any sort of validation of testimony that a scientist would consider to be unusual. Similarly, the kinds of empirical validation that scientists can offer are not foreign to the law. Science just asks, "How often does it work under specified conditions?" I think it's a matter of just going out and doing the empirical validation.
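[Illustration: the error-rate reporting described here can be sketched in a few lines of Python. The counts are invented and are not data from any actual study; the sketch also shows why a test with respectable error rates can have weak predictive value when the condition screened for is rare.]

    # Minimal sketch of validating a screening test against a gold
    # standard. All counts below are invented for illustration.

    tp, fn = 90, 10    # children with the problem: test positive / negative
    fp, tn = 50, 850   # children without it:       test positive / negative

    sensitivity = tp / (tp + fn)   # 0.90: fraction of true cases detected
    specificity = tn / (tn + fp)   # ~0.94: fraction of non-cases cleared
    ppv = tp / (tp + fp)           # ~0.64: chance a positive is a true case

    print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} "
          f"ppv={ppv:.2f}")

    # The same test deployed where the condition has only a 1% base rate:
    base_rate = 0.01
    ppv_rare = (sensitivity * base_rate) / (
        sensitivity * base_rate + (1 - specificity) * (1 - base_rate))
    print(f"PPV at a 1% base rate: {ppv_rare:.2f}")  # roughly 0.14

[The collapse of the positive predictive value at a low base rate is the same arithmetic behind the remark, below, that a psychiatric diagnosis, although overrepresented in prison populations, has very little value for predicting violence in an individual case.]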
For criminal justice in particular, the testing of evidentiary procedures must be exceedingly thorough. We need to know that a procedure works not only in the lab, not only in the pristine situation, but that it also has been tested where it's actually applied, out in the rain at a crime scene. But this is just like clinical medicine. In mammography, one is interested not just in lab studies of radiology issues. You also want to know whether the actual clinical deployment of mammography in hospital settings and primary care offices works reliably and whether mammography has different error rates across those different settings. The particulars of a case--the actual circumstances in which a procedure was used--are a matter of intense concern in clinical work. Validation is a big job. It takes a lot of people, and it takes a bunch of money to do these studies. But I want to make sure that we understand that the validity of a medical procedure or other scientific work does not rest on anybody's credentials as a scientist, however strong. Even if Dr. Lederberg presents something, he is going to have to provide data backed up by rigorous methods. So I think that the true source of the disconnect between science and criminal law is that we have not made a sufficient effort as a society to develop rigorously evaluated forensic methods. DNA evidence is an example of what needs to be done. In addition, we need to develop a set of methodological principles for what I would call, and maybe is called, for all I know, jurimetrics. This would be a discipline similar to biometric analyses that would give researchers a clear idea of how to develop reliable methods. Above all, we need to cultivate and grow a research community through extramural, peer-reviewed grant funding. The NIH extramural system was the real horsepower that produced the fascinating results that Dr. Caskey reviewed. Finally, while we do this research, we need to make sure we're paying close attention to the ethical, legal, and social implications of science. In this light, I can't help but comment on one aspect of Dr. Caskey's talk. It is true, as he mentioned, that the prison populations have many more people with mental illnesses than does the general public. However, if you statistically quantify the evidential value of knowing that someone has a psychiatric illness, its value for predicting violent behavior is extremely small, almost negligible in many cases. As this example shows, we want to be very careful that we are not only doing the best possible science, but that we also take enormous care in presenting it. So, yes, there are differences between science and the law. However, I don't see them as being at all unbridgeable. There is important conceptual work to be done to construct these bridges. But what really needs to be done to connect the fields is empirical research to develop reliable forensic procedures. Thank you. Dr. William Gardner: Now we'll start our process of brief rejoinders among our panel. Dr. Lederberg. Dr. Joshua Lederberg: Well, let me respond to something that Margaret Berger alluded to when she talked about formula X. I guess the conclusion that she recited is a very good example of a legal fiction. Let us, for purposes of equity or some other objective, disregard what has been said about the specificity of formulas A, B, C, D, and E, because that will, in some way, enable a possibility of recovery for those who had been injured by X, and we certainly do not want to disallow that possibility.
Now, from a scientist's point of view, as long as this is transparent, as long as the judge understood what was going on, you might say the responsibility of the scientific expert ended right there. If I react to the conclusion, it's then not as a scientist; it's as a citizen. Is it, in fact, justice to impose liability on a provider who had done nothing wrong, had not been negligent, and had merely produced the materials? That particular provider had done no injury but, in order to serve the interests of compensation for an injured category, was lumped together with the other providers who were included in X. Dr. Margaret A. Berger: Oh, no. I'm saying that defendant made all the formulas but only formula X caused the disease. Dr. Joshua Lederberg: Well, then there is no ambiguity about the culprit, and the question is whether the category of those to be compensated is larger than the category of those who were actually injured. I'd put that in a different category. But let me give a hypothetical alternative, because there have been cases of exactly that sort, where you have several providers of potentially toxic materials--in other words, the case that I was just presenting--and where the courts have reached a somewhat similar conclusion. But my questions are, one, is that really justice? And there will be an argument about that. And then, two, is it good social policy? Because if you impose penalties on individuals for acts over which they have no control and for which they were not personally culpable, that obviously is going to have a chilling effect in the future. And we know today that there are suppliers of medical devices and medical materials who have opted out of the market because it is too capricious, too unpredictable what someone else might do that might then bear on their own activities. Those are not scientific questions, though. Those are public policy ones. Dr. Margaret A. Berger: Well, I would like to ask Dr. Gardner, in terms of his presentation of Kumho, whether he really is saying that, in each instance now, when one is going to introduce expert testimony, one has to validate the theory of the expert by research, and I'd like to draw a distinction at this point between the criminal case and the civil case, which nobody has mentioned. Certainly when we're talking about the criminal case--and I assume that most of you here are probably more interested in that--we are talking about the prosecution, which has the burden of proof and also enormous resources, both money and, often, technical and scientific expertise, because it did the research in the first place. Research is going forward, and can go forward, in the area of DNA under the auspices of government funding. We all know about the genome project. It seems to me that in the criminal case there is an obligation to validate. It's very different from a policy point of view than saying to the plaintiff in a civil lawsuit--I'm talking now about an individual plaintiff who is not part of a huge class of plaintiffs, where there may be funding and interest in doing some research--saying to this plaintiff, who is trying to sue for a defective tire, "From now on, you can't prove anything involving a tire unless there are tire studies." Now, I don't think the court in Kumho is suggesting that that has to be done.
I think that's why the opinion is so open-ended, and I think that's why so much is left to the discretion of the trial judge. But if you were to take this approach--and again I'd like to ask Dr. Gardner what he thinks--then that would be the consequence: you would have to have, for every expert opinion, regardless of what it rests on, some empirical validation. The money for that is simply not there in many of these cases. That really would be tantamount in some areas of law to saying to plaintiffs: although in theory you have a right to collect under tort law or under contract law, in practice there is simply no way that you are ever going to be able to prove your case. Dr. William Gardner: I agree that the criminal and civil situations are different. I know that I do not have your knowledge of what those differences are, but I agree that the situations are different. I was particularly thinking of the criminal case here and of issues that Dr. Caskey was talking about, for example, people wrongfully convicted of capital crimes. I absolutely want to see empirical validation of scientific procedures in criminal justice on a routine basis. I want to see empirical validation in civil cases as well. In my view, plaintiffs have a right to collect when there is good reason to think that they suffered harm that was caused by the defendant. If there is expert testimony about that causation, then there should be scientific evidence supporting that causation. If there is no evidence supporting that causation, other than the expert saying so, then I would say to the courts, "Get this charlatan out of here." Dr. Margaret A. Berger: May I respond? Dr. William Gardner: Please. Dr. Margaret A. Berger: I would just like to ask you another question. It seems to me that the most interesting question raised by Kumho is what to do about clinical physicians testifying about causation, which seems to me is not what they do in their ordinary lives when they are rendering diagnoses. They are saying perhaps this is this disease and not that disease; I need to know which is which in order to treat. But that is not always the same as knowing what caused the disease. Dr. William Gardner: First, I'm a statistician, not a medical doctor. However, I observe a lot of diagnoses being made. Suppose a doctor sees a patient, looks at a chest x-ray, sees a certain pattern there, and says, "Yes, this is community-acquired pneumonia." What she is saying is that the person's lungs are infected and that the infectious agent is the cause of the illness. I would argue that diagnoses typically do involve a causal attribution. Dr. Margaret A. Berger: But those aren't the kinds of issues that arise in toxic tort litigation. Dr. Joshua Lederberg: I know there's been another case in which there was the allegation that medicine is not a science and therefore not subject to the rules of Daubert. I am not a practicing physician, but I spent many years working with people who were developing computer-based aids to diagnosis--Jack Myers among them--and I got a pretty good handle on how they operate. Now, they pretended to be empiricists. They pretended to use nothing but, you know, Bayesian logic--that they had the statistics of a certain number of corroborated cases and that they would accumulate a set of statistics in their heads. The fact is that there's an enormous amount of tacit knowledge.
When somebody goes and looks at that picture on the screen, yes, the fine detail of differential diagnosis is often not dependent on a finely grained understanding of etiology, but the overall context is. A person looking at that radiograph knows what a lung looks like; he knows the structures, he knows its development, he knows its circulatory pattern. He has seen, you know, dozens of alternative pathologies--what you might call personal experience--but it has a very deeply textured scientific and theoretical base. Furthermore, it is publicly accessible. And the judgments that are made on the basis of that experience in clinical practice are under very close scrutiny by a very tightly knit community. So, in my view, there's no question that that fits the criterion of a scientific endeavor. Dr. William Gardner: Perhaps we can have questions from the floor. Participant: [Inaudible.] The first is I think there's a certain resistance that currently exists in the study of behavioral traits as they relate to behaviors that are outside of the law. The medical community is having no difficulty in studying traits of schizophrenia [inaudible]. I think there is a considerable concern that we're not in a position to be able to undertake societal behavioral traits, and I think the time is now to start that work. It's either resistance or it's just totally out of focus for a community that has that [inaudible] contact responsibility. For example, I could illuminate with my simple dog analogy, incarceration rate, et cetera, that [inaudible] there are genetic factors to behavior [inaudible]. Anybody who thinks that's wrong, put some money down and I'll be happy to take your money and time. Let me finish up. Now, what I think is a really debatable issue, though, is what is the weighting of genetic and environmental factors in this particular circumstance. That would be very important for me to know as I begin to think about how I'm going to--[inaudible]. If it's 98-percent environmental effects and 2-percent genetic, then I know I've got to focus all my spending on trying to alter the environmental effects to achieve better social behavior [inaudible]. Now, let me give you a couple of medical situations that will reinforce why I have [inaudible] determining genetic predisposition for asocial behavior. Let's take coronary artery disease. We know that diet influences [inaudible]. We know that weight influences it. We know that hypertension influences it. These are environmental factors. Secondly, we know that there is a tremendous genetic weighting toward coronary artery disease. Now, if you look at outcomes research and what we've done to improve health outcomes in the case of susceptibility to coronary artery [inaudible], progress is greatest in drug development, because drugs are affecting specific pathways that are predisposed to coronary artery disease. So, if you had spent all your money on altering diet, hypertension, or weight by some method other than drugs, you would not create the impact on this disease that is being created now by antihypertensives, [inaudible], a variety of excellent agents that are modifying coronary artery disease. The second example I would use is just to repeat an example [inaudible] child [inaudible] PKU and you don't [inaudible] that child, no medications in PKU, it's just dietary restriction. The child that has dietary restriction in the case of PKU ends up with normal intelligence. The child who does not develops the disease.
Before I spent money on the intervention, I'd want to sort out [inaudible] tremendous resistance to obtaining genetic information. Dr. William Gardner: Dr. Lederberg? Dr. Joshua Lederberg: The question of criminal genetic personalities has beleaguered serious human genetic study for well over a century, and it's easy to elicit a lot of confusion about that enterprise. I don't want what Dr. Caskey has said to get in the way of getting on with the job. For one thing, I think it's a mistake to discuss the research enterprise in the same breath as mass screening, mass collection of data, and so on, because that is plainly, grossly premature, and I think some of the opposition that will be provoked by this kind of discussion rests on a misunderstanding that there ought to be a movement in the near future to collect data and apply principles of whatever genetic component or determination there is for criminal activity. We're a long, long way from the scientific basis for that, so I think there ought to be a very clear distinction between those enterprises. Now, it's a very hard problem. It's hard enough getting at the genetic etiology of the primary diagnoses of schizophrenia, of manic-depressive illness, and so forth, let alone going a step further to how they interact with other existential and environmental factors that then result in criminal behavior--and even the remark about incarceration: it would be very hard to rule out that the critical factor is simply that these are the people who get caught. No, that's not a joke either. We have no data on the primary incidence connected with these diagnoses. Now, it's perfectly common sense that people who are atypical in their psychosocial reactions are more likely to get caught up in the claws of the law and more likely to do things that are inappropriate, but it is such a complicated pathway from the gene to the final phenotype that it's a tough enterprise. I subscribe to your view that this ought to be investigated, and investigated more thoroughly than it has been. It will be a lot easier to do once you've completed the enterprise of having a SNP for every gene, because in principle, when you have segregating pedigrees, you will be able to tease out genetic correlations right away. A very serious part of the problem is we simply have not had the appropriate tools for this intricate examination until, really, tomorrow afternoon at 3 p.m. And an enormous amount has been claimed on behalf of these approaches which has absolutely fallen to the ground. So, by all means, let's do it, but be very humble about the complexity of the task. Dr. C. Thomas Caskey: The acquisition of the knowledge, the science and research--my point is [inaudible] the application of the knowledge requires, just as we do in medicine, a risk-benefit analysis--what is the benefit to the patient and to the public of applying this technology versus the downside--and that has to be applied in every circumstance. I illustrated the two breast cancer genes, BRCA1 and BRCA2. There's a heck of a lot of debate out there right now as to whether those tests should, in fact, be applied if you do not have a definitive therapeutic answer for that particular risk factor. So, risk-benefit comes next. First is the discovery. Dr. Joshua Lederberg: The pathogenetic pathway for breast cancer is already very complicated. It's enormously simpler than it is for criminal behavior. Dr. William Gardner: Professor Faigman. Dr.
David Faigman: I have a question for Margaret Berger. Margaret, of course you're aware that many courts distinguish between general causation and specific causation. I'll just take the example of silicone implants. General causation, of course, is whether silicone implants are associated with atypical connective tissue disorder; specific causation is whether the particular plaintiff's connective tissue disorder is attributable to silicone implants. Do you read Kumho as abolishing the distinction between general causation and specific causation, and if not, how do you see the role of general causation after Kumho, especially in the clinical medical context? Dr. Margaret A. Berger: Certainly a wonderful question to which I do not know the answer. I do not know whether the Court was thinking in those terms at all. Certainly the Court seems to say that the person who is going to decide this in the first instance is the trial court, and the abuse of discretion standard, which the Court emphasizes over and over again, is, I think, going to cause problems, ultimately, in terms of just the kind of question you've raised. I don't see how you can end up having different answers to some very basic questions like that; they're going to differ depending on who the trial judge was. Whether these questions are going to fall within the abuse of discretion area or whether, ultimately, the courts are going to have to say that some of these are issues of law that are going to have to be resolved as matters of law by the various circuit courts, I really don't know. Dr. William Gardner: Bert Black had his hand up. Mr. Bert Black: I'd like to take up a point that Dr. Lederberg made about whether clinical medicine is or is not science. There are probably two polar cases on this now. One is a case from the Fifth Circuit that holds quite clearly that clinical medicine is science, and in fact, it quotes somebody who may be a former colleague of yours from Yale, Alvan Feinstein, pointing out that, whether it is an epidemiologist, a clinician, or another scientist, determining causation is a scientific enterprise. And then there is a case from the Second Circuit called McCullock, and in the McCullock case, the court said you have an experienced physician, and based on his experience, he's qualified to reach conclusions about causation. To me, it seems that Kumho Tire is saying that even experience-based testimony has to be validated in some sense, which supports the Fifth Circuit's view that clinical medicine is science and, in fact, does away with the approach of the Second Circuit. And I would like the comments of both Dr. Lederberg and Dr. Berger on that point. Dr. Joshua Lederberg: Well, I think that's a good case in which to apply my suggested definition of an expert witness: the one who has both the wit and the integrity to report on the current state of play of knowledge on the part of others in the field--work that has been published, the critical discourse that others have offered. So, I wouldn't want to pay much heed to a physician who said, based on my personal experience, this is the direction of causation, if this individual was not in a position to understand what everybody else in the world had been working on, the research that they'd been doing, and so forth, and to fit his personal experience into that context. Dr. Margaret A.
Berger: Again, I am really not sure about what Kumho would necessarily say in this situation, because, on the one hand, it seems to say that personal experience and knowledge will count for a great deal. I am not at all sure that Kumho is saying that a tire expert's theory--Mr. Carlson's theory in the Kumho case itself--would have had to be thrown out by the court if it weren't so clear that this was an old, worn-out, abused tire that should have been taken out of service a long time ago; that the expert couldn't even tell how many miles it had been driven--no one knew how many miles it had been driven, since it was a secondhand vehicle; and that, under those circumstances, the Court just couldn't believe that there was any cause for this tire blowout other than old age--a natural death, precisely--and that was it. Given that, in what is really a fairly simple case--compared to what we have been talking about today in terms of DNA and predicting character traits and whatnot--I think it's easy to read Kumho as quite equivocal about what would be required with a tire. When you start applying this opinion to far, far more complicated cases such as Bert Black has been referring to, I do not know what the courts are going to do. They have a great challenge on their hands, and I think that they will deal with it in part in the context of the cases in which these issues arise. I do think that, although Kumho of course says nothing about distinguishing between criminal cases and civil cases--neither does Rule 702--there should of course be some differences that the courts will take into account, because the courts, too, are there to do justice, and I think that they are not oblivious to these differences. How can one be? I think that's what makes this a fascinating topic for all of us. These are very, very difficult questions. Dr. William Gardner: I think we have time for perhaps three questions at most, if we're very quick on both our questions and our replies. Participant: [Inaudible.] Dr. William Gardner: Let me comment, and then I'm sure Dr. Caskey will want to comment. The first part of your question was whether scientists have a self-critical awareness about the social implications and the ethical aspects of the science that's done. There are now well-established rules and regulations on research ethics and how experiments with human subjects have to be conducted. That's different from what you're talking about, but it is an area in which the medical sciences, at least, have had to address ethical issues about how we conduct our work. I would also point to the program on the ethical, legal, and social implications (ELSI) that is part of the human genome project. A fixed percentage of the money of the human genome project has been devoted to sponsoring research and discussion of the social and ethical implications of that science. It's hard to evaluate how much ELSI has accomplished, having watched it from a distance. I think that simply making the effort to systematically examine the implications was a very positive development. Dr. C. Thomas Caskey: I think, in science, we have the same difficulty that you have in the legal system. Our discovery rate is incredibly high right now. But acting on those discoveries is being approached much more cautiously. And in the genome project, these issues are debated extensively. I'll give you two examples. The discovery of the cystic fibrosis gene was made a long time ago.
There was a great debate on whether we should embark upon a nationwide screening program for the CF gene, and there were parties in both camps--now's the time to act, now's not the time to act. After due deliberation, out of that discussion came the following decision for 1999 and probably a few years thereafter: couples who wish to find out if they are at risk for bearing a child with cystic fibrosis should have the option of genetic testing to determine their risk. It's a family, prenatal diagnostic decision. Application of CF testing to the general population was not recommended, because it was felt that the risk outweighed the potential benefit for the general population. So, CF testing is available, but CF testing is perceived to be of use in only certain settings. A second example: the discovery of the Huntington's chorea gene. You can do a molecular diagnosis for Huntington's disease that will predict that 10 or 15 years from now you will have that disease, and be quite accurate in that prediction. So, the question is, do we begin now widely applying the Huntington's chorea diagnostic to any [inaudible]. The decision that was made was the following: the diagnostic is so precise that anyone who comes in with a movement disorder should not be denied the application of that test, because it gives you proof and precision. Now, when you extend beyond the index case, the affected individual, this gets to be something that is optional for any family member, and they should only use the testing with appropriate instructions. So, these are some examples of how the availability of the diagnostic was there as soon as the genes were discovered, but the application can come in a variety of formats. Dr. William Gardner: Dr. Lederberg can add the last comment. Dr. Joshua Lederberg: I just also wanted to mention that Attorney General Reno has taken a special interest in the topic and has mandated a commission, managed by the National Institute of Justice, on the use of DNA forensic evidence under the CODIS regime, covering, really, all of the aspects that you're concerned about. Shirley Abrahamson, who's Chief Justice of the Supreme Court of Wisconsin, is chairing that commission. Dr. William Gardner: I'd like to thank everyone on the panel and the audience for a very stimulating discussion. ------------------------------

Panel II. Admissibility: The Judge as Gatekeeper

Moderator:
Sam C. Pointer, Jr.
Chief Judge
U.S. District Court, Northern District of Alabama
Birmingham, Alabama

Panelists:
Edward J. Imwinkelried
Professor of Law
University of California, Davis, School of Law
Davis, California

Myrna S. Raeder
Professor of Law
Southwestern University School of Law
Los Angeles, California
Chair, Criminal Justice Section, American Bar Association

Dr. Richard Rau: [In progress] ...there would be a court case in the meantime that would make this session even more significant, and the second title for it--"The Judge as Gatekeeper"--I think, is rather appropriate. I asked the panel if they shouldn't talk about Kumho here, but unfortunately I can't really impose that on them. The moderator is the Chief Judge of the U.S. District Court in the Northern District of Alabama, and we're very pleased that he's here. I think you know his credentials. Then Professor Edward Imwinkelried. If you'll look in your program--I don't think we have to introduce these people to you in any detail. I'll let them speak for themselves, and I think you'll be impressed. Judge Sam C.
Pointer, Jr.: Thank you very much, Dick. Let me start off with a little bit of a disclaimer. Toward the end of the earlier session, someone raised issues about general and specific causation in the context of breast implant litigation, and as my bio indicates, I've been involved in the Federal coordination of some 27,000 of those cases. Many of you will be aware from news media reports yesterday and today--newspapers, TV--that we have some unusual problems in that case right at the moment; namely, the plaintiffs filed on Tuesday of this week a motion to vacate the court appointment of four experts whom I had appointed, as neutral and objective experts under Rule 706, to assist in that litigation. The allegation in this motion to vacate the appointment is that one of the panelists engaged in inappropriate communications and relationships with one of the defendant manufacturers. That particular motion will be heard by me next Monday, and depending upon what happens there, we are scheduled on Tuesday to go forward with essentially the trial examination of these four experts. My disclaimer is this, for those of you who have sort of followed some of that in the newspaper: I am not receiving any honorarium for appearing here. My expenses are being paid by the Federal Judicial Center on a Government per diem basis, and though we have some people in this audience who are involved in or interested in that litigation, to the best of my knowledge, none of them are involved in any kind of payment to me. We will be dealing with the issue of the judge's role as a gatekeeper. Ed is going to start off with a way of analyzing what judges have done, or perhaps should be doing, in treating Daubert motions--typically, preliminarily, in motions in limine, though sometimes at trial--and perhaps how the case decisions are coming out based upon what is actually presented to the judge. Myrna will then follow up with some additional comments on that subject, as well as getting to some of the problem areas and perhaps solutions or changes in how we go about this. I'll be coming back to give some reflections on this subject area, really based on about 28 years of being a Federal trial judge, because 28 years ago we already had issues about the judge's role in handling expert testimony. Although the problem has become much more prolific, in many respects the problems have only become better defined, not newly emerging. And in that context, I will be talking about the tools that judges are using or may use and, indeed, will make a few comments about difficulties in trying to deal with either court appointments of experts or with the use of others to assist the court in making Daubert-type opinions. Because of the time limitations, although my two colleagues were originally to be given 20 minutes in which to make presentations, I'm going to exercise some judgment on this and drop them down to 16 minutes per individual. This will give us more time for dialogue. I'll impose appropriate standards of limitation on myself; I think I can live up to that--at least I'll be embarrassed if I don't. I should say that, when we were talking in the early morning session about the different cultures of the legal profession and the scientific community, some indication was that you can tell the differences by whether slides are used or not.
My two colleagues share, to some degree, both science and law, and accordingly, they'll be using maybe two or three slides each and not a full presentation. Ed? Dr. Edward J. Imwinkelried: [Dr. Imwinkelried's remarks are presented in manuscript form.] The Judge as Daubert Gatekeeper: Adapting Old Maps to the Unfamiliar Terrain of the "Brave New World" In his opinion on remand in Daubert, Judge Alex Kozinski opined that the new Daubert test[1] would propel the Federal courts into a brave new world.[2] Judge Kozinski added that the judiciary's performance of its new gatekeeping role would prove to be a "daunting task."[3] It can be unsettling whenever anyone conjures up images of Huxley's Brave New World.[4] It can be positively unnerving if one speculates about the implications of thrusting a seemingly conservative institution such as the judiciary into a visionary future. Although the American judicial system ordinarily proceeds by gradual evolution rather than dramatic revolution, the system has another important characteristic: its exquisite adaptability.[5] The courts have repeatedly demonstrated their capacity to adapt to even radical developments such as the advent of new technologies.[6] By way of example, the courts are now in the midst of the process of accommodating traditional First[7] and Fourth Amendment[8] principles to the novelties of cyberspace. My thesis today is that this adaptability can serve the courts well in the context of performing their assigned gatekeeping and screening tasks under Daubert.[9] To be more specific, as intimidating as these new tasks might appear to the typical judge who lacks formal training as a scientist,[10] the judge can reconnoiter this brave new world by analogizing[11] to a familiar body of law. That body of law is the jurisprudence governing the initial burden of production or going forward at trial. There are several parallels between that body of doctrine and the judge's screening duty under Daubert. In both cases, the judge is performing a gatekeeping duty. Under the initial burden, the judge decides whether the cause of action, crime, or defense should be submitted to the jury. Under Daubert, the judge must decide whether a particular item of evidence ought to be submitted to the jury. Assuming that the judge assigns a party the burden on a particular fact of consequence at trial, this body of law determines whether the party has made out a submissible case and is entitled to have the factual dispute resolved by the trier of fact.[12] For purposes of this seminar, the corresponding question is whether the proponent of purportedly scientific testimony is entitled to have it submitted to the jury. The trial judge conducts his or her gatekeeping inquiry under Daubert for the express purpose of answering that question. In addition, under the jurisprudence governing the initial burden of production, in determining whether the proponent is entitled to get to the jury on a particular issue, the judge considers both the proponent's evidence and the contrary evidence submitted by the opponent.[13] The common denominator is that in Daubert, Justice Blackmun made it clear that Rule 104(a) governs the issue of whether the proponent's proffered testimony constitutes admissible "scientific . . .
knowledge" within the meaning of that expression in Rule 702.[14] Under Rule 104(a), the judge attempting to screen out "junk science" must consider the evidence on both sides, pro as well as con,[15] on the issue of whether the proponent's testimony qualifies for admission under Daubert. Finally, in both settings, the proponent and opponent progress through various stages. Under the initial burden of production, the proponent can: lose because his or her showing is too weak,[16] reach the trier of fact when the issue is rationally arguable,[17] or fail because the opponent's contrary showing is overpowering.[18] As we shall see, the scientific evidence cases suggest that the proponent and opponent of that type of testimony can work through comparable stages. To be sure, there are differences between the two bodies of doctrine. The foremost distinction is that, when the judge passes on the question of whether the proponent has met the burden of going forward, the judge must ordinarily[20] accept the proponent's testimony at face value. The judge may not consider the credibility of the proponent's testimony. In contrast, when the judge assesses the testimony on a foundational or predicate question under Rule 104(a), the judge is entitled to pass on the credibility of the testimony.[21] However, that difference does not preclude using the sequence of stages for analysis under the initial burden for the purpose of developing a similar model under Daubert. It is true that under Rule 104(a), the judge must evaluate the credibility of the foundational testimony. However, after the judge has done so and identified the believable testimony on both sides, the judge must decide whether to admit the proponent's testimony. The Daubert decision comes after the credibility determinations. At the point of decision, the judge could theoretically use the same basic model to guide his or her decision. My contention today is twofold. First, in the process of evaluating the Daubert foundation, the judge can identify differing states of the record similar to the various states of the record under the initial burden of production. Second, and just as importantly, the identification of the type of state of the record can guide the judge's Daubert ruling, in much the same way as it dictates the judge's decision under the initial burden of production. We may be able to use the law governing the different states of the record under the initial burden of production as a rough map to help us find our way in the brave new world of Daubert. The objective of this short article is to develop these two theses. The first part of the article reviews the jurisprudence on the initial burden of production. This part distinguishes among five different states of the record under the initial burden and indicates the appropriate judicial ruling for each state. The second part of the article constructs the parallel to the gatekeeping inquiry under Daubert. Using examples drawn from published opinions, this part of the article argues that there are likewise at least five different states of the Daubert record. Further, the article contends that in each state, the judge should make an admissibility decision similar to the judicial ruling for the corresponding state of the record under the initial burden of production. 
I. ONE TERM OF THE COMPARISON: THE JURISPRUDENCE UNDER THE INITIAL BURDEN OF PRODUCTION OR GOING FORWARD In some jurisdictions in certain civil actions arising under contract[22] or tort[23] law, a defendant's insanity is treated as a defense to liability. Assume that a defendant properly raised the issue of his or her insanity by way of an affirmative defense in the responsive pleading. When a fact of consequence is properly raised at the pleading stage, at trial the judge must assign the initial burden on the fact to one of the litigants.[24] In most cases, the initial burden follows the burden of pleading; the party with the burden of raising the issue in the pleadings also has the initial burden of production or going forward on the factual issue.[25] Thus, in our hypothetical, the civil defendant would probably have the initial burden. In an attempt to meet that burden, the civil defendant could present any logically relevant, admissible evidence, including competent opinion testimony. A person's insanity is a proper subject for both lay[26] and expert[27] opinion testimony. Given that allocation of the initial burden and the admissibility of both types of opinion testimony, at our hypothetical trial the defendant and plaintiff could progress through the following stages, inter alia:[28] A. STAGE #1: The Burdened Party (the Civil Defendant) Fails to Produce Any Evidence to Sustain the Initial Burden of Production. Suppose that the defense attorney counted on one of the defendant's acquaintances[29] to testify that at the relevant time, the defendant was acting irrationally. However, the prospective witness fails to appear at trial. Consequently, the defense case-in-chief includes no admissible evidence that the defendant was insane. At the instructions conference after the close of all the evidence, the defense attorney requests that the judge instruct the jury on the defense of insanity. This state of the record presents the easiest decision for the trial judge: the judge must deny the request. The judge makes a peremptory ruling, withdrawing the issue of the defendant's insanity from the jury.[30] In effect, the judge proclaims the burdened party's loss on the issue as a matter of law without ever submitting the issue to the jury. Since the record contains no competent evidence of the defendant's insanity, it would be irrational for the jury to infer insanity.[31] B. STAGE #2: The Burdened Party (the Civil Defendant) Fails to Produce Sufficient Evidence to Sustain the Initial Burden of Production. Vary the facts in the hypothetical. Now assume that the expected defense witness appears. However, the witness's testimony is not nearly as definite as the defense attorney had hoped for. The defense attorney anticipated that the witness would express a definite opinion that the defendant was acting irrationally at the relevant time. However, the witness's testimony is much more guarded. The witness is willing to testify only that the defendant "might have been acting a bit peculiar." Once again, after the close of the evidence the judge and parties retire to chambers for the instructions conference and, as in the previous variation of the hypothetical, the defense attorney asks the judge to instruct the jury on the defense of insanity. This is a more difficult case for the judge than the initial variation.
In that extreme variation of the fact pattern, the defense failed to present any evidence of insanity; and it would obviously be irrational for the jury to find the defendant insane. Here, however, the defense has at least made an attempt to meet its burden of production on the issue of insanity. Yet in the final analysis the attempt is unsuccessful. The defense must do more than produce some evidence relevant to the issue of insanity;[32] rather, the defense must submit enough evidence to permit the jury to rationally infer insanity.[33] Although indisputably relevant, a mere scintilla of evidence does not amount to a submissible case.[34] The evidence must be solid enough to sustain an objectively reasonable inference.[35] The judge must police the rationality of the inference and forbid "the jury to draw an inference from insufficient data . . . ."[36] The judge would conclude that, even if the jurors chose to believe the witness's testimony that the defendant "might have been acting a bit peculiar," without more that testimony would not support a rational inference of the defendant's insanity.[37] There are degrees of "peculiar" behavior. Only highly peculiar conduct would sustain an inference of insanity at the time of the conduct. Thus, unless the proponent supplements the lay witness's testimony, the judge would make the same ruling as in the original variation of the hypothetical. The judge must deny the defense request. The judge would make a peremptory ruling as a matter of law that the defense has not made out a submissible case on the question of the defendant's insanity. C. STAGE #3: The Burdened Party (the Civil Defendant) Barely Sustains the Initial Burden of Production by Submitting Enough Evidence to Support a Rational Inference of the Existence of the Fact in Issue. Change the facts. As in the second stage, the prospective witness appears at trial. However, in this variation of the fact pattern, the witness gives the testimony that the defense attorney had hoped for. Rather than testifying only that the defendant "might have been acting a bit peculiar," the witness testifies categorically that the defendant was "definitely acting in an irrational, crazy manner." Based on this testimony, the defense counsel once again requests an instruction on the insanity defense. Now the judge ought to grant the request. The burdened party has shouldered the initial burden. The pivotal question is whether the state of the record would permit a rational finding[38] of insanity. The resolution of the question is governed by logic and experience rather than any artificial rules.[39] Nor is there an invariable requirement for direct evidence.[40] The judge issues peremptory rulings to ensure that the jury findings are rational. However, when the judge concludes that the inference is one which the jury "may logically and reasonably . . . draw[],"[41] a peremptory ruling is inappropriate. The factual dispute should be submitted to the jury. The jury should initially perform its classic function[42] of determining the credibility of the testimony and then decide which inferences, if any, to draw from the testimony it finds credible. At most, the judge would give the jury two instructions. One instruction submits the issue of the defendant's insanity to them, allocates[43] the ultimate burden of proof on that issue, and states the pertinent measure[44] of the ultimate burden.
The second instruction would inform the jurors that, if they chose to believe the witness's testimony, the jurors may infer the defendant's insanity.[45] The latter instruction would be couched as a permissive inference rather than a mandatory one.[46] D. STAGE #4: The Burdened Party (the Civil Defendant) Presents Sufficient Evidence to Support a Permissive Inference of the Fact in Dispute, the Opposing Party Presents Contrary Evidence, but the Opposing Party's Evidence Is Not So Powerful That It Would Be Irrational for the Trier to Infer the Fact in Dispute. In the prior three states of the record, we focused exclusively on the burdened party's evidence. In those variations of the hypothetical, the opposing party argues that he or she is entitled to a peremptory ruling, withdrawing the issue from the trier, because the burdened party's evidence is so weak.[47] However, the opponent need not be content to rely on the weakness of the burdened party's evidence; the opponent can also submit contrary evidence.[48] For instance, during the rebuttal stage of the case, the opponent, here the plaintiff, might call another acquaintance of the defendant who happened to have observed the defendant at the same time as the defense witness. Assume further that after describing the extent of his acquaintanceship with the defendant, the plaintiff's witness testifies that he "did not notice anything strange or out of the ordinary in" the defendant's conduct. Under the initial burden of production, the judge monitors the rationality of the jury's findings. Just as "we do not permit the jury to draw an inference from insufficient data . . . we should not permit the jury to act irrationally by rejecting compelling evidence."[49] In this variation of the hypothetical, the question is whether the plaintiff's contrary evidence is so "compelling" that it would be arbitrary for the jury to choose to believe the defense evidence and draw the inference of insanity. On these facts, that question should be answered in the negative. Both sides have elicited testimony from a single witness on the issue. As acquaintances of the defendant, both witnesses seem equally qualified to opine about the apparent rationality of the defendant's behavior, and both opinions are comparable in their degree of definiteness. It would be fair to say that here the evidence is such that "reasonable and fair-minded men in the exercise of impartial judgment might reach different conclusions . . . ."[50] The range of rational decisions in this case would include a finding of sanity as well as a finding of insanity. As at stage #3, rather than making a peremptory ruling, the judge would tender the issue to the jury. However, the judge's instructions to the jury might be a bit more complex. As in stage #3, the judge would submit the issue to the jury, allocate the ultimate burden of proof, and inform the jury of the pertinent measure of the burden. Again, as in stage #3, assuming that the instruction did not violate any local restrictions on judicial "comment,"[51] the judge could inform the jurors that, if they chose to believe the defense witness's testimony, the jurors may infer the defendant's insanity. However, the judge could also point to the plaintiff's contrary evidence and direct the jury to evaluate that evidence as well before deciding whether to believe the defense witness and infer insanity. E.
STAGE #5: The Burdened Party (the Civil Defendant) Presents Barely Enough Evidence to Support a Permissive Inference of the Fact in Dispute, but the Opposing Party Presents Such Overwhelming Contrary Evidence That It Would Be Irrational for the Trier to Infer the Existence of the Disputed Fact. In the preceding variation of the hypothetical, the opposing party, the plaintiff, presented some evidence to dispute the factual issue of the defendant's sanity. However, the plaintiff's evidence was minimal. Since the defense evidence was comparable in quantity and quality, at most the plaintiff's evidence placed the issue in equipoise--a state of the record in which the issue should be submitted to the jury. Vary the facts a final time. Now assume that the plaintiff does far more than call one of the defendant's acquaintances to give lay testimony about the defendant's sanity. Suppose that the plaintiff calls a number of eminently qualified mental health experts to corroborate the lay opinion. Assume even that a court-appointed expert[52] comes to the identical conclusion as the plaintiff's experts. It is true that the jury is almost always entitled to disbelieve facially sufficient testimony.[53] However, the opponent's evidence can be so extensive and credible that it would be irrational for the jury to reject it.[54] In some cases involving disputes over sanity, a party's submission of substantial, patently qualified expert testimony has prompted courts to withdraw the issue from the jury and resolve the question in that party's favor as a matter of law.[55] To be sure, such cases are rare. However, when the opponent submits evidence that is "irresistibly convincing, the jury should not be left to refuse to draw the only rational inference."[56] There are special constitutional concerns which preclude the prosecution from obtaining an absolute peremptory ruling against an accused.[57] However, apart from those peculiar constitutional protections for the accused, at trial it is just as much an affront to rationality and justice for the jury to reject an overwhelming case as it would be for the jury to accept too weak a case.[58] In this exceptional state of the record, the judge withdraws the issue from the trier of fact--not because the burdened party's evidence is so weak but rather because the opposing party's evidence is so strong. The judge would make a peremptory ruling as a matter of law rather than submitting the issue to the jury. If that issue were dispositive of the case--for instance, if the defendant had admitted the allegations of the plaintiff's complaint and opted to rely solely on the affirmative defense of insanity--the peremptory ruling on insanity would mandate judgment for the plaintiff. II. THE SECOND TERM OF THE COMPARISON: THE CONCEIVABLE STATES OF THE RECORD FOR THE PROPONENT'S DAUBERT FOUNDATION This part of the article endeavors to construct parallels between the jurisprudence on the initial burden of production (described in Part I) and the Daubert empirical validation test[59] for the admissibility of scientific testimony. Drawing on published opinions--some rendered before Daubert and others postdating Daubert--this part attempts to demonstrate that the state of the Daubert record can also fit into five different categories and that the judge's ruling, determining whether the proponent may submit the scientific evidence to the jury, should parallel the judge's decision for the corresponding stage of analysis under the initial burden of production.
Before reviewing the five possible states of the record, it is important to remember the juncture at which these five states arise. As previously stated, since Federal Rule of Evidence 104(a) governs this foundational issue, the judge can pass on the credibility of the foundational testimony tendered by both sides. We shall assume that the judge has already made his or her credibility determination; the judge has already decided which testimony to reject on credibility grounds. Having done so, the judge is now in a position to make his or her Daubert decision. The thesis of this article is that, at this juncture, it will be helpful for the judge to attempt to categorize the record as falling into one of five possible states. A. STAGE #1: The Burdened Party (the Proponent) Fails to Produce Any Evidence to Show that the Expert's Hypothesis Has Been Empirically Validated. In State v. Smith,[60] an expert proposed testifying about gunshot residue analysis (GSR). In the past, the Harrison-Gilroy test had been accepted as a technique to determine whether a suspected shooter had gunshot residue on his or her hands. The rub in Smith was that the expert had not used the Harrison-Gilroy test itself but rather a modification of that test. According to the appellate court, the record reflected that no one, "including [the witness], has ever conducted any experiments to attempt to objectively determine" the validity of the modified test.[61] Although the case was a state decision antedating Daubert, the court reached the same result as Daubert would dictate: The court ruled the evidence inadmissible. In this state of the record, the proponent fails to submit any "solid empirical research."[62] The proponent's expert may have developed a plausible,[63] testable hypothesis, but Daubert requires more. The underlying hypothesis was that, like the Harrison-Gilroy test, the modified procedure was a valid technique for detecting the presence of gunshot residue. It was plainly unscientific to accept that hypothesis without subjecting the hypothesis to empirical testing. Daubert requires that the expert take the next step and actually engage in testing. Just as the prior stage #1 is the most clearcut decision for the judge under the initial burden of production, this stage is the easiest decision for the judge applying Daubert. B. STAGE #2: The Burdened Party (the Proponent) Fails to Produce Sufficient Evidence to Show that the Expert's Hypothesis Has Been Empirically Validated. The prior state of the record presented such a clear decision for the trial judge precisely because the proponent utterly failed to offer any foundational evidence of the empirical verification of the expert's hypothesis. However, assume that the proponent comes forward but only with foundational testimony of meagre probative value. Consider several examples. Initially, although the expert states that there is an empirical study of the hypothesis, the expert's testimony tells the judge next to nothing about the design of the study. In United States v. Kime,[64] the Court of Appeals for the Eighth Circuit held that an expert's conclusory assertion that there has been a scientific test of the hypothesis falls short of satisfying Daubert. Next, elaborating on the study, the expert discloses that the study entailed only a small database. In a pre-Daubert decision, Nelson v. Trinity Medical Center,[65] the North Dakota Supreme Court held that a trial judge may bar a scientific opinion resting on a small database. 
The proponent's showing amounts to little more than a collection of anecdotes.[66] In the words of the North Dakota court, quantitatively there has "been too little research."[67] Assume now that the expert testifies that the database included 1,000 subjects but that all the subjects were infant animals rather than adult human beings. The testimony about 1,000 subjects might allay the quantitative concerns mentioned in the previous paragraph, but now there are qualitative concerns. Is the database representative of the subject about which the expert proposes to opine? If the expert ultimately contemplates testifying to an hypothesis about medical causation in human beings, the issue is whether, standing alone, the animal study is sufficient to carry the proponent's burden. The majority confronted this issue in General Electric Co. v. Joiner.[68] There the majority stated: The studies involved infant mice that had developed cancer after being exposed to PCBs. The infant mice in the studies had massive doses of PCBs injected directly into their peritoneums or stomachs. Joiner was an adult human being whose alleged exposure to PCB was far less than the exposure in the animal studies. The PCBs were injected into the mice in a highly concentrated form. The fluid with which Joiner had come in contact generally had a much smaller PCB concentration of between 0 to 500 parts per million. The cancer that these mice developed was alveologenic adenomas; Joiner had developed small-cell carcinomas. No study demonstrated that adult mice developed cancer after being exposed to PCBs.[69] The majority stopped short of announcing that animal testing could never constitute adequate validation for an hypothesis about human beings. However, in part due to the composition of the animal databases, the majority concluded that "[t]he studies [involving infant mice] were so dissimilar to the facts presented in this litigation that it was not an abuse of discretion for the District Court to have rejected the experts' reliance on them"[70] to uphold an hypothesis about medical causation in humans. Joiner expressed concern about the test conditions as well as the composition of the database. A pre-Daubert case, People v. Law,[71] directly addressed the former concern. In Law, the proponent, the prosecution, offered sound spectrography or voiceprint evidence in an attempt to prove that the accused was the person who had placed a phone call in which the caller had obviously tried to disguise her voice. To lay the foundation for the sound spectrography evidence, the prosecution pointed to a large number of studies involving hundreds of subjects. The difficulty was that, in the studies, the subjects were speaking naturally with no attempt to disguise their voices or mimic someone else's voice. In short, the conditions obtaining during the experiments did not approximate the conditions involved in the case. In part for that reason, the court ruled that the experimental verification was inadequate. If the test conditions do not match, the requisite "fit"[72] between the research and the facts of the instant case is lacking. Finally, assume that, while the foundation satisfies all of the trial judge's concerns about the size and composition of the database and the test conditions, the researchers reported a very substantial margin of error.
In Daubert,[73] Justice Blackmun specifically stated that the validity and error rates are factors which the judge should consider in deciding whether the expert's hypothesis qualifies as empirically validated "scientific knowledge." Consider, for instance, some of the foundational testimony in the Court's more recent decision on polygraphy, United States v. Scheffer.[74] In the lead opinion, Justice Thomas emphasized that the trial record included testimony about empirical studies in which researchers "have found . . . the accuracy rate of the 'control question technique' polygraph is 'little better than could be obtained by the toss of a coin,' that is, 50 percent."[75] In short, the trier of fact might as well rely on random chance as trust the opinion of an expert employing this scientific technique.[76] When the accuracy of the technique is roughly the same as random chance, surely the judge should conclude that the foundation is insufficient to sustain the proponent's burden under Daubert. In effect, all these fact situations are variations of the theme in Joiner. In Joiner, the majority commented that there was "simply too great an analytical gap between the data and the opinion proffered."[77] As a matter of logic, it is too great a leap or extrapolation[78] from the research data presented to the ultimate inference which the expert contemplates drawing from the data. Such a leap is an act of faith rather than scientific analysis. Just as the judge polices the rationality of inferences under the initial burden of going forward, he or she must monitor the permissibility of inferences from the empirical data under Daubert. When the proponent's foundation gives the judge little or no detail[79] about the supporting research or discloses a small, unrepresentative database, unrealistic test conditions, or a high error rate, the foundation is insufficient. The proponent's testimony is logically relevant to the question of whether the expert's hypothesis constitutes admissible "scientific . . . knowledge" under Rule 702; but without more, the judge should find these variations of the record wanting. C. STAGE #3: The Burdened Party (the Proponent) Barely Sustains the Burden by Submitting Enough Evidence to Show that the Expert's Hypothesis Has Been Empirically Validated by Sound Scientific Methodology. In the second stage, although the proponent tenders some evidence relevant to the empirical validation of the expert's hypothesis, the evidence is inadequate--the database is minuscule, its composition is unrepresentative, the test conditions do not approach the conditions obtaining in the instant case, or the test yielded a high margin of error. Assume alternatively that the proponent submits a foundation which does not suffer from any of those defects. The expert elaborates on his or her study, the size of the database is substantial, the composition is representative, the test conditions are realistic, and the validity rate is high. Given this state of the record, the judge should admit the evidence even if the hypothesis is a novel one.[80] A post-Daubert DNA case, Commonwealth v. Rosier,[81] is illustrative. Rosier involves a third-generation DNA technique, short tandem repeat (STR) analysis. As far as the courts are concerned, STR analysis is a relatively new technique. To date, Rosier is the only published appellate opinion addressing the admissibility of STR testimony.
However, there have been several empirical investigations into the validity of this technique as a method of identifying DNA markers.[82] The relevant databases include hundreds of subjects.[83] Cellmark has utilized the test since 1991[84] in tens of cases.[85] Although the opponent attempted to disparage STR analysis as "unreliable because it is too new,"[86] the opponent failed to present any contrary expert testimony to demonstrate the unreliability of the technique.[87] The Rosier court conceded that the technique was avant-garde in the sense that "we have not been directed to any decisional law approving STR testing."[88] However, the court was impressed by the extent of the empirical validation of the technique. Pointing to one of the published studies,[89] the Rosier court concluded that the available research established that the underlying "methodology" was "scientifically valid."[90] The court's conclusion was correct. The research into STR analysis is admittedly not as extensive as the research validating either restriction fragment length polymorphism (RFLP) or polymerase chain reaction (PCR), but the studies conducted to date suffer from none of the deficiencies identified in stage two. Further, in Rosier, while the opponent noted that STR testing is of somewhat recent vintage, the opponent failed to submit any expert testimony finding fault with the methodology of the studies validating STR analysis. At the third stage of analysis under the initial burden of production, the judge ought to permit the burdened party to submit the case to the trier of fact; and at this stage in Daubert analysis, the judge should allow the proponent to submit the proposed scientific testimony to the trier. D. STAGE #4: The Burdened Party (the Proponent) Presents Sufficient Evidence to Show That His or Her Expert's Hypothesis Has Been Validated by Sound Scientific Methodology, the Opposing Party Presents Contrary Evidence, but the Opposing Party's Evidence Is Not So Powerful That It Would Be Irrational for the Trier to Accept the Proponent's Expert's Hypothesis. In the two immediately preceding variations of the state of the record, the proponent was the only party who submitted evidence to the judge. The outcome of the Daubert ruling turned solely on the judge's assessment of the sufficiency and strength of the proponent's foundational showing that the expert's hypothesis has been empirically validated. The fourth state of the record presents a more difficult decision for the trial judge. In this state of the record, the opponent goes to the length of presenting contrary expert testimony. Assume, for instance, that while the proponent submitted the same foundational testimony about STR analysis as we hypothesized in stage #3, the opponent presented expert testimony in rebuttal. The quandary for the judge is that, like the proponent's foundational showing, the opponent's rebuttal evidence appears to rest on a study which is based on a large, representative database and which was conducted under conditions approximating the conditions involved in the case. However, although the proponent's expert attested to a high validity rate, the opponent's expert is prepared to testify that she discovered a substantial margin of error. Assume further that the judge finds both experts' testimony believable.[91] What should be the judge's ruling? 
It is submitted that, in the fourth state of the record, rather than attempting to decide which scientific hypothesis is "correct,"[92] the judge should rule both sides' evidence admissible and permit the proponent as well as the opponent to submit their expert testimony to the trier. In its brief in Daubert, Merrell Dow Pharmaceuticals argued for the exclusion of the plaintiffs' testimony about the epidemiological reanalysis. However, Merrell Dow conceded that, if the courts abandoned Frye and shifted to an empirical validation standard, there would be times when it would be appropriate for the trial judge to submit the "battle of the experts" to the jury. Merrell Dow acknowledged that there might be "several competing . . . [scientific] claims" which satisfied a validation standard.[93] In the words of Merrell Dow's brief, the state of the research record might be such that there could be a "genuine debate in the scientific community."[94] Justice Blackmun's opinion reinforces the conclusion that, in this state of the record, the judge should allow both parties to submit their expert testimony to the trier.[95] Near the end of his opinion, the justice addressed Merrell Dow's fear that "abandonment of 'general acceptance' as the exclusive requirement for admission will result in a 'free-for-all' in which befuddled juries are confounded by absurd and irrational pseudo-scientific assertions."[96] Justice Blackmun countered: In this regard respondent seems to us to be overly pessimistic about the capabilities of the jury and of the adversary system generally. Vigorous cross-examination, presentation of contrary evidence, and careful instruction on the burden of proof are the traditional and appropriate means of attacking shaky but admissible evidence.[97] This passage serves no function unless Justice Blackmun believed that there would be times when it would be appropriate for the judge to tender the battle of the experts to the jurors for their resolution rather than peremptorily embracing one position or the other. In the passage, Justice Blackmun expressly mentions the "presentation of contrary evidence." That mention makes no sense unless there will be cases in which the proponent has submissible scientific testimony and the opponent also possesses "contrary evidence" which passes muster under Daubert. The justice's reference to the burden of proof is also explicable on the assumption that on occasion, the jury will be called upon to arbitrate a battle of experts.[98] That is the relevance of the ultimate burden of proof. If the jury finds the battle even and the conflicting expert testimony equally believable, the burden of proof dictates a decision for the defendant. If both sides' scientific claims have the hallmarks of good scientific methodology--characteristics such as large, representative databases and realistic test conditions--both sides are entitled to present their claims to the trier. In its recent decision, Kumho Tire Company v. Carmichael,[99] the Court again recognized the possibility that the evidentiary record would establish a genuine battle of the experts. In explaining why it upheld the exclusion of the proffered expert's testimony, the Court remarked that the testimony "fell outside the range where experts might reasonably differ, and where the jury must decide among the conflicting views of different experts . . . ." E.
STAGE #5: The Burdened Party (the Proponent) Presents Barely Enough Evidence to Show That His or Her Expert's Hypothesis Has Been Empirically Validated, but the Opposing Party Presents Such Overwhelming Contrary Evidence That It Would Be Irrational for the Trier to Accept the Hypothesis. The fact pattern in Daubert itself can be used to illustrate the final stage. As Professor Faigman and his coauthors have quite correctly pointed out, Daubert was a unique fact situation.[100] In a Bendectin case preceding Daubert, Ealy v. Richardson-Merrell, Inc.,[101] Judge Abner Mikva emphasized that the question was not so much the validity of the plaintiff's epidemiological reanalysis considered in isolation. Rather, the real hurdle for the plaintiffs was that their evidence was arrayed against a "massive"[102] "wealth"[103] of published epidemiological studies reaching a contrary conclusion. Justice Blackmun described the key defense evidence in his lead opinion in Daubert: The defense expert, Doctor Lamm, stated that he had reviewed all the literature on Bendectin and human birth defects--more than 30 published studies involving over 130,000 patients. No study had found Bendectin to be a human teratogen (i.e., a substance capable of causing malformations in fetuses). Petitioners did not (and do not) contest this characterization of the published record regarding Bendectin.[104] Judge Mikva characterized the defense evidence as an "overwhelming body of contradictory epidemiological evidence."[105] He distinguished the Bendectin litigation from "a classic battle of the experts," where the state of the research supporting the competing claims is more evenly balanced.[106] In Daubert, Justice Blackmun noted that several courts of appeals had found that there was a "massive weight"[107] of epidemiological research rebutting the plaintiff's expert's theory. If the defense showing in Daubert did not attain the fifth stage, the showing certainly came quite close; and with the benefit of corroborative testimony from a court-appointed expert,[108] the showing would reach the fifth stage. Judge Mikva's distinction between records in cases such as Daubert and records which reveal much more closely contested battles of the experts is well taken. In essence, it is the distinction between the fourth and fifth stages of Daubert analysis. When both sides present conflicting expert testimony but the studies marshaled on both sides rest on large, representative databases and were conducted under realistic conditions, there is an authentic battle of the experts. If Justice Blackmun's discussion near the end of his opinion means anything, it must signify that in the fourth state of the record, the judge ought to allow both sides to present their testimony to the jury. However, as in the case of analysis under the initial burden of production, "we should not permit the jury to act irrationally by rejecting compelling evidence."[109] In unique[110] cases such as the Bendectin litigation, the opponent can argue that the proponent's evidence is inadmissible not so much because the proponent's own foundational showing is too weak but rather because it is arrayed against a truly "overwhelming body of contradictory . . . evidence."[111] Summary of the States of the Record To sum up, depending on how many sides submit foundational testimony and the strength of the evidence, the foundational testimony could yield one of five possible states of the record: --The first stage could be called "No Evidence."
Neither the proponent nor the opponent submits any foundational testimony on the question of the empirical validity of the hypothesis which the proponent's expert proposes to testify to. Since the proponent has the burden on the issue, the judge must make a peremptory ruling in the opponent's favor; the judge will bar the testimony by the proponent's expert. --The second state can be called "Meagre Evidence." Here one side, the proponent, submits foundational testimony. On the one hand, the testimony is logically relevant to the question of the empirical validity of the hypothesis. On the other hand, the foundational testimony is badly flawed. The database is too small, the database is unrepresentative, the test conditions do not approximate the conditions obtaining in the instant case, or the research yields a high margin of error. Once again, the judge should make a peremptory ruling in the opponent's favor and exclude the proponent's testimony. Thus, in the first and second states, as gatekeeper the judge denies admittance. --The third state is "Sufficient Evidence." As in the second state, only the proponent submits foundational testimony. However, in this variation of the record, the testimony is not only relevant to the question of the scientific validity of the hypothesis; more to the point, the testimony is sufficient to satisfy Daubert. The testimony describes a body of research which does not suffer from any of the serious deficiencies mentioned in the previous, "Meagre Evidence" state. Thus, the judge should rule the proponent's scientific evidence admissible. --The fourth state is a "Genuine Battle of the Experts." This stage differs from the three prior states in that for the first time, the opponent goes to the length of submitting contrary foundational testimony. Moreover, like the proponent's foundational testimony, the opponent's testimony rests on decent scientific methodology; it is not vulnerable to any of the attacks which are fatal in the second state. This state of the record is an authentic battle of the experts. The judge should rule both sides' scientific evidence admissible. The judge ought to permit both sides to submit their testimony to the jury and ask the jury to arbitrate the battle. In the third and fourth states, the gatekeeper grants entry. --The fifth state is "Overwhelming Contrary Evidence." As in the fourth state, both sides submit foundational testimony. As in the fourth state, considered in isolation, the proponent's foundational testimony would arguably suffice to pass muster under Daubert. Again, as in the fourth state, the opponent submits contrary testimony. However, in this final stage, the opponent's contrary testimony is much more powerful than in the fourth state. The opponent's testimony does not merely leave the record in equipoise, permitting the jury to either accept or reject the proponent's hypothesis. Rather, the opponent's testimony is so overwhelming that objectively, the only rational course would be for the trier to reject the proponent's hypothesis. As in the first and second states, the judge ought to make a peremptory ruling in the opponent's favor--not because the proponent's foundation is so weak in an absolute sense but rather because the opponent's testimony is overpowering in a relative sense. III. CONCLUSION In this article, I have suggested an analogy between the judge's analysis under the initial burden of going forward and the judicial inquiry under Daubert. To be sure, the analogy is imperfect.
For one thing, there are more stages of analysis under the burden of going forward. For example, there is the possibility that the proponent will create a true presumption[112]--a mandatory inference--and the further possibility that the opponent will create a counter-presumption.[113] Those stages do not appear to have any analogues under Daubert. I also would be the first to admit that this approach to judicial decisionmaking is unoriginal. I have quite shamelessly borrowed from the Post-Conviction Relief Working Group of the new National Commission on the Future of DNA Evidence. In its report, the working group will attempt to give helpful guidance to judges and prosecutors receiving requests for post-conviction relief based on exculpatory DNA test results.[114] The draft of the group's report sets out several "categories"--general states of the record--and suggests judicial and prosecutorial guidelines for each state of the record. This paper is an attempt to adapt the same approach to judicial gatekeeping under Daubert. Although the analogy is neither perfect nor original, it is submitted that it is both useful and feasible. If we can develop a general sense of what the varying states of the record look like and generate a consensus on the appropriate judicial admissibility rulings for each variation, that sense and consensus could assist judges performing their assigned screening function under Daubert. By analogy, the jurisprudence on the initial burden of production can help us identify five potential states of the record, and the same body of jurisprudence strongly suggests that the proponent should be allowed to reach the trier and submit his or her scientific evidence to the trier only in the third and fourth states. The approach should certainly prove to be feasible. Of course, the feasibility of the approach turns upon judges developing a sense of what the various states of the record look like for various disciplines such as toxicology, epidemiology, and pathology. In that connection, Federal Rule of Evidence 706 can be of assistance.[115] One of the desirable impacts of Daubert has been that trial judges are appointing their own experts with greater frequency.[116] Even if the judge does not want to ask his or her expert to undertake an in-depth analysis of the case and render an opinion on the merits, the judge can ask the expert to educate the judge on the various experimental stages which an hypothesis in the expert's discipline could progress through. Court-appointed experts can serve as cartographers drawing generalized maps of the various stages of Daubert analysis for the scientific discipline involved in the case the judge is presiding over. A court-appointed speech scientist might advise the judge about the size of the database that would be appropriate to investigate a particular hypothesis. Or a court-appointed toxicologist might counsel the judge as to the proper design of a study to test an hypothesis in that field. Daubert might thrust the judiciary into a brave new world; but with carefully adapted maps of the various states of the record, trial judges should be able to find their way through even unfamiliar terrain. Judge Sam C. Pointer, Jr.: I'm going to throw Myrna just a slight curve ball by laying out this situation before her. I've got a case in which I have required, under Rule 26(a)(2), pre-trial disclosure of expert reports. They have provided those.
The defendant comes in with a motion to exclude under Daubert, in advance of trial, one of the two key experts for the plaintiff's case. I schedule a hearing on that. I walk into the courtroom. The attorneys are there. There are some depositions there. There are some reports or decisions by judges in other parts of the country who have looked at a similar issue with respect to the plaintiff's experts. There are two people there in person prepared to testify. I look down at the people and I say, "Okay, here I am," and the defendant says, "Well, the plaintiff has the burden of going forward in this hearing," and the plaintiff says, "No, it's the defendant's [inaudible], and I've got these materials in front of me." And where are you going to take this from there, with those problems? Professor Myrna S. Raeder: I've always been told to answer the judge's questions first, before anything else I do, and actually, it is a good lead-in to some of my difficulties with Professor Imwinkelried's methodology. Let me start, in part facetiously, by saying that while I have tremendous respect for Professor Imwinkelried--as does anybody who has read any of his work--I'm a little troubled by his conclusions, and, as with Daubert, it's because of his methodology. One of the issues raised by his formulation is "Who goes first?" Ed's analysis presupposes that you use it when you have the burden. However, a very real preliminary question is whether the significant differences between criminal and civil cases that Professor Berger has alluded to should affect who has the burden. Maybe in the civil case, the burden should be on the opponent to show enough evidence to even require this initial Daubert hearing. In contrast, in the criminal case, where the defendant typically has far fewer resources and questions may arise about the extent of discovery, the burden should always be on the government, particularly when the forensic community is the originator of the technique that is being offered. So, it is significant to determine who shoulders the burden in the first place. And I think the question is real in terms of Kumho, which obviously I'm going to reference in the little time I have here. Kumho said at one point that judges have discretion not only in terms of their ultimate conclusions, but also in terms of how to decide reliability, and I quote, "Otherwise, the trial judge would lack the discretionary authority needed both to avoid unnecessary 'reliability' proceedings in ordinary cases where the reliability of an expert's methods is properly taken for granted, and to require appropriate proceedings in the less usual or more complex cases where cause for questioning the expert's reliability arises." Thus, I don't think we want to presuppose that Daubert or Kumho requires a hearing in every case, because that would be an incredible burden on the court system. I have two other major concerns with the methodology that has been proposed. The first is a cultural one. Those of you in the room who are lawyers, law professors, and judges, remember back to why we went to law school. It was to avoid science and math. Undeniably, there are judges who have always been willing to look at the underlying scientific validity, and then there's everybody else, who says, "How can I get away from this?" Well, in fact, I'm not sure that substituting this kind of burden analysis gets away from it. Why?
Because the academic evidence community took a survey a number of years ago about who's teaching what, and virtually nobody was teaching presumptions or burdens. What did we all say? "Oh, we don't have enough time to do it, and anyway, it's impossible to do. They'll learn it when they're in practice." And in practice, I think what happens is that both judges and lawyers try to fudge, except in contexts where you can't, i.e., summary judgments and directed verdicts, and sometimes when deciding constitutional questions about presumptions and burdens in criminal law. And ultimately, what happens in the hard cases is that they generate Supreme Court decisions that are incomprehensible to many. As a result, I do not believe that we can escape the unavoidable question, "How do you characterize expert evidence in terms of trying to figure out its reliability?" I'm also very troubled by the prospect of unintended consequences. In my discussions with Ed and other professors, it's clear that the academic community is concerned about junk experts. On the other hand, we're equally concerned about reshaping the role of judge and jury, and that, in fact, is what Daubert and Kumho may be doing, particularly with those detailed examples you've already heard, which suggest to judges that you've got to dot every single i, you've got to second-guess, you have to find the perfect expert. This moves away from saying, yes, you're a gatekeeper, but it's the jury's job to decide the evidence; they're the ones who ultimately have to view this, and all you're doing is figuring out if it's too speculative to get to them. While I understand that Professor Imwinkelried's formulation was not intended to favor exclusion of experts, it seems to mandate decisionmaking by numbers--if you meet number three you're in, but number two or five means you're gone. Whereas to me, Joiner and Kumho say that the judge has discretion, and therefore I agree with the view stated in Justice Stevens' separate opinion in Joiner, where he reminded us that the Court was not holding that it would have been an abuse of discretion had the judge admitted the expert testimony. And I believe the same is true in the Kumho case: had the expert been allowed to testify, that ruling would not have been reversed, although I know there is already some disagreement in the academic evidence community about whether it would have been an abuse of discretion to let that evidence in rather than to exclude it. Assuredly, this is an area that will receive considerable future attention, but I'm very concerned that Professor Imwinkelried's five-step standard will be viewed in a mechanical way, as dictating answers, and that judges will forget the discretionary aspects of their rulings. So I would predict that the judges who don't want to undertake deep scientific analysis are still going to fall back to what they feel comfortable with, which we all know is relying on general acceptance of the expertise in question. Let me switch to what was originally intended to be my introduction. If I could have my first slide, which shows Smokey the Bear testifying authoritatively as an expert witness that "Where there's smoke there's fire." A couple of days before I came here, I received a brochure in the mail that said I could be provided with lists of experts--in fact, more than 7,000 categories of experts. We're not talking one or two! We're not even talking 200. We're talking 7,000 plus! In light of Kumho, how will these experts fare?
Kumho has essentially mandated, as to each of those categories, that the judge has a gatekeeping responsibility unless, of course, the judge falls back to that quote I previously read you, which grandfathered in the ordinary cases, where reliability can be taken for granted. Trust me, determining what types of expertise are so commonplace as to avoid scrutiny will be a hot issue in the future. It also appears to be incredibly easy to become an "expert." I glanced at a few Web sites that suggested my name could be included in whatever category I chose to identify my expertise in, by filling out a form and, of course, in some cases paying a fee. Thus, the qualifications of self-proclaimed experts may be suspect, and even seemingly authoritative figures like Smokey the Bear bear watching. I remember for a number of years telling my students that the major limitation they would face when selecting what types of expert testimony to offer in their cases when they became lawyers would really only be their lack of imagination. That is not hyperbole; now that we have reached 7,000-plus categories of expert, that is what has really been happening in practice. Obviously, such excess signals that we badly need to find a principled way to get out of this bind--a bind that started with the adoption of the Federal Rules of Evidence, which immensely expanded the use of expert testimony. However, I'm just not sure that Kumho, Joiner, and Daubert, which I will refer to as the expert trilogy, is the best resolution. I expect the expert trilogy, which is no doubt how these cases will be referred to in the future, similar to the summary judgment trilogy in the late 1980s, will basically shape the legal context for a number of years. But despite the Court's implication that the trilogy represents a liberalized view--alluded to in Joiner, when it reiterated that Daubert rejected the "austere" standard established by Frye--experience indicates otherwise. The trilogy will no doubt limit the number of experts testifying, but ironically some experts who should be admitted won't be, and some who should not will be, and the abuse of discretion standard will provide cover for bad results. Moreover, I don't think there's a huge difference in result between Federal courts applying Daubert and Frye courts. Let me actually ask: how many of you, in terms of your own State law, are from Frye jurisdictions? All right. About half of the group, a proportion that may be aided by the fact that this conference is in California. About 17 States still follow Frye. Truly, not a lot, but on the other hand they include several large States--California, Illinois, New York, and Pennsylvania--that so far have refused to abandon Frye. I want to take a quick look at Frye, because two of Daubert's criteria--general acceptance and peer review--directly implicate Frye. [Describing slide that quotes Frye and lists issues concerning nature and scope of standard.] We all know the test, but some of the questions on the slide, about how you determine admissibility, are going to be relevant whether you apply Daubert or Frye. One real issue is, how do you treat the forensic community?
Professor Thompson, who is here, has argued recently that the forensic community is really not the scientific community but is mainly composed of technicians who, because of their culture, may view their principal motivation in large part as doing justice by aiding their primary client--which, in their service industry, is really the government, particularly prosecutors. [See William C. Thompson, A Sociological Perspective on the Science of Forensic DNA Testing, 30 U.C. Davis L. Rev. 1113 (1997)]. Now, one may agree or disagree or agree in part with that conclusion, but certainly it suggests valid questions about how to look at the forensic community when determining admissibility. And this issue is equally relevant when determining reliability, because many judges will fall back to general acceptance, and therefore, whether the community is interpreted narrowly or broadly in the circumstances will matter. The problem of self-interest is also of concern. Do we have to exclude experts who are, in effect, promulgating their own techniques because they have a self-interest? That is an issue that courts have dealt with, some requiring independent experts to validate the science. Frye's applicability to social science varies by jurisdiction, while Kumho has told us that absolutely every expert falls within its reach. But what was happening in State courts, and in those Federal courts before Kumho that placed limits on social science testimony, was not very different in its end result. Ultimately, I think that many judges do fall back to general acceptance, and I think that Kumho leads us there even more than earlier Supreme Court decisions. And the reason why--there's already been some e-mail discussion about this issue among academics--focuses on how Kumho looked at social science. We've already heard about the reference to the perfume tester. Well, that and all of the other references in Kumho relating to evaluating the non-"science" evidence fell back to general acceptance. It was just general acceptance in the particular nonscientific community. Undoubtedly, determining the admissibility of social science evidence is really going to be a continuing problem area for all courts and may impact both the prosecution and defense in criminal cases. Remember, the examples cited in Kumho included experts in drug terms, handwriting analysis, and criminal modus operandi--all typically prosecution witnesses. [Referring to slide that quotes Proposed Amendment to Federal Rule 702] I believe the proposed rule and Committee Notes will set the stage for current as well as future interpretation of admissibility of expert testimony. I spoke to Professor Capra, who is the reporter to the Evidence Advisory Committee, and my understanding is that proposed amendments to rules 702 and 703 will be submitted to the Standing Committee--there may be some wrinkles in terms of changing language, but the substance of those amendments will, in fact, be offered. (The rules were approved by the Standing Committee in June 1999 and will be submitted to the Judicial Conference.) Unquestionably, we need to think about reliability factors, and [referring to slide quoting five factors listed in Advisory Notes to Proposed Rule 702] those comments to the rule indicate what issues judges are looking at. This is important, because it tells lawyers how to orient their arguments and in turn provides guidance to judges who wonder how other judges are determining reliability.
(Summarizing from slide): Is the testing, the research--is it independent of the litigation or not? Extrapolation we've already heard about. Does the expert account for obvious alternative explanations? Would this be the same methodology used in the expert's professional work? What is the field of expertise, because there may be some fields that, by themselves, are devoid of reliability (e.g., astrology). There are also other issues that pose problems for judges determining reliability. For example, when the research only seems to rely on temporal proximity, such as basing causation on the advent of symptoms within a short time span. Judges clearly look at that kind of testimony with more skepticism. Similarly, the lack of testing, or scientific testimony not reflecting the facts of the case--and generally, all the things that suggest anecdotal subjectivity. Now, I do want to say something about psychiatric evidence, because this is a significant issue in criminal cases where evidence ranges from traditional psychiatric opinions in insanity cases to nontraditional psychological evidence, often concerning syndromes. What does Kumho mean in such settings? Professor Slobogin has recently written a fascinating article about the effect of Daubert on psychological testimony [Slobogin, Christopher, "Psychiatric Evidence in Criminal Trials: To Junk or Not to Junk," Wm. & Mary L. Rev. 40 (1998): 1]. He found that experts don't believe that they are being subjected to reliability determinations at all in traditional psychiatric cases. He also concluded that currently there doesn't seem to be an agreed-upon principled way for judges to determine the admissibility of syndrome evidence. As a result, he suggests that courts should create a distinction based on the type of testimony being offered. Is evidence being put in on a past mental state? If so, general acceptance by other psychiatrists may be enough in that setting because of the difficulty of proving a past mental state. On the other hand, is the evidence being introduced for purposes of proving a past act? Take rape trauma syndrome, for example, which is used to prove that the person was actually raped. This is quite different from supplying evidence of prior mental state. And maybe this difference suggests that we really need a full exploration of its reliability, questioning the underlying validity of the syndrome when offered as evidence of historical accuracy, in addition to a careful weighing of undue prejudice against probative value. Such testimony raises another significant question that cuts across all nonscientific evidence--how do we validate social science? Many of us are concerned about ethical issues. In a number of social science contexts you can't have controls; you can't necessarily do the same kinds of tests that are commonplace in science. Of course, there are instances where you can ask the same type of questions, such as, "What were the methods used? What is good social science research?" So it's not as if we are left without any guidance. But it becomes apparent why judges take comfort in hearing that a particular syndrome is generally accepted by relevant practitioners, even if its reliability is not demonstrated.
One interesting way to help courts determine the relevancy of certain social science evidence, and therefore more easily weigh probative value against undue prejudice, was mentioned by Professor Slobogin, and I'm sure that some of you have heard of it previously--the relevancy ratio--developed by Professors Lyon and Koehler ["The Relevance Ratio: Evaluating the Probative Value of Expert Testimony in Child Sexual Abuse Cases," Cornell L. Rev. 82 (1996): 43, 46-50]. The ratio takes the proportion of the symptoms found in the target population, such as battered women or abused children, and compares it to the proportion of the symptoms found in the larger population to determine its significance. This is one tool that can help guide judges to separate the forest from the trees when figuring out what syndrome evidence should and should not be heard by juries.
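[The arithmetic of the relevancy ratio is simple enough to sketch directly. A minimal illustration in Python follows; the symptom rates used here are invented for illustration only and are not data from the Lyon-Koehler article or from any study cited at this conference.

    # Sketch of the Lyon-Koehler relevancy ratio, with hypothetical rates.
    def relevancy_ratio(rate_in_target, rate_in_general):
        """Frequency of a symptom in the target population (e.g., abused
        children) divided by its frequency in the general population.
        A ratio near 1.0 means the symptom is barely probative; a ratio
        well above 1.0 means the symptom is far more common in the target
        group and so carries real probative weight."""
        return rate_in_target / rate_in_general

    # A symptom seen in 60% of the target group but also in 50% of the
    # general population adds little: the ratio is only 1.2.
    print(relevancy_ratio(0.60, 0.50))
    # A symptom seen in 40% of the target group but only 2% of the
    # general population is strongly diagnostic: the ratio is 20.0.
    print(relevancy_ratio(0.40, 0.02))
]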
An overarching approach to many of these knotty questions is found in proposed Uniform Rule of Evidence 702, which basically says, don't worry about actual reliability until after making an initial presumption, based on Frye, about whether the evidence is reliable. We'll presume the testimony is reliable if it meets general acceptance; we'll presume it's unreliable if it doesn't. Whoever loses can then challenge the presumption based on a number of reliability factors that include experiential as well as scientifically based testimony. Actually, judges don't really need this rule to be enacted in order to make use of its commonsense approach. Judges currently use a number of factors, weighting some more heavily than others; thus, they already rely heavily on general acceptance even though there are all of these other Daubert factors to be considered. One caveat that should be raised concerns the amount of reliance on general acceptance that we're seeing by judges. The separate opinion in Kumho indicates three justices' views that, in particular settings, it can be an abuse of discretion not to consider Daubert criteria. This leaves open how restrictively appellate courts will review trial judges' rulings for abuse of discretion. Finally, both Kumho and proposed Federal Rule of Evidence 702 make clear that, concerning forensic evidence, it's too late in the day to continue arguing that application doesn't matter, that protocols don't matter. It's apparent to me that they do matter, whether you're testifying about pure science, social science, technical, or any other type of experiential expertise. As other speakers have noted, Kumho requires a very specific look at the evidence in question, not simply at the theory that spawned it. Judge Sam C. Pointer, Jr.: Following Daubert, it became rather standard practice, at least in the Northern District of Alabama, for there to be a Daubert motion in limine filed for virtually every products liability case and virtually every [inaudible] tort case. I think that Kumho has rejected, at least implicitly, the teaching of at least one circuit court that suggested that, whenever such a motion was filed, the court was obligated to go forward with some kind of hearing and could not wait until the trial of the case to take up issues relating to qualifications or opinions. I believe that you can read Kumho as saying that it's not that kind of a requirement; the court does not have to respond to every such motion. There nonetheless remains . . . and I'm not sure whether some sense of . . . might be guilty of malpractice, whether it's pressure from the defendant [inaudible] saying, "Go after the [plaintiff's] expert." My sense is we are still getting too many such motions trying to direct the court's attention, and not necessarily to Daubert. Certainly, in many respects, it is a much more comforting feeling from my standpoint to make a ruling in the context of a particular trial, with the development of the facts that relate to it, and more frequently hearing the testimony of people who [inaudible] rather than simply some form of written statement. At the same time, if you're talking about a day, a half day, or 2 days of hearings that might result in, number one, the elimination--and I put it in the context of the defendants challenging [inaudible]; that's typically the way it comes up. If that ruling in the half-day or 2-day hearing would result in the court saying plaintiffs [inaudible] and, in turn, that leaves the plaintiff's case insufficient from the standpoint of a motion for a judgment as a matter of law--incidentally, it's been changed now in the Federal court; it is no longer directed verdict, it's judgment as a matter of law, but I still like to think of it as directed verdict. If that's what results, [inaudible] a savings in time, a savings in money [inaudible]. How do you [inaudible] in limine, in advance of trial, or at trial--what is it that one uses, however, in terms of resolving the issue? Before Myrna started, I laid out the scenario, which does happen. You come into the courtroom, or maybe it's in chambers, but I've got material and the people are trying to decide what to do. Certainly, we anticipate there may be people testifying [inaudible], maybe the plaintiff's expert, though frequently the plaintiff's expert is somewhere else, is not available, unless it's a really important high-tech case where they could get their own expert there to testify--yes, I'm an expert, here is what I based it on. The defendant making the motion may have one or more of their experts who are going to testify [inaudible] not consistent with what our whole academic world says about [inaudible]. That's fairly standard. That's just hearing people testify, and I do get the chance in that context actually to ask questions, try to learn more. One of the difficulties that was expressed earlier about the problems of scientific evidence being presented in courtrooms is the inability, ordinarily, of jurors to ask questions effectively when they don't understand something, to try to bring out and get a better understanding. At least in that context, as a judge, when I don't understand something, I can, if I'm willing to expose my ignorance, ask questions and get some help. This morning, during the discussion about DNA, if I had been a judge listening to Dr. Caskey, I would have immediately asked him some questions, because he was using some terms I was not familiar with. At least in the context of having to make a Daubert hearing decision, it's helpful, certainly, to have witnesses be able to testify, but it's not always going to happen. So, next, you have depositions taken in this case. Well, that's very easy, at least from an evidentiary standpoint, because we would allow depositions to be used [inaudible], no problem. Next, there would be the report of the expert prepared in this case that is offered, from plaintiff, from the defendant, and what would the standard be there? Well, typically, that would not be admissible in evidence at trial.
Only the oral testimony, whether in person or by deposition, of the expert would ordinarily be admissible at trial. The report of the expert is essentially only a pretrial disclosure to the other side of what's there; it is not independently admissible. It may, however, be used [inaudible] 104 hearing, because 104 says the court, in making a 104(a) ruling, is not bound by the rules of evidence such as hearsay or other rules dealing with [inaudible]. Therefore, I can have presented this report of the experts, plaintiff's and defendant's, whether or not they're present, whether or not they have been deposed in the case. I can also consider under that same standard testimony perhaps given at a trial in another case in which the plaintiff's expert was a proposed testifying witness--either when that expert did testify at that trial, or when testimony from a defendant's expert was given at that trial that would attack in some way the plaintiff's expert witness. Further, this could be so even though the parties [inaudible] beforehand--either plaintiff or defendant or both--were not parties in that other case. Again, it may be hearsay, but under 104(a), I can receive it notwithstanding. I may have presented to me, as a matter of fact, some ruling that a judge in the Southern District of New York made rejecting the testimony of a plaintiff's expert, or perhaps one in Texas allowing that testimony to come in. Ordinarily, those would clearly be hearsay statements--it's another person [inaudible], statement about what would [inaudible]. So, it can come in, can be considered, in my view, under 104(a), as a part of the totality of the matter I am considering in making this decision. So, it's a very wide open form of evidence, and there may even be articles and the like that no expert testifies about--simply one side or the other puts in before me--which would not satisfy the standards of 803(18) for admissibility at trial, not being supported through a testifying expert, but I might reserve on this issue. Now, I'm still going to be left with difficulties, depending upon what's presented to me. I do consider myself, frankly, very knowledgeable in some statistical areas. I'm very knowledgeable in certain computer areas. I am woefully bad in chemistry, and when I've got a chemistry problem, I need all sorts of help. It's confusing; it always was and remains confusing. The judge is going to vary, depending upon the nature of the case, in terms of [inaudible]. What do you do? Well, number one, of course, you rely on some of the parties' experts, through good examination, good testimony, and so forth, but that's not always helpful, particularly when the parties' experts are at what may be extreme ends of the spectrum on something. I can't be sure, particularly if some body, some society, does not take any stand on an issue. I don't know what the extreme is. [Inaudible.] Well, at least two things have been done in terms of trying to help us out. One is the use of--although it's not clear what the authority is, frankly--the court simply appointing someone, perhaps locally, in the area of science that's involved. The court has some confidence in them, doesn't think that they're involved in any way in the case, and says: come sit by my side; you listen with me to the testimony or review these materials the parties have submitted; then you present something to me that gives me some assistance in understanding it or in evaluating it and the like.
We had an illustration of that with Judge Jones in Oregon, who used that kind of technique. Again, it's not clear what the real authority is, other than general authority in a complex case to do some things you might not do in another context. Okay. That has some advantages in terms of the closeness of the relationship, not having to go through a lot of structures with the parties. You simply appoint people. Another option is one that's been used in the breast implant litigation, at least up until Monday or Tuesday of next week--we'll see where it goes from there--which is the court appointment of an expert under Rule 706. This is structured in such a way that it's not altogether pleasant and comfortable to work with, for several reasons. Number one, you have to go through a process, before you select the experts, of getting essentially party input. This also means the potential for the parties to make inquiries about perhaps biases and the like, which is a little bit awkward. In the particular case of the breast implant litigation, we did have a first-step process of a panel--Margaret Berger happened to be the chairman of that panel; the others were from the scientific world--who were the ones to go out and try to locate the experts, so it's like a two-step process. But it can be time consuming, it can be expensive. It's not something you do, certainly, in the run-of-the-mill case. It's an awkward structure. Once the experts are selected, the court makes the decision about those people. There are problems of communication between the court, the expert, and the parties in dealing with these people. We don't really know what they are. Rule 706 puts up some structures about it. Ultimately, when the 706 experts testify--and the contemplation ordinarily would be that a 706 expert walks into the courtroom during trial, sits down, is sworn, and starts giving answers--the rule contemplates that the court may not even allow the jury to be informed who it was that appointed the expert. There is discretion by the court as to whether or not to allow that to be disclosed. It does not displace the parties' experts in and of itself. It simply will be potentially supplemental and, particularly if the court allows the disclosure of court appointment to be made, may be particularly influential on a jury, though it's not altogether clear whether that's so or not. Sometimes we who are in the judiciary think that--at least if a judge does something--the jury is going to totally buy it, and I think juries, for the most part, have an identification with the judge [inaudible]. But whether or not a court-appointed expert would be dominant in the outcome is a matter that's very much up for grabs and cannot be said. In any event, you then have to keep a separation between the court, which might unduly influence the experts, and the experts themselves--if I were in daily communication with the experts, there is a fear that that would be wrong. It should be more like the same ex parte communication groove we developed with respect to lawyers--separated and the like--but it makes for awkward things, particularly when you're dealing with experts who may have little or no familiarity with the judicial system. At least in the breast implant situation, of the four experts who were appointed, only one had ever before even given a deposition. They were people who were not involved in the litigation process.
It makes it then very difficult to know how to give guidance in some way to people who lack that experience about things that maybe should or should not be done. They are accustomed, certainly in the academic community, to calling up people and saying, "I got this article, I read it, tell me about this." Those people might be somebody that's employed by the plaintiff or defendant, but they're not accustomed to the limitations that we, who are more experienced in the litigation process, might have. So, that's a [problem], and particularly when you limit the easy, quick, informal communication, it's difficult to deal with. In any event, you come out, the experts do whatever they need to do, one or more, to learn some things, and then, first, by Rule 706, they have to give a report. Second, under Rule 706, they are subject to being cross-examined by the parties after their report. Third, there would be the anticipation they would testify at a trial, though if you're talking about hundreds or thousands of trials, you have to do something else--which, in breast implant, we said, okay, then we'll have a videotaped trial-type deposition. What would be the use? Well, one use certainly is at trial: if a case does go to trial, and the plaintiffs and defendants have experts, we have a battle of experts, you may have the court-appointed expert who is an additional resource to the judge and jury. There's also a potential for its being used--and I'm sure there will be an effort to do that, assuming these depositions go forward--in connection with a Daubert hearing, of saying, "Look, here's what some court-appointed [inaudible] said about reviewing this, and this bears directly on this expert's qualifications or not." We've only got 2 minutes. Let me stop. I'm sorry, but we just don't have the time. I know, Ed, you got hit over the head a little bit by Myrna. I don't know if you want to comment about that or whether you want to take a question or two. Participant: Let me just say one thing. Discretion is certainly the [inaudible] Daubert, Joiner, and Kumho, but there are limits to discretion. There is discretion under 403, but every year, courts occasionally say, this was an abuse of discretion, and when you have a clear, overarching policy concern of reliability [inaudible] Daubert and an enumeration of some illustrative factors, I don't think that the mantra of discretion satisfies Daubert. I think there are clear states of the record where the judge is within his or her power saying, "This is beyond the pale." What I think discretion means, in many cases, is that there would be a lot of different configurations of epidemiological studies, or combinations of an epidemiological study and an animal study, that would satisfy Daubert. But note that the court talks about discretion, and note that the bottom line in all three cases was a ruling of [inaudible]. Judge Sam C. Pointer, Jr.: Yes, [inaudible] [Kenneth]. Participant: I don't know if the panel is aware or not, but there is a case from the Seventh Circuit called DePaepe v. General Motors, and I think it's, in some ways, perhaps the most important post-Daubert case. Here's why.
We go through a lot of compilation of factors to try and figure out what the courts want us to do in order to establish reliability or unreliability, and we wind up with legalistic sets of factors that we then try and pound square pegs through. And what DePaepe says--this was on appeal, where General Motors was saying the trial court should not have let this expert testimony in: it was bad, Daubert; it was speculative, Daubert; no good, Daubert. The Seventh Circuit responded by saying, don't come in here with your lawyers' opinions about what is or isn't engineering (the expert happened to be an engineer). Tell us--and I'm almost quoting the Seventh Circuit exactly--tell us how engineers address questions like the question at issue in this case and why this engineer didn't do it. And I think that's what it all boils down to. And if we try and get to that point through lists of factors, I'm not sure that we're actually going to accomplish the objectives. Judge Sam C. Pointer, Jr.: That's a good question, though it sounded more like a comment. Participant: There's still a lot of confusion in the DNA world about which kinds of criticisms of DNA tests go to admissibility, and therefore are subject to foundational review under Daubert or whatever, and which ones go merely to weight and are issues for the jury. The courts I've seen in cases where I've been called are going every which way, although generally they come down in favor of declaring issues to be weight issues. That seems inconsistent with Professor Raeder's comment that issues regarding application of the technology are, indeed, among the issues to be considered. So, where are we on this? Does Kumho help us? I'm concerned particularly about cases where, say, the attack on the DNA test is that a control failed or the lab failed to follow a particular protocol. Judge Sam C. Pointer, Jr.: Well, let's get a comment, because we're running out of time. Professor Myrna S. Raeder: I completely agree with you that the great weight of cases right now says that protocols and application go to weight and not to admissibility, though there certainly is some disagreement now. But I think that Kumho, literally hitting us over the head with all of that issue about application and specificity to the case itself, really indicates that protocols are to be considered at admissibility. Judge Sam C. Pointer, Jr.: Just one comment. From the Supreme Court's recent decision, there was a reference made to proposed changes in the rules that district courts must "scrutinize" whether the principles and methods employed by an expert have been properly applied to the facts of the case, and maybe that's the ultimate bottom line, which I suppose is what you're saying. One last comment or question. Participant: [Inaudible.] Judge Sam C. Pointer, Jr.: If the parties decide they're not going to challenge each other's experts, for whatever the reason, should, or would, the court nevertheless get involved, saying, "But I want to do it"? Certainly I think the court could do so. It would be a rare case that I would want to put my time into that kind of endeavor when the parties were not ready to do so. Professor Myrna S. Raeder: Though I have to say, in criminal cases, that there may be some additional concern, because what we saw with DNA was that the defense bar rolled over at the beginning, because they had not a clue about how to challenge this at all until they really started to talk to experts.
And so, there may be some constitutional considerations in criminal cases, not existing in civil cases, that raise questions of fair trial and competent counsel. Judge Sam C. Pointer, Jr.: Well, let's give them a hand. Notes [1] Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993). [2] Daubert v. Merrell Dow Pharmaceuticals, Inc., 43 F.3d 1311, 1315 (9th Cir.), cert.denied, 516 U.S. 869 (1995). [3] Id. See also Ruiz-Troche v. Pepsi Cola of Puerto Rico Bottling, 161 F.3d 77 (1st Cir. 1998) ("choreographing the Daubert pavane remains an exceedingly difficult task. Few federal judges are scientists, and none are trained in even a fraction of the many scientific fields in which experts may seek to testify"). [4] Huxley, Aldous L., Brave New World, 1932. [5] Gilligan, Francis A. & Edward J. Imwinkelried, "Cyberspace: The Newest Challenge for Traditional Legal Doctrine," Rutgers Comp. & Tech.L.J. 24 (1998): 305, 343. [6] American Civil Liberties Union v. Pataki, 969 F.Supp. 160, 167 (S.D.N.Y. 1997). [7] Reno v. American Civil Liberties Union, 521 U.S. 844 (1997). [8] United States v. Lacy, 119 F.3d 742 (9th Cir. 1997), cert.denied, 118 S.Ct. 1571 (1998); United States v. Charbonneau, 979 F.Supp. 1177 (S.D.Ohio 1997); United States v. Maxwell, 45 M.J. 406 (C.A.A.F. 1996). [9] Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993). [10] Id. at 598, 600-01 (Rehnquist, C.J., concurring in part, dissenting in part). [11] In An Introduction to Legal Reasoning (1949), Edward Levi explained the key role that analogical reasoning plays in American legal decisionmaking. [12] Graham, Michael H. & Edward D. Ohlbaum, Courtroom Evidence: A Teaching Commentary, 1997: 481. [13] Id. at 482, citing McCormick, Evidence section 388, at 437 (4th ed. 1992). [14] Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579, 592 (1993). [15] Imwinkelried, Edward J., "Determining Preliminary Facts Under Federal Rule 104," in Am.Jur. Trials 45 (1992): 96-97, 101. [16] Imwinkelried, Edward J., Paul C. Giannelli, Francis A. Gilligan & Fredric I. Lederer, Courtroom Criminal Evidence section 2905 (3d ed. 1998). [17] Id. at sections 2906-07. [18] Id. at section 2912. [19] United States v. Starzecpyzel, 880 F.Supp. 1027, 1031 (S.D.N.Y. 1995). [20] In the exceptional case in which the testimony sets out a physically "impossible" version of the events, the judge may disregard the testimony. United States v. Casel, 995 F.2d 1299, 1304 (5th Cir. 1993), cert.denied, 510 U.S. 1197 (1994). This exception is "extremely narrow." United States v. Dent, 984 F.2d 1453, 1459 (7th Cir.), cert.denied, 510 U.S. 858 (1993). The version of the events must defy the laws of nature. United States v. Okoronkwo, 46 F.3d 426, 430 (5th Cir.), cert.denied, 516 U.S. 833 (1995); United States v. Bermea, 30 F.3d 1539, 1552 (5th Cir. 1994), cert.denied sub nom. Rodriguez v. United States, 513 U.S. 1156 (1995). [21] Imwinkelried, Edward J., "Determining Preliminary Facts Under Federal Rule 104," Am.Jur. Trials 45 (1992): 96-97, 101. [22] Calamari, John D. & Joseph M. Perillo, The Law of Contracts section 8.10 (4th ed. 1998). [23] Prosser and Keeton on the Law of Torts section 135 (5th ed. 1984). [24] McCormick, Evidence section 337 (4th ed. 1992). [25] Id. [26] Id. at section 11; Estate of Clegg v. Wiebe, 87 Cal.App.3d 594, 151 Cal.Rptr. 158 (1978); Spillman v. Estate of Spillman, 587 S.W.2d 170 (Tex.Civ.App. 1979). [27] Andre A. Moenssens, James E. Starrs, Carol E. Henderson & Fred E.
Inbau, Scientific Evidence in Civil and Criminal Cases section 18.07 (4th ed. 1995). See also Fed.R.Evid. 704(b), 28 U.S.C.A. [28] There are more conceivable distinct stages. Imwinkelried, Edward J., Paul C. Giannelli, Francis A. Gilligan & Fredric I. Lederer, Courtroom Criminal Evidence sections 2905-12 (3d ed. 1998). However, the text of this article discusses the stages most parallel to the possible states of the Daubert foundation. [29] Lay opinion on the question of sanity is of the skilled lay observer sort. Carlson, Ronald L., Edward J. Imwinkelried, Kionka & Kristine Strachan, Evidence: Teaching Materials for an Age of Science and Statutes 613 (4th ed. 1997). Consequently, the required predicate must include a showing that the witness is familiar with the person about whose sanity the witness is testifying. Id. [30] Edward J. Imwinkelried, Paul C. Giannelli, Francis A. Gilligan & Fredric I. Lederer, Courtroom Criminal Evidence section 2905 (3d ed. 1998). [31] People v. Hill, 934 P.2d 821 (Colo. 1997) (an accused was not entitled to an instruction on insanity). [32] United States v. Scout, 112 F.3d 955 (8th Cir. 1997) (the accused sought an instruction on self-defense; the defense produced admissible evidence of the accused's reputation for passivity; although that evidence was relevant to the question of whether the accused initiated the fight, standing alone it was insufficient to warrant submitting the question to the trier of fact). [33] United States v. Branch, 91 F.3d 699 (5th Cir. 1996), cert.denied sub nom. Castillo v. United States, 520 U.S. 1185 (1997) (the "merest scintilla of [relevant] evidence" will not sustain the defendant's initial burden of production); People v. Hill, 934 P.2d 821 (Colo. 1997) (the defendant was not entitled to an instruction on insanity). [34] Koppsky v. Apfel, 26 F.Supp.2d 475, 478 (E.D.N.Y. 1998). [35] Buckley v. California Coastal Com'n, 68 Cal.App.4th 178, 80 Cal.Rptr.2d 562, 571 (1998). [36] McCormick, Evidence section 339, at 437 (4th ed. 1992). [37] Imwinkelried, Edward J., Paul C. Giannelli, Francis A. Gilligan & Fredric I. Lederer, Courtroom Criminal Evidence section 2905, at 1090 (3d ed. 1998). [38] United States v. Williams, 132 F.3d 1055 (5th Cir. 1998). [39] United States v. Arteaga, 117 F.3d 388, 399 (9th Cir.), cert.denied, 118 S.Ct. 455 (1997). [40] McCormick, Evidence section 338, at 433 (4th ed. 1992). [41] Imwinkelried, Edward J., Paul C. Giannelli, Francis A. Gilligan & Fredric I. Lederer, Courtroom Criminal Evidence section 2907, at 1093 n. 19 (3d ed. 1998), citing Cal.Evid.Code section 600(b). [42] Wright, Charles A. & Kenneth W. Graham, Jr., Federal Practice and Procedure: Evidence section 5214, at 265-66 (1978). See Bowden v. McKenna, 600 F.2d 282, 284-85 (1st Cir.), cert.denied, 444 U.S. 899 (1979). [43] McCormick, Evidence section 337 (4th ed. 1992). [44] Id. at sections 339-41. [45] Imwinkelried, Edward J., Paul C. Giannelli, Francis A. Gilligan & Fredric I. Lederer, Courtroom Criminal Evidence section 2906, at 1093 (3d ed. 1998). [46] Id. [47] McCormick, Evidence section 338, at 437 (4th ed. 1992). [48] Imwinkelried, Edward J., Paul C. Giannelli, Francis A. Gilligan & Fredric I. Lederer, Courtroom Criminal Evidence sections 2909-12 (3d ed. 1998). [49] McCormick, Evidence section 338, at 437 (4th ed. 1992). [50] Id. at section 338, at 433 n. 2, citing Boeing Co. v. Shipman, 411 F.2d 365, 374 (5th Cir. 1969) and 5A Moore, Federal Practice para. 50.07(2) (2d ed. 1985).
[51] At common law, trial judges possessed the power to comment on the weight of the evidence. Quercia v. United States, 289 U.S. 466, 469 (1933); United States v. Jaynes, 75 F.3d 1493, 1503 (10th Cir. 1996). However, in some States, the trial judiciary no longer retains that power. Kalven, Harry & Hans Zeisel, The American Jury (1966): 419-21. In these jurisdictions, the judge may merely descriptively sum up the evidence. [52] Fed.R.Evid. 706, 28 U.S.C.A. [53] United States v. Rodriguez, 43 F.3d 117 (5th Cir.), cert.denied, 515 U.S. 1108 (1995). [54] McCormick, Evidence section 338, at 437 (4th ed. 1992). [55] Imwinkelried, Edward J., Paul C. Giannelli, Francis A. Gilligan & Fredric I. Lederer, Courtroom Criminal Evidence section 2908, at 1095 n. 30 (3d ed. 1998), citing Isaac v. United States, 284 F.2d 168 (D.C.Cir. 1960), McKenzie v. United States, 266 F.2d 524 (10th Cir. 1959), and People v. Murphy, 416 Mich. 453, 331 N.W.2d 152 (1982). [56] McCormick, Evidence section 338, at 436 (4th ed. 1992). [57] The jury has the power to nullify in the teeth of overwhelming prosecution evidence of guilt. Rose v. Clark, 478 U.S. 570 (1986) ("a trial judge is prohibited from entering a judgment of conviction or directing the jury to come forward with such a verdict . . . regardless of how overwhelmingly the evidence may point in that direction"); Smelcher v. Attorney Gen. of Alabama, 947 F.2d 1472, 1476 (11th Cir. 1991); United States v. Goings, 517 F.2d 891 (8th Cir. 1975); United States v. Bosch, 505 F.2d 78 (5th Cir. 1974); United States v. Lee, 483 F.2d 959 (5th Cir. 1973). The issue must be submitted to the jury even if the defense does not submit any contrary evidence disputing the prosecution testimony. United States v. England, 347 F.2d 425 (7th Cir. 1965); United States v. Jerke, 896 F.Supp. 962, 964 (D.S.D. 1995), aff'd sub nom. United States v. Reather, 82 F.3d 192 (8th Cir. 1996); People v. Lawson, 189 Cal.App.3d 741, 234 Cal.Rptr. 557 (1987). [58] McCormick, Evidence section 338, at 436-37 (4th ed. 1992). [59] Black, Bert, Francisco J. Ayala & Carol Saffran-Brinks, "Science and the Law in the Wake of Daubert: A New Search for Scientific Knowledge," Tex.L.Rev. 72 (1994): 715. [60] Ohio App.2d 183, 362 N.E.2d 1239 (1976). [61] Id. at 192, 362 N.E.2d at 1245-46. [62] State v. York, 564 A.2d 389 (Me. 1989). See also Moore v. Ashland Chemical Inc., 151 F.3d 269 (5th Cir. 1998) (Dr. Jenkins cited no scientific support for his theory). [63] Golod v. Hoffman La Roche, 964 F.Supp. 841, 860 (S.D.N.Y. 1997) ("biologically plausible"). [64] 99 F.3d 870 (8th Cir. 1996), cert.denied, 519 U.S. 1141 (1997). [65] 419 N.W.2d 886 (N.D. 1988). [66] Cooke v. Naylor, 573 A.2d 376, 378 (Me. 1990); State v. Bell, 57 Wash.App. 447, 788 P.2d 1109, 1112 (1990). For a collection of cases holding that small sample size can undermine statistical analysis, see Giannelli, Paul C. & Edward J. Imwinkelried, Scientific Evidence section 15-4(B), at 180-81 (1998 Cum.Supp.). See also Capra, Daniel J., "The Daubert Puzzle," Ga.L.Rev. 32 (1998): 699, 720. [67] Nelson v. Trinity Medical Center, 419 N.W.2d 886, 892 (N.D. 1988). [68] 118 S.Ct. 512 (1997). [69] Id. at 518. [70] Id. [71] 40 Cal.App.3d 69, 114 Cal.Rptr. 708 (1974). [72] Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579, 591 (1993). [73] Id. at 594. [74] 118 S.Ct. 1261 (1998). [75] Id. at 1265. [76] David L. Faigman, David H. Kaye, Michael J.
Saks & Joseph Sanders, Modern Scientific Evidence: The Law and Science of Expert Testimony section 1-3.7, at 22 (1999 Supp.) ("flipping a coin"). [77] General Electric Co. v. Joiner, 118 S.Ct. 512, 519 (1997). [78] Capra, Daniel J., "The Daubert Puzzle," Ga.L.Rev. 32 (1998): 699, 715. [79] United States v. Kime, 99 F.3d 870 (8th Cir. 1996), cert.denied, 519 U.S. 1141 (1997). [80] Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579, 592 n. 11 (1993). [81] 425 Mass. 807, 685 N.E.2d 739 (1997). [82] Budowle, Bruce et al., "Validation Studies of the CTT STR Multiplex System," J. Forensic Sci. 42 (1997): 701; Crouse, Cecelia A. et al., "Analysis and Interpretation of Short Tandem Repeat Microvariants and Three-Banded Allele Patterns Using Multiple Allele Detection Systems," J. Forensic Sci. 44 (1999): 87; Fregeau, Chantal J. et al., "Validation of Highly Polymorphic Fluorescent Multiplex Short Tandem Repeat Systems Using Two Generations of DNA Sequencers," J. Forensic Sci. 44 (1999): 133; Kline, Margaret C. et al., "Interlaboratory Evaluation of Short Tandem Repeat Triplex CTT," J. Forensic Sci. 42 (1997): 897; Lins, Ann M. et al., "Development and Population Study of an Eight-Locus Short Tandem Repeat (STR) Multiplex System," J. Forensic Sci. 44 (1999): 1168; Miscicka-Sliwka, Danuta & Tomasz Grzybowski, "High Microvariation Sequence Polymorphism at Short Tandem Repeat Loci: Human Beta-actin Related Pseudogene as an Example," Electrophoresis 18 (1997): 1613; Sprecher, Cynthia J. et al., "General Approach to Analysis of Polymorphic Short Tandem Repeat Loci," Biotechniques 20 (Feb. 1996): 266; Vandenberg, Nicholas et al., "An Evaluation of Selected DNA Extraction Strategies for Short Tandem Repeat Typing," Electrophoresis 18 (1997): 1624; Yamamoto, Toschimichi et al., "Allele Distribution at Nine STR Loci--D3S1358, vWA, FGA, TH01, TPOX, CSF1PO, D5S818, D13S317, and D7S820--in the Japanese Population by Multiplex PCR and Capillary Electrophoresis," J. Forensic Sci. 44 (1999): 167. [83] Commonwealth v. Rosier, 425 Mass. 807, __ n. 12, 685 N.E.2d 739, 743 n. 12 (1997). [84] Id. at __, 685 N.E.2d at 743. [85] Id. [86] Id. [87] Id. [88] Id. [89] Id. at __ n. 11, 685 N.E.2d at 743 n. 11. [90] Id. at __, 685 N.E.2d at 743. [91] As previously stated, under Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579, 592 (1993), Federal Rule of Evidence 104(a) controls this preliminary fact. Thus, the judge has plenary factfinding power and could choose to disbelieve the testimony by one side's expert. If the judge found the proponent's expert's testimony incredible, in effect the proponent would revert to stage #1--there is no credible evidence that the hypothesis has been validated. Alternatively, if the judge concluded that the opponent's expert's testimony was unbelievable, the proponent would be at stage #3--the proponent has laid a sufficient predicate, and there is no credible contrary testimony. [92] Capra, Daniel J., "The Daubert Puzzle," Ga.L.Rev. 32 (1998): 699, 710. [93] Respondent's Brief at 15, Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993). [94] Id. at 30. [95] See Capra, Daniel J., "The Daubert Puzzle," Ga.L.Rev. 32 (1998): 699, 704. [96] Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579, 595-96 (1993). [97] Id. at 596. [98] Ealy v. Richardson-Merrell, Inc., 897 F.2d 1159, 1161 (D.C.Cir.), cert.denied, 498 U.S. 850 (1990), discussing Ferebee v. Chevron Chemical Co., 736 F.2d 1529 (D.C.Cir.), cert.denied, 469 U.S. 1062 (1984).
The Ealy court stated that Ferebee was an example of "a classic battle of the experts, a battle in which the jury must decide the victor." [99] 1999 WL 152275 (U.S., Mar. 23, 1999). [100] Faigman, David L., David H. Kaye, Michael J. Saks & Joseph Sanders, Modern Scientific Evidence: The Law and Science of Expert Testimony section 29-1.7 (1997). [101] 897 F.2d 1159 (D.C.Cir. 1990), cert.denied, 510 U.S. 1193 (1994). [102] Id. at 1162. [103] Id. at 1160. See also Richardson v. Richardson-Merrell, Inc., 857 F.2d 823, 832 (D.C.Cir.), cert.denied, 493 U.S. 882 (1989). [104] Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579, 582-83 (1993). [105] Ealy v. Richardson-Merrell, Inc., 897 F.2d 1159, 1161 (D.C.Cir.), cert.denied, 498 U.S. 950 (1990). [106] Id. at 1162. [107] Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579, 584 (1993). See also Sanders, Joseph, "From Science to Evidence: The Testimony on Causation in the Bendectin Cases," Stan.L.Rev. 46 (1993): 1, 10. [108] Fed.R.Evid. 706, 28 U.S.C.A. [109] McCormick, Evidence section 339, at 437 (4th ed. 1992). [110] Faigman, David L., David H. Kaye, Michael J. Saks & Joseph Sanders, Modern Scientific Evidence: The Law and Science of Expert Testimony section 29-1.7 (1997). [111] Ealy v. Richardson-Merrell, Inc., 897 F.2d 1159, 1161 (D.C.Cir.), cert.denied, 498 U.S. 950 (1990). [112] Imwinkelried, Edward J., Paul C. Giannelli, Francis A. Gilligan & Fredric I. Lederer, Courtroom Criminal Evidence section 2907 (3d ed. 1998). [113] Id. at section 2912. [114] National Institute of Justice, U.S. Department of Justice, Postconviction DNA Guidelines (Aug. 28, 1998 Draft). [115] Fed.R.Evid. 706, 28 U.S.C.A. [116] Note, "Fighting Fire with Firefighters: A Proposal for Expert Judges at the Trial Level," Colum.L.Rev. 93 (1992): 473, 483 ("increasing at a modest rate"). E.g., Hall v. Baxter Healthcare Corp., 947 F.Supp. 1387 (D.Or. 1996).

------------------------------

Luncheon Address

Speaker: Thomas D. Pollard, President, The Salk Institute, La Jolla, California

Dr. Thomas D. Pollard: I am here today because 3 years ago I wrote a letter to the governing board of the National Research Council suggesting that the National Academy of Sciences should get involved in the issues being discussed at this meeting. I was motivated by the contrast between, on one hand, recently published epidemiological studies showing no linkage between silicone breast implants and systemic disease and, on the other hand, the huge awards to plaintiffs claiming that their implants caused a variety of systemic diseases. My letter, in a small way, provided some of the impetus for this meeting. In my view, we are dealing with the old issue of the two cultures: science versus law in this case; science versus the arts and humanities in the traditional C. P. Snow version of the question. Science is asserting itself and its methods as a force in society, and this is rejected by some. On some college campuses, majors in the arts and humanities speak out against science and its dominance in society, and I know there is concern in the other communities as well. In contrast, scientists generally look at the arts and humanities as being as interesting and important as what they are doing themselves. When it comes to the law, however, scientists are generally pretty mystified about what you all do. Thus, I think that we have a lot to learn from you and you from us. Scientists wonder particularly about the way the courts handle technical matters.
Thanks to meetings like this, these concerns are rapidly being transformed into thoughtful discussion and engagement and, hopefully, action on some fronts. The pace of this dialogue is increasing as the years go by. I was first aware of this issue through the 1993 Carnegie Commission report. In 1997, the National Research Council sponsored a symposium on this question. Last year Justice Breyer spoke at the annual meeting of the American Association for the Advancement of Science. Speaking as a concerned public citizen with little expertise on this subject, I find several things that disturb scientists about the way scientific and technical information is handled in the courts. The first concern from a scientist's point of view is the built-in conflict of interest of paid expert witnesses. Technical experts are being paid to say what the attorneys in the case want them to say. This is generally unacceptable within the scientific community. The scientific community is rich with debate, discussion, and judgments about technical issues. Our tradition is to do this work pro bono. For example, reviewing research grants, something so important that it can make or break an individual scientific career, is done by panels of scientists at the Federal granting agencies and at voluntary health organizations (e.g., American Cancer Society) who serve either pro bono or for a modest daily honorarium ($150 per meeting day). Service on these panels requires much uncompensated preparation, more than 1 day of preparation for each day of service. To serve on peer review panels at the National Institutes of Health, a scientist might devote 1 entire month of work per year for a period of 3 or 4 years. The same standard of pro bono service is applied at the National Academy of Sciences, where the National Research Council has, at any one time, hundreds of studies going on. The many individuals on these panels serve without compensation. Both at NIH and NRC, panel members are selected for their ability to render expert judgment and are screened very carefully for conflict of interest. Consequently, conflict of interest essentially never comes up in this setting. A second concern is the challenge of dealing with technical information. Science and technology are so broad that it is impossible for anyone, whether they are a scientist, judge, juror, politician, or member of the public, to comprehend the breadth and depth of knowledge in science and technology. It is just plain impossible. Like Judge Pointer, who apologized this morning for his lack of expertise in a particular technical field, I would have to make the same apology for my lack of knowledge in most areas of science--even in many areas of biology. No one, whether a judge or scientist, has the breadth of expertise necessary to cover the broad range of issues that might come up in the courtroom. On the other hand, the scientific community grows bolder about our ability to collectively understand the natural world. Scientists are confident that the methodology that we use will ultimately help us explain everything in nature, including really complicated things like people's behavior, even though no individual can possibly have the expertise to see the whole field of science and technology. Thus, when you feel a bit embarrassed about not having the technical expertise to deal with a particular case, just join the club. The rest of us lack this expertise as well. How do we deal with this inevitable lack of personal expertise?
Scientists deal with this by making technical advice a group effort, selecting groups of people to share their knowledge on panels that review public policy, grant applications, and scientific articles for publication. Collectively, a group of knowledgeable scientists can identify an unbiased, broad panel of experts on any topic, even though none of those appointing the panel individually may have the expertise to serve on or choose the entire panel. When I chaired the NRC Commission on Life Sciences, we appointed people to panels on diverse topics. For example, Shari Diamond served on the second NRC Committee on DNA Forensic Science. I did not know her, and I just met her here today, but the group assigned to select that panel made a lot of inquiries to find out that Shari, Jim Crow, who chaired the panel, and others were the right people for that job. None of us could have selected the panel on our own. Similarly, judges would benefit from diverse sources of advice in choosing a panel of scientific experts. A third concern is the weight of anecdote over scientific evidence in the courtroom, at least as it is reported in the media. Scientists reject anecdote as a source of knowledge and rely on the scientific method, instead, to discern the truth. This is not a problem just for the courts. For the second time in the history of the United States, a physicist was recently elected to Congress. His name is Rush Holt, a plasma physicist from New Jersey. I met Dr. Holt when he spoke to biophysicists about how to interface with politicians. He advised, "When you see your member of Congress, please don't confuse them with the evidence. Anecdotes are much more persuasive." A fourth concern is the potential for a mismatch of resources available to the two sides in a case when it comes to scientific and technical information. The scientific community tries to avoid this. When the NRC puts together a panel on a controversial topic, we make sure that the full range of opinions is represented and that everyone gets a fair hearing as equals when it comes to rendering a consensus judgment on a question. I will now share a few thoughts that might help the scientific and judicial communities to move together toward Justice Breyer's goal that "the law must seek decisions that fall within the boundaries of scientifically sound knowledge and approximately reflect the scientific state of the art." My first goal is to get neutral scientific expertise into the courtroom. You might ask: "Is there truly neutral scientific expertise?" I am absolutely convinced there is neutral scientific expertise, based on the long-term success of the National Research Council in dealing with very thorny issues. I suggest that you look very closely at the NRC as a model for how to get that technical information and make it available in the courtroom. I applaud those enlightened judges who have taken advantage of this opportunity, such as Judge Pointer with his expert panel on the health risks of silicone breast implants. A similar panel in Great Britain came to identical conclusions about the lack of evidence for a connection between the implants and systemic disease. The NRC has a committee considering the same issue. Their report is not yet out, but this study has been done in a way that should meet the satisfaction of both the scientific and legal communities. (Footnote: The NRC report published in July 1999 reached the same conclusions as the two judicial panels.)
NRC may be able to help the courts with broad technical issues such as those that come up in large class action suits. NRC has excellent ways to impanel people for studies; very strong conflict-of-interest guidelines and methods to avoid those conflicts; and a laudable history of getting a full range of opinions on their panels, which nonetheless reach a consensus in most cases. Occasionally, one or two panel members choose to make a minority report differing from the consensus on some aspect of the committee's conclusions, but the vast majority of reports have a consensus, even from a broad community. Individual scientists may be able to help the courts in their local communities. In his talk this morning, Dr. Lederberg explained why scientists might not be desirable as jurors. This rang a bell, because many of my scientific friends were not selected when they were called for jury duty. If this is common, the courts are throwing away a valuable community asset. If not acceptable as jurors, could scientists be asked to advise judges on technical matters? Serving pro bono like jurors, panels of scientists would probably be willing to provide this service to the courts, if the courts had some way of mobilizing this resource. Second, the scientific community might help with judicial education both in law school and as continuing education. The education of nonscientists, including lawyers, should concentrate on the process of science and the broad principles, rather than the detailed facts. The few detailed facts that a lay person can remember will be such a small subset of any field that they really are not that helpful. On the other hand, the process that scientists use in their work is extremely robust and easily understood. If all lawyers and judges (and hopefully even jurors) could understand how science is done, many questions about technical evidence would disappear. What you should understand is that scientific conclusions need to be based on the scientific method with testable hypotheses, good experimental design, and adequate controls, all carried out by people with strong technical credentials. This work then needs to be validated by critical peer review and published. Work is credible if it meets all these standards. Fortunately, it is relatively easy for someone who understands the scientific process to judge whether a particular source of technical information is credible or not. A small anecdote. One time, I came home from a peer review activity at another university and complained to my family: "You won't believe what those people are doing." My young daughter asked, "What are they doing?" I explained that the scientist thought that A, B, C, and D were all important variables in his experiment. In his first experiment he tried A and B, and in the next he changed both A and B. Nine-year-old Katie said, "You can't do that. You can only change one variable at a time." I tell you this story because it is relatively easy to tell good experiments from bad experiments, even if you do not know much about technical details. If you focus on the method, I think that judges, juries, and everybody else can understand what is good science and what is not good science. Third, the scientific community could help the judicial system with the art of presenting evidence in the courtroom. Outstanding lay presentations of scientific work rarely happen by chance; they happen by design.
If jurors and judges are befuddled about what technical experts are trying to tell them, it may be possible to help witnesses to be more lucid and easier to understand. Judge Pointer, again, made reference to a talk this morning, where the speaker unfortunately used technical terms with which he was not familiar. Plain English could have been used more effectively. The scientific and technical community might be able to provide some coaching to make technical presentations clearer. It is important to realize that science does not have all the answers. We are confident that the scientific method, rigorously applied, will eventually yield profound insights about the natural world, but at any point in time many things are incompletely understood. The scientific community is not only comfortable with uncertainty but views uncertainty as an opportunity to improve the state of knowledge. The two NRC reports on DNA Forensic Science illustrate how to deal with uncertainty and the growth of scientific knowledge. The first NRC DNA report acknowledged that the information at the time was not sufficient to rule out some possible false-positive matches of evidence samples with the DNA of innocent individuals accused of a crime. These false positives were conceivable because there was insufficient data in the scientific literature about the frequencies of various genes in particular populations. The first report identified these limitations, offered a temporary solution, and recommended more research. There was a bit of a hue and cry, because judges and juries were looking for more certainty. A few years later, the second NRC DNA report was much more definitive, because additional information became available about the genetic structure of different populations. It turned out that the various populations do not differ all that much from each other, and that statistical methods could put a limit on the chance of false-positive identifications. Consequently, this second study has had much more influence than the first one. This example illustrates how the scientific community can assess the state of the art and recommend new research to provide more certainty as time goes by. The National Research Council is interested in helping the courts. NRC has just had a Science and Technology Law Program approved. They are now seeking funds from foundations and other sources for this enterprise, and they plan many important things. They propose a panel on science and technology law to consider critical issues, like the ones you are discussing here. They have a long list of topics, such as the impact of tort law liability on research and innovation, the conflict between legal understanding and scientific understanding of risk, the impact of punitive damages, and intellectual property rights. They propose to host conferences and workshops, to file amicus briefs where appropriate, to carry out studies along the lines of the DNA study to advise the courts, to publish papers, and to host a Web site and internships. It sounds like a great program. I suggest you keep your eye on this Science and Technology Law Program as one contribution from the scientific community to the legal community.

------------------------------

Panel III. "Junk" Science, Pre-Science, and Developing Science

Moderator: James E. Starrs, Professor of Law and Professor of Forensic Sciences, The George Washington University, Washington, D.C.

Panelists:
Moenssens
Professor
University of Missouri at Kansas City School of Law
Kansas City, Missouri

Michael J. Saks
Edward F. Howrey Professor of Law and Professor of Psychology
University of Iowa Law School
Iowa City, Iowa

Carole E. Chaski
Executive Director
Institute for Linguistic Evidence, Inc.
Georgetown, Delaware

Dr. James E. Starrs: There are two changes that I should announce. The first change is that, since we are 20 minutes delayed in getting started and I'm the last speaker, I'm taking the last speaker's prerogative of continuing even though we may go into the break. So, if you wish to take a break in place or otherwise, that's fine, but I'm going to continue until the end. The second change is that, through the unanimity of the group here, we've agreed to change the title from "Junk Science." We've found a few other terms that we've toyed with and bandied about, such as "Catastrophic Science" and "Chaotic Science," but we've come up with a better title: "Crippled Science." So we're now going to be talking about crippled science. We're not going to take any special time to introduce each one of the members. The first speaker today is Professor Andre Moenssens.

Dr. Andre A. Moenssens: I don't know about the change in the title, but my first slide does not reflect the change. It's still "Junk Science" up there, as opposed to real science. I think if there's one overriding comment or observation that I have made to myself this morning, it is that there is an awful lot of tension between law and science--something, of course, all of us knew before we even got here. When I first agreed--or was asked--to define the difference between junk science and science, I believed that this would be rather easy, because perhaps I had certain candidates for the "junk science" label already in mind, and I also knew that I disagreed with others on whether that same label ought to be applied to certain other forensic sciences. But as I gave more thought to it, I came to realize that my task may well be impossible. Any attempt to even define junk science requires us to confront at least four different considerations.

Consideration number one: Is scientific knowledge, as defined in the Daubert case, synonymous with what the courts and legal commentators have commonly referred to as "scientific evidence," or with what crime laboratory personnel refer to as "forensic science"? For those of you who say, "Of course," I remind you that Justice Blackmun, who wrote the Daubert opinion, referred, in another case, to psychiatric evidence predicting future dangerousness as "scientific evidence." All of the courts have referred at various times to scientific evidence as including the testimony of handwriting examiners, microscopic hair comparisons, fingerprint identification, DNA analysis, bullet and firearms comparisons, bite mark identifications, and a variety of clinical findings by doctors, psychologists, and psychiatrists, as Dr. Gardner said this morning. And if we believe, as some do, that a recognized and established discipline in the forensic sciences doesn't truly engage in "scientific" analysis, but makes its determinations based on specialized skills that are gained through long experience and training, does that mean that that discipline is engaged in junk science?
Consideration number two: To what extent, if at all, is it even relevant to seek to determine whether what one person calls forensic science is truly "science" or, instead, opinion testimony based on specialized skills? Is it relevant? The Supreme Court, in the Kumho Tire case that's been referred to by several of the speakers this morning, said that all expert opinion testimony--even that based on specialized skills and knowledge but not necessarily on scientifically established principles--must pass the reliability/validity test.

Consideration number three: Separating the scientific wheat from the pseudoscientific chaff. To quote my esteemed colleague, Professor Paul Giannelli, from one of his recent writings: who should do the separating, scientists or lawyers? The easy answer would again be--or at least it's the answer that I have heard referred to here--that scientists should define science. Perhaps. But lawyers, as has been pointed out repeatedly today, mistrust nonlawyers defining legal concepts such as "scientific evidence." One of the major conflicts between law and science that we've discussed over and over again this morning is that lawyers would like to see science, when it is used in the courtroom, be, if not infallible, at least mostly accurate, mostly immutable, and certain. That is the very factor that, in the legal mind, makes the evidence also "reliable." In the scientific community, by contrast, knowledge is forever changing. It is adapting; it is sometimes reversing direction, and thereby also advancing. In the process of advancing scientific knowledge, science may also be correcting erroneous conclusions of the past, despite the fact that these now out-of-date conclusions may already have become embedded in our case law as legal principles that are due great deference, if not controlling effect. It's very hard for courts to abandon holdings and rules based on scientific tests--whatever "scientific" may have meant to a particular judge--that were adopted many years ago, in many jurisdictions, and by some eminent jurists.

Today, lawyers and the courts hail as nearly infallible scientific evidence the DNA analysis of bodily substances and cell material. We have, of course, become overwhelmed by the statistics that are designed to establish DNA's superiority, I guess you could call it that, over any other form of analytical evidence in existence. When DNA testimony is presented, how can we possibly assume innocence when highly credentialed experts talk about the odds of finding a random match being only one in 66 billion, with the most conservative calculation being one in 6.3 billion? It shouldn't be forgotten that, before we were pushed into numerical numbness by the astronomical size of the improbabilities, criminalists examining biological evidence testified to possibilities of random matches in the one-in-50,000 range, and that we, as lawyers, found those numbers to be so impressive as to be synonymous with near certainty. On the testimony of these serologists' conclusions, drawn from conventional antigen or enzyme typing of blood and semen stains, some defendants were wrongly convicted. They were released only after years of incarceration when, despite the evidence of the high odds of their guilt as established by "scientific" serological evidence, a reexamination of the evidence by DNA analysis positively excluded them as the donors of the biological materials.
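How much such numbers should impress us depends on what they are combined with. A minimal sketch, in Python, of how a reported match statistic interacts with prior odds; the suspect pool, the prior, and the laboratory error rate below are hypothetical figures chosen purely for illustration, not numbers from any case discussed here:

    # Illustration: combining a reported match statistic with prior odds.
    # The suspect-pool size and laboratory error rate are assumptions.

    def posterior_source_probability(prior_odds, random_match_prob, lab_error_rate=0.0):
        """Posterior probability that a matching suspect is the true source."""
        # Chance of a reported match even if the suspect is not the source:
        false_match_prob = random_match_prob + lab_error_rate
        posterior_odds = prior_odds * (1.0 / false_match_prob)
        return posterior_odds / (1.0 + posterior_odds)

    # Suspect drawn from a hypothetical pool of one million plausible sources:
    prior_odds = 1.0 / 999_999

    print(posterior_source_probability(prior_odds, 1.0 / 66_000_000_000))  # ~0.99998
    print(posterior_source_probability(prior_odds, 1.0 / 50_000))          # ~0.048

On these assumptions, a one-in-66-billion match remains close to conclusive, while the one-in-50,000 figures once thought synonymous with near certainty fare far worse against a large pool of alternative sources.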
When the Justice Department released its research report in June of 1996--the one that you found on the desk here when you checked in, Convicted by Juries, Exonerated by Science--almost half of the 28 cases that were examined also included non-DNA analyses of blood, semen stains, and hair samples that were testified to as implicating the defendant to a high degree of probabilistic certainty.

Impossible though it may now seem, perhaps the sense of security that DNA appears to offer to fact finders may in the future be challenged again by something that we can't even imagine now. Improbable? I like to think so. Far in the future? Perhaps. But those challenges are certain to come. In fact, next week a paternity trial is scheduled to commence in Connecticut in which a particular DNA test that was performed is going to be called junk science by a well-credentialed university professor of microbiology. The professor who is expected to testify on behalf of a putative father who suffered from the human papilloma virus, HPV, is Dr. Gordon Carmichael of the University of Connecticut, who asserts that HPV can have an effect on its host's DNA to a degree sufficient to skew the paternity test. Does Dr. Carmichael's testimony meet the Daubert test, which the courts in Connecticut, by the way, have adopted as State law? Is it good science? Is it junk science? Those are all questions to which I don't have an easy answer, but one fact I can state: the answer to whether we're dealing with junk versus real science isn't going to be made on the basis of whether we're dealing with a university-educated expert as opposed to one who learned to apply a scientific or a technical skill by having been taught by the apprentice method. Dr. Carmichael is a person with an extensive background in virology and in molecular genetics; he is a researcher--what the Daubert court would most certainly call a "scientist." Yet I have a feeling that many among you here will characterize his testimony as junk science. Should his testimony be excluded, not because Dr. Carmichael is not a scientist and hasn't done hard science in the laboratory, but because we don't accept the conclusion that he reached? After all, the Supreme Court has said that a trial court exercising gatekeeping sentry duty is permitted to do just that. If trial judges exercise their "wide latitude of discretion," as they are permitted to do under the decisions in Joiner and in Kumho Tire, their exercise of discretion should, at least we are led to believe, cause the reviewing courts to affirm decisions made on the basis of a judge's individual predilections. If that happens, as it seems to be happening right now, will we have advanced very far from where we were prior to Daubert?

You see, there are a lot of questions that I raised to which I offer no answers. Flying out to San Diego from Kansas City, I was reminded of a prominent Missouri poet who criticized the often-heard canard that nature never duplicates itself--that no two snowflakes are alike--in this poem that she wrote:

Two snowflakes are never the same,
the scientists agree,
but have they proved what they proclaim?
Not to me.
I watch the myriad stuff fall
and leave it up to chance
that there be among them all
twin snowflakes that dance.

Enough of these philosophical musings. Let me briefly cover some of the information that I was asked to give in the abstract.
Despite my professed inability to make a distinction between junk science and real science, I must now, of course, come up with a test whereby we can distinguish junk science from true scientific evidence, and I will continue to use the term "scientific evidence" in the broadest sense, as the courts have traditionally done and as Dr. Gardner described this morning. I suggest that we look for these things:

Number one: In any undertaking that involves expert inquiries, I would start by attempting to extract the guiding assumption that validates even making the inquiry to begin with. For instance, when it comes to fingerprint identification, the underlying assumption is that no two fingerprints that come from different digits are the same. So we ask ourselves, has it been proven that no two fingerprints are alike?

Number two is rather easy to verify: Does there exist a respectable professional literature that describes the discipline's purposes and the methods of achieving those purposes and that establishes how the underlying assumptions came to be known?

Number three: Are there protocols--accepted methods of proceeding--which will yield verifiable results that are accepted as accurate in the discipline?

And number four: Does there exist a rigorous training program for achieving proficiency in the discipline under the supervision of people with established credentials?

Now, if all four of these questions can be answered with an unambiguous yes, then in my view we're certainly dealing with scientific evidence--using the term, again, in its broad historical sense--or at least we are dealing with reliable and valid expert opinion testimony. If none of them can be met, I would say we're dealing with junk science. What if some but not all of these conditions are met? I'll leave it up to my copanel members and Dr. Saks to give you their ideas on whether the discipline then falls into the pre-science or the developing science category, or whether they have different categories for all of those examples.

Now, in the abstract printed in the program, I had suggested a look at four different examples--and I'm going to abbreviate that look somewhat--where we can legitimately question whether we are dealing with reliable expert opinion testimony, without bothering to define scientific knowledge, technical skill, or specialized skills under Rule 702.

Latent ear identification: The assumption is that all ears are different when they are studied in their minute details. Has this been scientifically established? I know of no study that has done so. The assertion, when it is made, is usually backed up by what? The "Snowflake Syndrome": nothing in nature ever duplicates itself. It has been established that ears have a variety of shapes. They may, in general, be round in form, oval, triangular, or rectangular. Anatomy books also tell us that we have very specific names for anatomical parts of the external ear, but aren't these parts class characteristics rather than individual characteristics? I saw a chart prepared by a person who took a case like this to court, who calls himself a pioneer of ear identification, in which an identification of two full ear photographs--photographs, now--was made on the basis of the correspondence of only two individual characteristics, a beauty mark and a vein.
The next question is, assuming that a person placed an ear against a door to listen to whether someone was inside and, later on, a latent ear print is discovered by a fingerprint technician, who develops the ear impression much as he or she would develop a latent fingerprint, can that print be identified? If we examine the scientific literature, there has been no validation of this; there has been no experimentation. But there are six people in this country, in different States, who have taken such cases to court, and the evidence has been admitted in at least four of the six cases that I'm aware of.

Footprint identification: Dr. Louise Robbins' "Cinderella analysis" is one of those fields where we also had a person with the background of a scientist--a physical anthropologist--who nevertheless went off the deep end and professed to be able to do routinely something that no one else could do, not even the people at the FBI laboratory. She could determine not only that a crime scene boot print matched the defendant's boots, which is fairly routine, but who was wearing the boots at the time the impression was made. We don't have to spend a lot of time on this, because at present no one is following in her footsteps. She, unfortunately for her, died a few years ago. But the aftermath of her work is still very much in the courts. You see, unfortunately, the disrepute into which some of these forensic sciences or disciplines have fallen has been fostered, and continues to be fostered, by overzealous prosecutors who sought these people out and who wanted them to testify to these facts, even in the face of opinions by their own State crime laboratories that said it was impossible to do so. Currently, there is a criminal prosecution going on in DuPage County, Illinois, against three former prosecutors and four sheriff's deputies for having woven a "tangled web" of false evidence--that's the court's language--setting out to frame a cohort of Mr. Buckley, whose case is mentioned in our slide there, Buckley v. Fitzsimmons. So, you see, these cases still go on. Next slide.

"Blue Light" Odontology: As I said, the "who wore the shoe?" testimony of Dr. Robbins isn't around anymore, but in a sense it's a little bit like the case of Dr. West, whose tribulations as a bite mark expert were written up in a feature article in the ABA Journal. Dr. West could make bite mark identifications where no other forensic odontologist even found a bite mark.

Since my time is up, I will forgo the last two "techniques," if you want to call them that, that I mentioned in the abstract. Advances in science occur sometimes in the most unexpected corners, of course, and we have to keep an open mind forever in exploring these possibilities. But despite the Daubert case and its progeny, I'm beginning to like the words of Frye v. United States, as interpreted in the cases just before Daubert was announced, more and more--even though, before Daubert, I was one of the Frye critics. Loosely paraphrasing that opinion: we should not admit evidence based on these novel ideas until it can be shown to be reliable by general acceptance in the relevant community of disinterested scientists. Proof of validity or reliability is what we're seeking to achieve, and we seek to achieve it by any factors that might be relevant to a particular discipline. Thank you.

Dr. Michael J.
Saks: There have been numerous references this morning to differences between lawyers and scientists, and I think there is no question that lawyers and scientists live their lives in two different intellectual universes. Consequently, I want to spend my first few minutes trying to help those not trained in science to acquire some gut-level appreciation for what it is that scientists are talking about when they refer to empirical testing as the touchstone for figuring out whether something is valid or not, whether something works or not. So I'm going to begin with a story that illuminates this essential point.

There is a condition you may have heard of--autism--the victims of which are unable to communicate and seem to have no interaction with their environment, certainly not with other human beings in their environment. Whether they are severely retarded or whether they simply cannot communicate is not clear. Several decades ago, a technique was developed in Australia which seemed to make a major breakthrough. It was picked up by people who work with autistic children in the United States and has come to be called "Facilitated Communication." The way it works is that a trained facilitator sits next to the autistic child, holds the child's arm, steadies the child's hand, and the child presses letters on a special keyboard. With that one bridge linking the person with autism to the rest of the world, it was widely believed that the child could communicate. Few breakthroughs could be more dramatic. Suddenly, these children could attend regular schools. They could have conversations with their parents--some of whom had not been able to exchange a single thought with each other their entire lives. They could do math. They could write poetry. They could do all kinds of things. It seemed obvious that this technique of facilitated communication worked, and worked extraordinarily well. One moment you had people who could not communicate, and the next moment they could communicate.

An extensive literature developed, consisting not only of case reports--and there were many successful case reports of this miracle cure--but also of discussions of the theory and the techniques. Protocols were developed to specify exactly how to use facilitated communication properly in order to maximize its miraculous effects. There were training programs. There was continuing education. Certification programs came into being. Professional organizations, societies, and journals came into being. Reports about facilitated communication were published in many journals, among them the Harvard Educational Review. The Columbia University Press published at least one book about it. A Center for Facilitated Communication opened at Syracuse University. Government guidelines came along, and government funding for treatment using facilitated communication.

One more thing happened. On the keyboards of the autistic children were typed allegations of serious criminal misconduct against people, typically family members or caregivers. And that brought the matter to court. And now a new question was asked: Are the statements that are being tapped out on the keyboard being authored by the person with autism, or are they being authored--however inadvertently and unintentionally--by the facilitator?
Based on all of the anecdotal evidence--all of the experience of countless professionals who had been using FC for decades in several countries around the world--the answer was: "Of course the statements are being authored by the autistic children. We've never questioned that. It's so clear. It's so obvious. It works." But at least two courts that I'm aware of ordered that a good, clear empirical test be designed and conducted. Did FC work, or was it no more than a modern-day Ouija board?

A typical test developed to see whether it really worked was quite simple. An apparatus was created whereby two pictures could be shown separately, one to the facilitator and one to the autistic person. Sometimes the pictures were the same: they both showed a boat, and what got typed out was "boat." Other times the pictures were different: the autistic child saw a sneaker and the facilitator saw a house. When that happened, about 99 times out of 100 the word that got typed out on the keyboard was what the facilitator saw.

This was such a clear, dramatic demonstration that assertions of efficacy were rejected by numerous courts. Testimony mediated by FC was not admitted into evidence. In addition, many schools and other special education facilities ceased using FC because of the research findings, much as the FBI has done in regard to so-called voiceprint identification. In spite of the heartfelt, sincere belief of those trained in and using FC that FC was valid and reliable and dependable (based on all of their own experience), the data could not be ignored.

To remind you, the point of this story is that I wanted to convey, particularly to those with no scientific training, the remarkable power of a well-designed empirical study to generate data that can answer an empirical question as no other kind of knowledge can. All of the testimony in the world from people who use any given technique every day, all of the journals, the conferences, the university centers, the existence of organizations and certification, and even satisfied customers, all prove remarkably little. Facilitated communication would have passed all of the items on Professor Moenssens' nonempirical litmus test with flying colors. By his test, facilitated communication would be a winner. The only thing it lacked was good, clear studies. And once those studies were done, a totally different answer emerged. The only way Professor Moenssens or I or you or anyone can know whether any technique--and we could list hundreds, thousands of them--works, produces valid results or not, has the claimed effects or not, is going to be if we can test it with well-designed, systematic empirical studies.

And that, in a nutshell, is what I take Daubert to stand for, especially the evaluation criterion of data on error rates. I take "error rates" to be a figure of speech for the larger concept of acquiring data, of testing empirically. For an expertise based on testable empirical claims to be admitted under Daubert and Kumho, it is going to have to survive some kind of reasonably convincing empirical testing to show that it is or can do what is claimed for it. Before Kumho, some courts were dividing the world of Rule 702 into scientific, technical, or "other"--and Daubert was applied only to the scientific, not to the technical or "other." That created the peculiar situation that the best tested and best understood kinds of knowledge were being put to a tougher test in order to be admitted.
Fields that had not developed empirical knowledge about themselves, that were poorly studied and of unknown reliability, were being admitted over a much lower threshold. It was as if the courts were saying: Those of you that cannot pass Daubert will be exempted from Daubert, for the very reason that you cannot pass it. On seeing that formula, a number of fields filed briefs with courts saying, in effect: You know what? We used to tell you we were scientists, and you used to let us testify because you thought we were doing science. But we're here to tell you that we were only kidding. We're not really scientists. And we don't want to be tested under Daubert. We prefer being treated as "technical or other." If you think I'm kidding, I could give you some cites to amicus briefs filed by professional associations, including forensic scientists, saying: Take our field out of the science column; we're changing our rhetoric.

I read the Supreme Court's recent decision in Kumho to say "no more" to that shell game. In any event, it will undoubtedly have that effect. For example, the Starzecpyzel court reasoned: "Were the Court to apply Daubert to the proffered [forensic document examination] testimony, it would have to be excluded. This conclusion derives from a straightforward analysis of the suggested Daubert factors . . . ." Kumho says: Yes, you do have to apply Daubert, regardless of the category a field wishes to place itself in. Putting Starzecpyzel together with Kumho, the only logical conclusion available is that the conclusions of forensic document examiners about the authorship of handwriting are not currently admissible. [Note: Two months after this conference, a Federal court reasoned in precisely this way and concluded that authorship opinions of forensic document examiners are inadmissible under Daubert and Kumho. See U.S. v. Hines, 1999 WL 412847.]

What is one to do with fields that do not have traditions of systematic self-testing? Are their claims to be taken on faith? Are the courts merely to accept the sincere and heartfelt self-assertions offered by members of those fields? What can a court do with fields that purport to be talking about the empirical world but have done little empirical research to evaluate themselves? I want to suggest that there are three ways this problem can be approached. I call these the Applied Science Model, the DNA Model, and the Black Box Model.

In the Applied Science Model, it could be that a field of forensic science is borrowing well-established methods from what I'll call normal science. Take chemistry as an example. If you become a forensic chemist and you apply the principles and the techniques being used in normal, nonforensic chemistry, then there would be a very good basis for a court to conclude that if it works in industry and it works in academic chemistry labs, then it will work when applied properly to forensic science problems. Handwriting identification, by contrast, cannot point to any basic science discipline from which it is borrowing its concepts or methods.

By the DNA Model, what I mean is an empirically based probability analysis. DNA typing has shown, largely through the work of population geneticists, how to calculate the probability of a coincidental (erroneous) match.
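What that calculation looks like can be made concrete. A minimal sketch in Python of a product-rule computation of the kind population geneticists use; the allele frequencies are invented, and independence across loci (Hardy-Weinberg and linkage equilibrium) is assumed:

    # Sketch of an empirically based probability analysis in the style of
    # forensic DNA typing. Allele frequencies below are invented.

    def genotype_frequency(p, q=None):
        """Hardy-Weinberg genotype frequency: p^2 (homozygote) or 2pq (heterozygote)."""
        return p * p if q is None else 2.0 * p * q

    def random_match_probability(loci):
        """Multiply single-locus genotype frequencies across independent loci."""
        rmp = 1.0
        for alleles in loci:
            rmp *= genotype_frequency(*alleles)
        return rmp

    # Hypothetical five-locus profile: (p,) homozygote, (p, q) heterozygote.
    profile = [(0.1, 0.2), (0.05,), (0.3, 0.1), (0.08, 0.12), (0.2,)]
    rmp = random_match_probability(profile)
    print(f"random match probability: 1 in {1 / rmp:,.0f}")  # about 1 in 217 million

The template--measure how often each feature occurs in a relevant population, then multiply across independent features--is what the DNA Model proposes to carry over to other identification fields.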
All forensic identification fields operate by the same basic notions of probability as DNA: there is an enormous amount of variability with respect to the features being examined, whether those are handwriting or DNA, fingerprints or striations on bullets. What the DNA Model suggests is that we go and measure that variability--measure how much variability exists in the relevant population. Then take the case at bar and, by measuring the observed elements against the background probabilities found in the larger database, calculate the likelihood that the crime scene evidence and a defendant's evidence share a common source. In the case of handwriting identification, experts would report to the factfinder the probability of a coincidental match associated with a conclusion that a ransom note and the defendant's writing came from the same hand.

If all else fails, we can resort to the Black Box Model. The black box model can be used with any claimed special skill--wine tasting, identifying and matching fingerprints, handwriting, anything. It can be done with groups, and it can be done with individuals. What one would do is present problems with known answers to experts for examination. For example, one could test handwriting samples, markings created by tools, two bullets that may or may not have been fired from the same gun, and so on. The people giving the test know whether the items had a common origin or not; the people taking the test do not. The answers given are compared to the answers known to be correct. This has certainly been done in the realm of what is referred to as proficiency testing. I would just take it one step further and use it as a technique to try to map the extent of special skill of various kinds of experts. How fuzzy can the latent print and the known inked print get and still produce a valid conclusion? Or how partial can the print be? In the instance of handwriting experts, by testing different kinds of FDEs, with various kinds of stimulus writings, under different testing conditions, using different methods of examination, one could eventually map the abilities and limitations of different types of FDEs examining different types of writing. By doing this, we can discover in what domains experts really bring some expertise that is over and above what a jury could accomplish on its own, in contrast to those tasks where they are near, or outside of, the borderlines of their expertise.

My recommendation is that any of these strategies would provide courts and everyone else with a much better ability to evaluate claimed expertise than is currently offered by self-proclaimed fields of expertise. Courts themselves play a large role in how good the data are that they receive from experts about claimed expertise. When courts set a very low threshold, they will receive little data about the expertise, and probably a low quality of expertise. When courts raise the bar, experts will work harder to get over the more demanding standard and ultimately offer the courts better evidence.
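The Black Box Model, in particular, requires nothing more sophisticated than answer sheets and a tally. A minimal sketch in Python, with invented trial records of the kind a proficiency test would generate; the examiner labels and test conditions are hypothetical:

    # Sketch of Black Box scoring: compare examiners' answers on problems with
    # known ground truth, then tabulate error rates by examiner and condition.
    # All trial records below are invented.

    from collections import defaultdict

    # (examiner, test condition, ground truth, examiner's answer)
    trials = [
        ("FDE-1", "full page",   "same source",      "same source"),
        ("FDE-1", "single line", "same source",      "different source"),
        ("FDE-1", "single line", "different source", "different source"),
        ("FDE-2", "full page",   "different source", "same source"),
        ("FDE-2", "single line", "different source", "different source"),
    ]

    def error_rates_by(trials, key_index):
        """Fraction of incorrect answers, grouped by examiner (0) or condition (1)."""
        wrong, total = defaultdict(int), defaultdict(int)
        for trial in trials:
            total[trial[key_index]] += 1
            if trial[3] != trial[2]:
                wrong[trial[key_index]] += 1
        return {key: wrong[key] / total[key] for key in total}

    print(error_rates_by(trials, 0))  # per examiner
    print(error_rates_by(trials, 1))  # per condition

Run over enough examiners, materials, and conditions, a tally like this is precisely the map of the borderlines of expertise suggested above.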
References

Faigman, Kaye, Saks, and Sanders. Modern Scientific Evidence, Volume 1, section 1-3.3.1[2]. West, 1997.

Risinger et al. "Brave New Post-Daubert World." 29 Seton Hall L. Rev. 405 (1998): n. 109 at 441-42.

Saks and Koehler. "What DNA Fingerprinting Can Teach the Law About the Rest of Forensic Science." 13 Cardozo L. Rev. 361 (1991).

Dr. James E. Starrs: Carole Chaski is cited as being one of the originators of this particular program. She is the next speaker.

Dr. Carole E. Chaski: [Dr. Chaski's remarks are presented in manuscript form.]

Linguistic Authentication and Reliability
Carole E. Chaski, Ph.D.

1.1.--Authorship in an Electronic Society

Many different types of crime and civil action involve documents whose origins or authorship must be authenticated. The traditional method of linking document with author has involved Questioned Document Examination, in particular handwriting or typewriter identification and/or ink dating. But our society is rapidly moving beyond pen, pencil, and typewriter; we produce more and more electronic documents. Documents composed on the computer, printed over networks, faxed over telephone lines, or simply stored in electronic memory preclude traditional handwriting identification. When the authorship of an electronically produced document is disputed, the analysis of handwriting and typing obviously does not apply; and in the case of networked printers--to which thousands of potential users have access--even ink, paper, and printer identification cannot narrow the range of suspects or produce a solitary identification. The language of a document, however, is independent of whether the document is written or printed or faxed or stored electronically. The question then arises: can the language of a document be used to link the document with the author?

Since the early 1900s, American courts have dealt with this question, from a legal perspective, in terms of the admissibility of language evidence. Table 1 summarizes what has been proffered as language-based evidence of authorship: punctuation, grammatical errors, spelling errors, sentence beginnings, "stylistic deviation." The judicial record makes two points clear: (1) admissibility is not uniform; (2) the techniques used for determining authorship rely on common misconceptions about language. Table 1 shows that most of what has been offered as language-based evidence of authorship is exactly the kind of common knowledge which is emphasized in American education: grammatical errors, vocabulary, spelling mistakes, punctuation, and style. Further, when the academic and forensic literature is examined, these same ideas come up repeatedly, although they are dressed up in academic jargon. For a technical review of the academic and forensic literature, see Chaski 1998a. Table 2 lists common misconceptions of language use and the academic/forensic techniques which correlate with them.

Now the question becomes much more interesting: do the techniques based on common misconceptions about language use actually work reliably and accurately to identify the authors of suspicious documents? This is a question that can be tested empirically, and my research fellowship at the National Institute of Justice focused on empirically testing methods of language-based author identification.

Before we turn to these results, there is another type of language-based author identification technique, based on style and literary interpretation--or literary imagination--which is currently enjoying some notoriety due to the JonBenét Ramsey case. The New York Times published an interview with Professor Donald Foster about his work as a language expert (Metro Section of City Edition, November 19, 1997). Included with this was his analysis of the ransom note, which begins "Listen carefully!" Professor Foster's analysis of these first two words follows.
The author imagines the text as a heard document, as in a film kidnaping or a literal dictation (one person speaking, the other writing). A cinematic thread . . . includes diction associated with films like "Ransom," "Dirty Harry," and "Speed." A corporate thread . . . includes diction associated with a chief executive officer, day-to-day business concerns or computer equipment, possibly indicating a businessperson as author, and/or someone wishing to implicate John Ramsey.

All of this is an interpretation of just the first two words! This is rather impressive, but it is not science. Science, unlike literary criticism, requires that the method of analysis be so clear that anyone who cares to can repeat the analysis and come up with similar results. The method must be objective so that anyone can do it. The method must be quantitative so that the procedure can be standardized. Science is about predictability. Literary criticism, on the other hand, strives for originality and dreads replication. What Professor Foster does may be excellent literary criticism, but it cannot be replicated, because it relies on subjective and nonquantitative interpretation. Therefore, Foster's work, as it is presented in The New York Times interview, cannot generate hypotheses that can be tested empirically.

2.1.--Empirical Testing of Nine Hypotheses

There are, however, nine hypotheses for language-based author identification suggested in the literature (for review, see Chaski 1998a). Many of these hypotheses have not been replicated in a forensically plausible way because, in fact, they derive from literary criticism. But it is possible to test these nine hypotheses empirically, because they are objective and quantitative. These are:

--1: Vocabulary richness identifies authors.
--2: Hapax legomena identify authors.
--3: Readability measures identify authors.
--4: Content analysis identifies/discriminates between authors.
--5: Spelling errors identify authors.
--6: Grammatical errors identify authors.
--7: Syntactically classified punctuation discriminates between authors.
--8: Sentential complexity identifies authors.
--9: Abstract syntactic structures differentiate and identify authors.

2.2.--Empirical Testing of Language-Based Author Identification Techniques

In order to test empirically the current techniques for language-based author identification, a Writing Sample Database was first assembled. Assembling a database for testing the hypotheses is an essential and time-consuming step by which nonscientists are often puzzled. But in true science, the results are only as good--as reliable--as the experimental design that produces them. If there is any question, for instance, as to who actually authored a document, then that document cannot be used experimentally to test a hypothesis. Therefore great care has been taken to ensure that the Writing Sample Database is designed properly and that data have been collected properly.

A set of four writers was extracted from the database in order to control for sociolinguistic factors which we know affect linguistic performance. This pilot subset mimics the kind of data which are actually obtained in real casework. In real casework, the analyst is typically given the unknown, suspect, or questioned document(s), and known writing samples from one or more potential suspects.
The task is to eliminate some or all of the suspects as the possible author of the questioned document(s) and, if possible, to identify one of the suspects as the possible author of the questioned document(s). In effect, the analyst must distinguish between documents written by different writers and cluster together documents written by the same writer. Both the questioned and known documents are typically short in word length. Since the investigators have already developed suspects for independent reasons in the typical case, the task of author identification in casework is circumscribed by the number of known sets and by the sociolinguistic characteristics of the known writers, such as age, race, sex, and education.

The parameters of real casework have determined the design of the empirical tests. First, the task in all the empirical tests that follow is the same: to distinguish between different writers and to identify documents by the same writer--some known and one unknown--using one particular technique. Second, the known writing samples were selected on the basis of demographic characteristics which would make the writers similar enough to qualify as a list of suspects. Also, from a theoretical perspective, we know that certain demographic characteristics affect linguistic performance, so a group of people sharing these sociolinguistically significant characteristics would very likely share dialect features. By selecting our "list of suspects" so that they share group or dialect features, we can test a language-based identification technique's ability to go to the individual (or idiolectal) rather than the group (or dialectal) level of linguistic performance. Based on both investigative practice and sociolinguistic fact, four writers were selected from the Writing Sample Database to form the Pilot Subset. The subject identification numbers and sociolinguistic characteristics of the four writers are shown in Tables 3 and 4.

Third, as in actual casework, the writing samples from these four subjects are short. The shortest text contains only 93 words; the longest, 556. Three texts were used from subjects 001, 009, and 080, while only two were used from subject 016, in order to keep the number of words from the subjects relatively comparable. In this way, subjects 001 and 080, and subjects 016 and 009, respectively, produced a comparable number of words. Since most questioned documents are short, the goal is to test techniques on short documents. In fact, it is important to develop techniques which can operate successfully on short documents, as the worst case scenario, even if long documents are available in particular cases. The textual characteristics of the Pilot Subset are shown in Table 5.

Table 5 also shows the number of words in the questioned document (QD). The QD text was selected by an intern at the National Institute of Justice from the documents generated by the four writers, typed into the computer, and identified as SQD2. The true identity of SQD2 was not revealed to the analyst until after the empirical tests were conducted. So the analyst knew that the document was authored by one of the four writers, but not which one.

3.3.--Results of Empirically Testing the Nine Hypotheses on the Pilot Subset

HYPOTHESIS 1: Vocabulary richness identifies authors.

Source: See Holmes (1994) [44] for review and references; Baker (1988) [78].

Methodology: Count the total number of words in the text; let N = tokens. Count the number of distinct words in the text; let V = types.
Calculate TTR and PACE for the texts of each writer. Compare each writer's TTR and PACE to the others'.

Tools: Type-Token Ratio (TTR) and PACE.
TTR = V/N
PACE = 1/TTR

*Note: Due to the small sizes of these texts, all texts written by each author were combined in order to count tokens and types. This could be a false move in a forensic setting if the "known" writing samples are not actually all written by the same writer.

Analysis: The TTRs of subjects 009 and 016 are very similar; likewise, the TTRs of subjects 001 and 080 are very similar. TTR clusters the texts from four writers into two groups; in each of these groups, texts from different writers are clustered together erroneously. The unknown writing sample, QD2, has a TTR which is very similar to those of subjects 080 and 001. QD2 was actually written by subject 016, not subject 080. If an analyst relied on TTR, he would mistakenly conclude that he was dealing with two known writers--the clusters of 009/016 and 001/080--rather than four known writers. Further, he would conclude that the questioned document was authored by the erroneous cluster 001/080, rather than reach the correct conclusion that it was written by subject 016. Not surprisingly, PACE (which is just the reciprocal of TTR) leads to the same erroneous inferences.

Replication Results: The hypothesis that vocabulary richness identifies authors has failed to be replicated successfully in a forensically similar test.

HYPOTHESIS 2: Hapax legomena (a Greek term for words "spoken once") identify authors.

Source: See Holmes (1994) [44] for review and references; cf. Ule (no date) [79].

Methodology: Count the total number of words in the text; let N = tokens. Count the number of words occurring once in the text; let V1 = types occurring once. Calculate the ratio of hapax legomena to tokens (HLR) for the texts of each writer. Compare each writer's HLR to the others'.

Tools: Hapax Legomena Token Ratio.
HLR = V1/N

*Note: Due to the small sizes of these texts, all texts written by each author were combined in order to count tokens and V1. This could be a false move in a forensic setting if the "known" writing samples are not actually all written by the same writer.

Analysis: The HLRs of subjects 009 and 016 are very similar; the HLRs of subjects 001 and 080 differ. HLR clusters the texts from four writers into three groups: 001, 009/016, and 080; in one of these groups, 009/016, texts from different writers are clustered together erroneously. The unknown writing sample, QD2, has an HLR which is very similar to that of subject 080. QD2 was actually written by subject 016, not subject 080. If an analyst relied on HLR, he would mistakenly conclude that he was dealing with three known writers--001, the cluster 009/016, and 080--rather than four known writers. Further, he would conclude erroneously that the questioned document was authored by 080, rather than reach the correct conclusion that it was written by subject 016.

Replication Results: The hypothesis that hapax legomena identify authors has failed to be replicated successfully in a forensically similar test.

HYPOTHESIS 3: Readability measures identify authors. Sentence length and word length both factor into most readability measures.

Source: See Ellis and Dick (1996) [55] for an example of this hypothesis; for sentence length and word length, see Holmes (1994) [44] for review and references.

Methodology: Select a readability formula. Apply the readability formula manually or by computer (e.g., through word processing programs). Compare grade level, etc., for each text to the other texts.
Tools: Readability formulae; possibly t-test or correlation statistics.

Readability Formula Pilot Test 1 Using the Pilot Subset

*Note: The Microsoft Word version of these readability formulae reports that the asterisked numbers may not be reliable due to an insufficient number of words in the texts.

Analysis: The readability scores for each author's set of documents look similar across all the authors. For instance, the Flesch scores all seem to be in the range of 60 to 70, on the average. Since there is variation among the documents in each author's set, the degree to which each author's texts are similar can first be measured. For this, the correlation statistic is feasible. The scores for texts written by each subject are, after all, highly correlated; each writer appears to be consistent across different texts in terms of readability scores, as shown below:

Correlation Matrices for Readability Scores Within Writers

One would expect these very high correlations to decrease if the scores from QD2 were added to the wrong writer's scores. But when QD2 is grouped with each of these different writers, these very high correlations do not decrease, and in fact stay consistently high across the board. If an analyst relied on readability measures, he might recognize that he was dealing with four known writers, but he would conclude erroneously that the questioned document could have been authored by any one of these writers, rather than reach the correct conclusion that it was written by subject 016.

Another way to analyze these data, implemented by Ellis and Dick in their work on Civil War correspondents, is to compare the readability scores of different writers by the t-test. Using the null hypothesis that there is no difference between the readability scores of writers who have previously been clustered by other techniques, consider the t-test results. What these probabilities tell us is straightforward: readability scores do not differentiate between writers of similar sociolinguistic characteristics (age, race, sex, educational level, and dialect background). It is doubtful, moreover, whether readability formulae are even capable of distinguishing between writers who differ in educational level and dialect background. The following data from an actual case involved three white men in their twenties. Two were Southerners with college degrees. One was a Northerner with 10 weeks to go before receiving his M.D.

Readability Formula Pilot Test 2 (Actual Case Data)

There is certainly no need for a t-test here! It is obvious that readability scores would never differentiate between the sets of known writers B, C, and D or lead to any one of them being eliminated from the authorship of the questioned document.

Replication Results: The hypothesis that readability measures identify authors has failed to be replicated successfully in a forensically similar test.

HYPOTHESIS 4: Content Analysis identifies/discriminates between authors.

Source: Kenneth Litkowski (personal communication).

Methodology: Classify each word in the document by semantic category. Analyze statistically the distance between documents.

Tools: A classification scheme based on semantic categories; linear discriminant functions for statistically computing the distance between documents.

Professor Donald McTavish ran the analysis of the pilot subset documents and returned an initial report, which was forwarded to me by Kenneth Litkowski.
Portions of this report are quoted in this summary, but in order to understand them, the reader must understand McTavish's way of labeling the texts, by number and letter, and how these labels relate to the Pilot Subset ID labels and the thematic topics in each document. These are listed in Table 10.

McTavish's comments on the C-scores, or Context-Scores:

. . . four texts (C,F,K,L) talk about goals, four talk about terror (A,D,L,G), four talk about influential people (B,E,H,J) and one (M) deals with anger. Looking at the 1x2 plot, those talking about goals are on an outer ring, the outliers plus L, which, like C and K, is somewhat more distant on dimension 3. The "terror" texts are generally high Traditional and low Practical. The "influence" texts are lower Traditional and lower Practical but B is an exception (high Traditional). In general there is strong patterning evident. At first I had expected some sort of pairing across the two arcs (B-M-A-E and L-G-D-H-J) but I haven't found the criterion if pairing is going on. . . . Overall, there is a pattern in the plots that probably connects with the patterns designed into the data if one knew more about the sources and conditions of the data. The outliers appear to be texts F, K, C, and perhaps I.

McTavish's comments on the E-scores, or Emphasis-Scores:

I had hoped that theme differentiation would pattern in more obvious ways. It appears that K and J are more positive outliers and M is an outlier in a more negative dislike direction. There is some patterning but it doesn't seem to connect well with discriminating authorship. . . . I can suggest that some texts are more different than the others (F, K, C, and perhaps I contextually; K, J, and M conceptually). K seems to be the one that is different in both respects.

Analysis: Semantic categorization of the texts groups together the texts which share the same topics (trauma/terror, influence, goals, and anger) through the clustering of Context-Scores. In one "arc" (B-M-A-E), texts from writers 001, QD2, and 009 are clustered, while in another "arc" (L-G-D-H-J), texts from 080, 016, and 009 are clustered. These arcs represent a similarity between 001 and 009, on the one hand, and 080, 016, and 009, on the other. Further, the first arc shows a similarity between the QD2 text and both 001 and 009. The Emphasis-Scores appear to cluster texts from all of the writers (F, K, C, I, or 009, 080, 001, and 016) "contextually" and two of the writers (K, J, M, or 080 and QD2) "conceptually."

If an analyst relied on Content Analysis's C-scores, he would mistakenly conclude that he was dealing with two known writers: 001/009 on the one hand and 080/016/009 on the other. Further, he would conclude erroneously that the questioned document was authored by 001/009, rather than reach the correct conclusion that it was written by subject 016. If an analyst relied on Content Analysis's E-scores, he would mistakenly conclude that he was dealing with two known writers, 001/009/016 on the one hand and 080 on the other. Further, he would conclude erroneously that the questioned document was authored by 080, rather than reach the correct conclusion that it was written by subject 016. McTavish himself recognizes that the semantic categorization of texts is not able to discriminate between authors, when he comments that "there is some patterning but it doesn't seem to connect well with discriminating authorship."
Replication Results: The hypothesis that Content Analysis scores identify authors has failed to be replicated successfully in a forensically similar test.

HYPOTHESIS 5: Spelling errors identify authors.

Sources: McMenamin (1993) [4]; Janet Randall, Ph.D. (personal communication); Ron Butters, Ph.D. (personal communication).

Methodology: List each spelling variant in the texts of each writer. Compare spelling patterns.

Tools: Spellcheckers or other dictionaries; knowledge of English spelling patterns.

Analysis: Given these lists, 001 and 016 appear to be "poor spellers," while 080 appears to be a "good speller" and 009 is probably a "good speller" who suffered a momentary slip of the pen. The 001 texts and 016 texts share one spelling pattern: the [e] before the suffix [ment] in 001's "developement" and 016's "arguement." 001's "uniquness" also involves [e] with a suffix, but this pattern cannot be related to other patterns outside the 001 set. The 001 texts and the QD2 text share a mislinearization of the graphemes [c, i, e] in 001's "recieve" and QD2's "espeically." The 016 texts and the QD2 text show no relation in spelling patterns. Other spelling errors, such as 001's "systematicly" for systematically or "mos" for months, or 016's "structoring" for structuring and "nite" for night, cannot be related to other patterns in these documents. If an analyst relied on spelling errors, he would mistakenly conclude that he was dealing with three known writers--the cluster 001/016, 009, and 080--rather than four known writers. Further, he would conclude erroneously that the questioned document was authored by 001 or the cluster 001/016, rather than reach the correct conclusion that it was written by subject 016.

Perhaps the spelling error technique requires more writers in the suspect set. In order to allow for this, another Spelling Errors Pilot Test was conducted. This time the texts written by the first 11 women in the Writing Sample Database were extracted, and each spelling error was listed, as shown in Table 12. The first 11 women range in age from 18 to 49, so there is less sociolinguistic control in the second pilot.

Analysis: Writers 002 and 011 share several very similar spelling error patterns. These are:

1. Errors with doubled consonants:
002: terifying [terrifying]
011: occuring [occurring], opressed [oppressed], impresionable [impressionable]

2. Errors with a doubled consonant before the suffix [ly]:
002: realy [real + ly > really]
011: politicaly [political + ly > politically], racialy [racial + ly > racially]

3. Errors with vowels preceding a nasal consonant:
002: behide [behind], frount [front]
011: aroud [around], beyound [beyond]

The nasal consonant is dropped in 002's "behide" for behind and 011's "aroud" for around. The vowel preceding the nasal consonant is expanded in 002's "frount" for front and 011's "beyound" for beyond.

4. Errors with the vowel [I] sound, as in "sit":
002: regestration [registration]
011: disfunctional [dysfunctional], travisty [travesty]

These spelling patterns are very similar, but they originate from two different authors. If an analyst relied on spelling errors, he would mistakenly conclude that he was dealing with one known writer--the cluster 002/011--rather than two known writers. Likewise, if the common conception of "poor spelling" is used, writers 002, 006, and 011 would be erroneously thought to be one writer, because these three writers are indeed "poor spellers." But these poor spellers are three distinct authors.
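The mechanical part of this technique is easy to automate; what is missing is the frequency data discussed next. A minimal sketch in Python, with a toy lexicon and invented texts standing in for the Writing Sample Database:

    # Sketch of the spelling-errors technique: flag tokens absent from a
    # reference lexicon and compare authors' error sets. The lexicon and the
    # two texts are toy examples, not Writing Sample Database data.

    import re

    LEXICON = {"it", "the", "crowd", "was", "and", "kept", "terrifying",
               "occurring", "behind", "front", "really", "around"}

    def misspellings(text):
        """Return the set of tokens not found in the reference lexicon."""
        return {t for t in re.findall(r"[a-z]+", text.lower()) if t not in LEXICON}

    doc_002 = "The crowd was terifying and kept behide the frount, realy."
    doc_011 = "It kept occuring aroud behide the front."

    shared = misspellings(doc_002) & misspellings(doc_011)
    print(shared)  # {'behide'}: a shared error pattern across different authors

Even in this toy example, the overlap on "behide" illustrates why shared error patterns alone cannot establish shared authorship.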
Similarly, the common conception of "good spelling" would erroneously lead an analyst to conclude that 008 and 010 are one and the same writer, because they are both in fact good spellers--but two good spellers, not one.

Finally, the spelling errors technique would be extremely difficult to quantify unless the documents were extremely long and contained repeated instances of spelling patterns. The technique is subjective in that "good" spelling and "poor" spelling can mean different amounts of spelling mistakes to different people. One spelling error may signal "poor speller" to one person on the jury, while five spelling errors may be required to signal "poor speller" to another person on the jury. The frequency of spelling errors is another issue which should be considered, as Goutsos pointed out with regard to McMenamin's spelling-based analysis. Even errors that appear to me, subjectively, as rare, such as the behide/aroud pattern, are not so odd that they cannot be shared, as shown by writers 002 and 011. Without frequency data it is almost impossible to figure out how to quantify observations based on spelling errors. Linguists who suggest spelling errors as individualistic do not, to my knowledge, quantify their observations, although I believe that McMenamin is considering this. It is very likely that spelling errors signify group behavior reflective of dialect background, education, and auditory processing abilities rather than individuality. Even children who invent their own spellings in preschool activities often follow general rules.

Replication Results: The hypothesis that spelling errors identify authors has failed to be replicated successfully in a forensically similar test.

HYPOTHESIS 6: Grammatical errors identify authors.

Sources: McMenamin (1993) [4]; Janet Randall, Ph.D. (personal communication); Ron Butters, Ph.D. (personal communication).

Methodology: List all grammatical errors in the text, using school grammar. Compare errors.

Tools: Prescriptive grammar books; the grammar checker in word processing software.

The first three numbers in each column represent the number of occurrences of the error in the first, second, and third text, respectively. The number in parentheses is the total number of occurrences of the error in the writing sample from the author.

Analysis: There are two ways to interpret these data. One is to read the rows, or error types, as indicative of authorship; the other is to read the columns, or error frequency, as indicative of authorship.

Reading the rows--error type--reveals the following patterns: 001, 009, and 016 all have run-on sentences. 001 and 016 have sentence fragments as well as run-on sentences. 001 and 009 have wrong verb forms as well as run-on sentences. 001 has subject-verb mismatches and tense shifts which no one else has, separating 001 from 016 in part. 009 has missing auxiliary verbs which no one else has, separating 009 from 001 in part. Thus, if an analyst were dealing with prescriptive grammar errors by error type, he would mistakenly conclude that he had six authors--a cluster of 001/016, a cluster of 001/009, 001, 009, 016, and the grammatically superior 080. SQD2 could, however, be correctly assigned to 016.

Reading the columns--error frequency--reveals the following patterns: 080, 009-02, and 016-02 have no errors. 001 and 009 have the same number of errors (9). 016 has the second highest number of errors (6).
Thus, if an analyst were dealing with prescriptive grammar errors by error frequency, he would mistakenly conclude that he had three authors--a cluster of 080/016/009, a cluster of 001/009, and 016. SQD2 could, however, be correctly assigned to 016 on the basis that there are not many errors in the text. Neither interpretation relies on a statistical test because there are too many zeroes in the frequencies. It would appear, then, that the grammatical errors technique, if error type is used, at least begins to take us to the right answer. It enables us to distinguish between the four writers, and it enables us to cluster the questioned document with the correct writer in the pilot subset, even if it does not enable us to cluster documents from each author correctly. But this result does not warrant a full-fledged acceptance of the technique, for five reasons. First, the whole notion of school grammar--the idea that a native speaker's use of his own language is right or wrong--violates all linguistic theory and descriptive linguistics. There is no defense for this technique's having been suggested by academicians trained in modern linguistics, except that it is what most people think of when they think of grammar, so it is easy to explain to juries. Second, since most nonstandard dialects are defined in terms of the standard school grammar, it is highly likely that the grammatical errors technique actually confounds class with individual characteristics. As mentioned earlier, handbooks on composition document that there are "10 most frequent errors" (comma splices, it's for its, etc.) found in most nonacademic writing (see, for instance, Berry 1971 [76]). So almost by definition grammatical errors belong to groups of people, not individuals. Third, because prescriptive grammatical errors are so well known and easy to explain, even computers can identify them and in most instances correct them. Word processing programs such as WordPerfect or Word contain grammar checkers which can resolve most of these errors for producers of electronic documents. If person A's known writings contain peculiar errors, but person B's writings are known to be grammatically correct, a clever A might spell-check and grammar-check the fraudulent document. Butters, a forensic linguist, for instance, has mentioned to me his belief that "you can't perform a rule you don't know." But you can get a computer's word processing program to perform a rule you don't know. This could lead the error-based analyst to the false conclusion that B authored the document actually composed by A (a false identification). Fourth, the grammatical errors technique is very difficult to quantify. Linguists who have suggested this method do not quantify their results. Partly, this is no doubt because quantifying the errors would involve quantifying the entire document. Suppose, for instance, that errors were counted as a percentage of all items, including the number of times the phenomenon was produced correctly. Then all instances of the phenomenon would have to be counted. It is simply much easier not to do this kind of quantification, and it is in fact not even part of the prescriptive grammar tradition to compare rates at which particular "errors" occur (although quantitative sociolinguistics, such as Labov's work, would require this kind of total quantification). Fifth, it is possible to keep the baby and throw out the bath water.
Analytical techniques based on descriptive linguistics are able to discern the same types of patterns--and more--without resorting to prescriptive grammar. Further, these same analytical techniques would enable us to quantify the entire document so that rates of particular phenomena could be ascertained.
Replication Results: The Grammatical Errors technique has been partially replicated but is still held in reservation due to theoretical and statistical problems.
HYPOTHESIS 7: Sentential complexity identifies authors.
Source: Svartik (1968) [59].
Methodology: Classify sentences into sentential categories. Count frequencies of each category. Test statistically.
Tools:
--Knowledge of sentential syntactic categories, such as simple, compound, complex, and compound-complex, or Svartik's own six clausal categories.
--Knowledge and use of the X2 (chi-square) statistic.
Analysis: Svartik's analysis of the confessions in the Timothy Evans case exemplifies both grammatical error analysis and the sentential complexity technique. Svartik repeatedly refers to Evans as an "illiterate" who uses "substandard" language. The underlying principle in sentential complexity analysis is the idea that some sentence structures are more complex than others and that people will differ in their abilities to produce different types of sentential complexity. The hypothesis that patterns of sentential complexity differentiate between writers can be tested statistically, and in fact Svartik used the chi-square test. Assuming the null hypothesis that there is no difference between the sentential complexity patterns of pairs in the pilot subset, what is the chance that these paired patterns come from the same author? The results are shown in Table 15. These probabilities suggest that writers 009 and 016 can be clearly differentiated by the sentential complexity method, because the chance of there being no difference between them is so extremely low (1 in 10,000). Further, writers 016 and 080 might be differentiated by the sentential complexity method, because the chance of there being no difference between them is almost acceptable in terms of statistical significance (6 in 100). More disappointing is that the sentential complexity method cannot strongly distinguish between the texts authored by 001 and 009, or 001 and 016, or 009 and 080, or 001 and 080. The chi-square results in Table 16 relate to the null hypothesis that there is no difference between the sentential complexity patterns in QD2 and those of each of the writers in the Pilot Subset. Since the truth is that there is a difference between the author of QD2 and authors 001, 009, and 080, we expect a very low probability of no difference in these pairings, but a high probability of no difference in the pairing of QD2 and 016. As Table 16 shows, however, these expectations are dashed. Indeed, there is no significant difference between the sentential patterns of QD2 and any of the writers. If an analyst relied on sentential complexity, he would mistakenly conclude that he was dealing with three known writers--009, 016, and a cluster of 001/009/080 texts--rather than four known writers. Further, he would conclude erroneously that the questioned document was authored by any of these three "authors," rather than the correct conclusion that it was written by subject 016. Svartik's measure of sentential complexity separated relative clauses from other types of subordinate clauses and counted compound verb phrases as separate clauses.
Although this counting may not be completely defensible within generative grammar, it points out that different measuring tools may lead to different results. In fact, measuring real, natural language is quite different from measuring edited language or textbook examples. Whenever the measuring device is vague, subjectivity can creep in. Therefore, it is advisable to reserve final judgment on the forensic suitability of sentential complexity as an identification technique until these methodological problems have been resolved.
Replication Results: The hypothesis that sentential complexity patterns identify authors has failed to be replicated successfully in a forensically similar test; however, this failure to be replicated may be caused by methodological problems in determining how to measure and count sentential complexity.
HYPOTHESIS 8: Syntactically classified punctuation discriminates between authors.
Sources: McMenamin (1993) [4] suggests that punctuation is idiosyncratic, but his approach does not include quantification. Pilot studies presented in a National Institute of Justice Research Seminar (Chaski 1996) [80] suggested that punctuation which is syntactically classified and subjected to statistical testing may be idiolectal. The methodology which follows comes from Chaski (1996) [80].
Methodology:
--List each punctuation mark.
--Classify each mark by its syntactic function, e.g., End-Of-Sentence period, comma separating main and dependent clauses, comma separating a phrase, comma in a list, etc.
--Test statistically the hypothesis that syntactically classified punctuation differentiates between writers.
Tools:
--Knowledge of punctuation and syntax.
--Knowledge and use of the X2 (chi-square) statistic.
Note: EOS means End Of Sentence; W means Word; dep means dependent or subordinate clause; S means Sentence.
Analysis: The underlying principle in punctuation analysis is the idea that punctuation reflects intonation, which is driven by syntactic structure (cf. Nunberg 1988 [81]; Meyer 1987 [82]). Punctuation is therefore a reflection of syntactic structure, or an alternate means of getting at syntactic structure. Punctuation is notoriously free in that rules for comma placement, for instance, are typically vague and underspecified. Because punctuation allows for options, it may also allow for individuality. The hypothesis that syntactically classified punctuation differentiates between writers can be tested statistically. Assuming the null hypothesis that there is no difference between the punctuation patterns of the pilot subset, what is the chance that these punctuation patterns come from the same author? Since the data is frequency of categories, the chi-square statistic is used, with the results shown in Table 18. The chances that the punctuation patterns from pairs of different writers are similar enough to conclude that the different writers are one and the same range from extremely small (1 in 10,000) to acceptably small (5 in 100). From these statistics, it can be inferred that punctuation patterns can differentiate between different writers. The chi-square results in Table 19 relate to the null hypothesis that there is no difference between the punctuation patterns in QD2 and those of each of the writers in the Pilot Subset. Since the truth is that there is a difference between the author of QD2 and authors 001, 009, and 080, we expect a very low probability of no difference in these pairings, but a high probability of no difference in the pairing of QD2 and 016. The mechanics of this test are sketched below.
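As a concrete illustration of the statistical step, here is a minimal sketch in Python. The category counts are hypothetical placeholders (Tables 18 and 19 are not reproduced in this plain-text file), and the scipy routine is simply one standard way to run the chi-square test on a contingency table:

# Chi-square test on syntactically classified punctuation counts.
# Rows are documents; columns are punctuation categories (in the order
# given by `categories`). The counts are hypothetical stand-ins for the
# frequencies reported in Tables 18-19.
from scipy.stats import chi2_contingency

categories = ["EOS period", "comma main/dep-S", "comma phrase", "comma list"]
known_writer = [40, 12, 7, 3]     # counts from a known writer's texts
questioned_doc = [25, 2, 15, 10]  # counts from the questioned document

chi2, p, dof, _ = chi2_contingency([known_writer, questioned_doc])
print(f"X2 = {chi2:.3f}, p = {p:.4f}, df = {dof}")
# A p value below .05 rejects the null hypothesis of no difference,
# suggesting different writers; a p value at or above .05 is consistent
# with both documents originating from a single writer.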
Table 19 shows that, as hoped for, there are very low probabilities of no difference when, in fact, the sources of the punctuation patterns really are different. When the sources of the punctuation patterns are the same--016 and QD2--however, the probability of no difference does not fall below the typical significance cut-off of p < .05; that is, no significant difference is found. It would be nice if this p value were really high, but anything larger than .05 is acceptable in terms of the chi-square test. A similarity coefficient will have to be developed in order to deal specifically with the issue of how similar two documents have to be in order to be classified as originating from one writer. It is safe, however, to conclude that, at least in this forensically similar task, the frequency of syntactically classified punctuation patterns is able to differentiate between different writers and cluster the documents of one writer, in a statistically significant way.
Replication Results: The syntactically classified punctuation technique has been replicated.
HYPOTHESIS 9: Abstract syntactic structures differentiate and identify authors.
Source: Chaski (1997a, 1997b, 1998b) [60, 61, 63].
Methodology:
--Parse the text using a generalized phrase structure grammar.
--Count structures and ratios between structures of related type.
--Test for differences between texts statistically.
Tools:
--Knowledge of phrase structure grammars; the ALIAS [registered trademark] computer program.
--Knowledge and use of the X2 (chi-square) statistic.
ALIAS, Automated Linguistic Identification Authentication System, is an electronic parsing system which is designed to quantify the structures in a text. As a relational database, it consists of the components shown in Figure 1. These components perform the tasks and relate to each other as described in Figure 2 below. Each text passes from the Writing Sample Database through each component to statistical analysis.
--Subject Info Database
  o stores sociological and dialectal information about each subject
--Writing Sample Database
  o stores the texts written by each subject, keyed to Subject Information
--Lexical Analysis Programs and Database
  o breaks the text up into words
  o assigns Part-Of-Speech (POS) labels
  o passes POS to Syntactic Analysis
  o sends quantification to statistical analysis
--Discursive Analysis Programs and Database
  o breaks the text up into sentences
  o assigns discourse function
  o passes sentences to Syntactic Analysis
  o sends quantification to statistical analysis
--Syntactic Analysis Programs and Database
  o combines POS into bar and phrase levels
  o combines phrase structures into sentences
  o sends quantification to statistical analysis
--Phrase Structure Database
  o stores phrase structures
  o parses to create phrases from POS
  o allows the user to guide parsing decisions
  o sends quantification to statistical analysis
--Output to Statistical Analysis
Statistical analysis enables us to determine identifying features, differentiating features, and idiolectal markers. A differentiating feature is a quantified syntactic pattern which passes statistical testing of significant difference. An identifying feature is a quantified syntactic pattern which fails statistical testing of significant difference. An idiolectal marker is a quantified syntactic pattern which has both differentiating and identifying functions when submitted to significance testing. (These definitions are restated as a decision rule in the sketch below.)
Results in Tabular Format: Since ALIAS parses each word of a document, and each phrase of a document, many syntactic features are available for analysis.
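Before turning to the tabular results, the three feature types can be restated as a simple decision rule, sketched below in Python. The .05 threshold and the pairing of a between-writer test with a within-writer test are my assumptions for illustration, not values stated in the study:

# Decision rule paraphrasing the feature definitions above. p_between is
# the p value from a significance test comparing documents by different
# writers; p_within compares documents by the same writer. The alpha of
# .05 is an assumed conventional threshold.
def classify_feature(p_between, p_within, alpha=0.05):
    differentiates = p_between < alpha  # passes testing of significant difference
    identifies = p_within >= alpha      # fails testing of significant difference
    if differentiates and identifies:
        return "idiolectal marker"
    if differentiates:
        return "differentiating feature"
    if identifies:
        return "identifying feature"
    return "uninformative"

# Using the p values reported below (verb phrases between writers, nodes
# per sentence within a writer): a single feature that behaved this way
# in both comparisons would qualify as an idiolectal marker.
print(classify_feature(p_between=0.0318, p_within=0.9117))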
For brevity's sake, only data which illustrates the concepts of a differentiating feature, an identifying feature, and an idiolectal marker will be presented here.
Analysis: The hypothesis that syntactic structures differentiate between writers can be tested statistically. Assuming the null hypothesis that there is no difference between the verb phrase patterns of writers 016 and 080 from the pilot subset, as shown in Table 20, what is the chance that these verb phrase patterns come from the same author? Since the data is frequency of categories, the chi-square statistic is used. When these frequencies are submitted to statistical testing, X2 = 19.739, p = .0318. The probability of no difference (same origin) is very low, which in fact coincides with the fact that the documents were authored by different writers. Thus, verb phrase features function as a differentiating feature in this case. On the other hand, the hypothesis that syntactic structures can identify or cluster documents written by the same writer can also be tested statistically. Assuming the null hypothesis that there is no difference between the complexity of sentences, as measured by nodes per sentence, in the writing of one author, as shown in Table 20, what is the chance that these nodes-per-sentence patterns come from the same author? Here we find a resounding failure of significant difference, X2 = .185, p = .9117, which is just what we would expect. The probability of no difference is very high because, in fact, these documents do come from the same origin. Thus, sentential complexity in terms of nodes per sentence serves as an identifying feature in this case. Finally, we need features which are able to distinguish between writers because they are used differently by different writers, but which also identify documents because they are used consistently by each writer. The ratio of prepositional phrase types--pp[p np], pp[p vp], pp[p p xp]--is a potential idiolectal marker which has both a differentiating and an identifying function in the comparison of sets of documents. First, the notion of consistency across documents authored by one writer can be tested statistically. The data on prepositional phrases from Table 22 were run through the chi-square test to determine the chance of no significant difference between subject 016's prepositional phrase types. The probability of no difference between 016's texts 1, 2, and 3 is very high, as expected, since these texts were authored by the same writer. Second, the notion of idiolectal difference across writers can be tested statistically. The data on prepositional phrases from Table 22, with additional data from writer 080's texts, were run through a chi-square test. The probability of no difference between 016's texts and 080's text is very low for two texts, as required, and relatively low for one text, since these texts were authored by different writers.
Replication: At this stage of research, more pilot subsets are being extracted from the Writing Sample Database in order to perform replications of the method on different writer sets. However, based on the results presented here, we can conclude that syntactic analysis looks like a very promising approach.
4.1 --Summary of Empirical Testing Results
It is generally agreed among both forensic linguists and traditional document examiners that no conclusion can be based on a single attribute.
The combination of attributes or results from many different techniques leads to the conclusion that a set of documents was authored by the writer of a particular known set or was not authored by any of the suspects. In line with this principle, Table 25 shows how disastrously dangerous many of the language-based author identification techniques are. The danger of these techniques is that justice could be subverted because certain ideas about language use which are commonly held but empirically indefensible could lead to false identifications or false eliminations. So the most important conclusion of my research, in my opinion, is the fact that techniques based on common misconceptions of language, used as a means of identifying authorship, are unreliable and inaccurate and should not be admitted as scientific evidence. The underlying ideas about language use may be held by either the American high school graduate or the language expert, but they are not a reliable foundation for authorship identification in court. The empirical results of the Pilot Subset studies also demonstrated that not all language-based author identification techniques are misleading or dangerous. Two of these techniques--punctuation patterns and syntactic structures--yielded results which enable us to differentiate between authors while clustering documents from each author, as shown in Table 26. While punctuation patterns may seem to be an obvious kind of textual phenomenon which both the American high school graduate and the language expert would pay attention to, the way that punctuation was used in the empirical test requires knowledge of syntactic structures and statistics. So while any juror or judge may notice that one document contains lots of hyphens while another does not, a juror or judge may well not notice that the hyphens in the one document are always syntactically conditioned in ways that are not available in the other document. In other words, even such an obvious feature as punctuation has to be handled in a nonobvious way in order to yield reliable results for author identification. Syntactic phrase structures, on the other hand, are the kind of phenomena which are not obvious to the American high school graduate or to the language expert who has not been trained in syntactic theory and analysis. To sum up, empirical studies of current language-based author identification techniques make two points clear:
1.--Techniques relying on common misconceptions about language are, predictably, unreliable.
2.--Techniques relying on linguistic science appear to accurately cluster and discriminate documents.
Legal conclusions can be drawn:
1.--The jury can rely on its own common misconceptions about language to erroneously determine the authorship of documents without having an expert make their mistake more certain.
2.--The jury may need an expert witness to help them not rely on common misconceptions about language.
3.--The jury may need an expert as a rebuttal witness to help them discount the claims of other experts who rely on common misconceptions about language.
Scientific conclusions can be drawn:
1.--The Daubert ruling is a great boon to all scientists who are seeking to develop forensic methods by applying the scientific techniques peculiar to their discipline.
2.--The scientists' or language experts' integrity, when high, is absolutely key to the development of novel forensic applications of basic science, and, when low, is the sure road to junk science.
3.--The limitations of real science, most often stated in statistical probability, are more honest than the grand conclusions of pseudo-science.
Bibliography (for Dr. Chaski's paper)
1. Donaldson, Russell G. 1985. "Admissibility of evidence as to linguistics or typing style (forensic linguistics) as basis of identification of typist or author." American Law Reports, Annotated 36 ALR4th 598.
2. Menicucci, Jeffrey D. 1977. "Stylistics evidence in the trial of Patricia Hearst." Arizona State Law Journal.
3. Squires, Susan. 1997. "Linguist developing scientific method to identify authorship." The Criminal Practice Report, 11, 24: 460-464.
4. McMenamin, Gerald R. 1993. Forensic stylistics. Amsterdam: Elsevier.
5. Black's Law Dictionary, Abridged Sixth Edition. 1991. St. Paul, MN: West Publishing Co.
6. Risinger, D.M., Denbeaux, M.P., and Saks, M.J. 1989. "Exorcism of ignorance as a proxy for rational knowledge: the lessons of handwriting identification 'expertise.'" University of Pennsylvania Law Review, 137: 731-787.
7. Risinger, D.M., and Saks, M.J. 1996. "Science and nonscience in the courts: Daubert meets handwriting identification expertise." Iowa Law Review, 82, 1: 21-74.
8. Hansen, Mark. 1997. Evidence Section. ABA Journal, May 1997: 76-78.
9. Tiersma, Peter M. 1993. "Linguistic issues in the law." Language, 69, 1: 113-137.
10. Giannelli, Paul C. 1993a. "Forensic science: Frye, Daubert and the Federal Rules." Criminal Law Bulletin, 26, 5: 428-436.
11. Imwinkelried, Edward. 1997. "Forensic science: Frye's general acceptance test vs. Daubert's Empirical Validation Standard--'either ... or' or 'both ... and'?" Criminal Law Bulletin, 33, 1: 72-84.
12. Johnson, Lynn R., Six, Stephen N., and Hamilton, Patrick A. 1997. "Deciphering Daubert." Trial, November 1997: 71-78.
13. Huber, Peter W. 1991. Galileo's revenge: Junk science in the courtroom. New York: Basic Books.
14. Giannelli, Paul. 1993b. "'Junk science': the criminal cases." The Journal of Criminal Law and Criminology, 84, 1: 105-128.
15. Hagen, Margaret A. 1997. Whores of the court: The fraud of psychiatric testimony and the rape of American justice. New York: Regan Books.
16. Levi, Judith. 1994. Second edition. Language and law: A bibliographic guide to social science research in the U.S.A. Teaching Resource Bulletin No. 4. Chicago, IL: American Bar Association.
17. Crystal, David. 1995. Review of Forensic stylistics. Language, 71, 2: 381-385.
18. Finegan, Edward. 1990. "Variation in linguists' analyses of author identification." American Speech, 65, 4: 334-340.
19. Morton, A.Q. 1978. Literary Detection. London: Bowker.
20. Morton, A.Q. 1991a. Proper words in proper places. Department of Computing Science Research Report R18, University of Glasgow.
21. Morton, A.Q. 1991b. "The scientific testing of utterances: Cumulative sum analysis." Journal of the Law Society of Scotland, 357-359.
22. Morton, A.Q. and Michaelson, S. 1990. The Qsum plot. Report CSR-3-90, Department of Computer Science, University of Edinburgh, James Clerk Maxwell Building, The King's Buildings, Mayfield Road, Edinburgh, EH9 3JZ.
23. Michaelson, S., Morton, A.Q., and Hamilton-Smith, N. 1977. "To couple is the custom." Department of Computer Science, University of Edinburgh.
24. Michaelson, S. and Morton, A.Q. 1973. "Positional stylometry." In Aitken, A.J. and Bailey, R.W., eds., The computer and literary studies. Edinburgh: Edinburgh University Press, 69-83.
25. O'Brien, D.P. and Darnell, A.C. 1982. Authorship puzzles in the history of economics: A statistical approach. London: Macmillan Press Ltd.
26. Mosteller, Frederick and Wallace, David L. 1984. Second edition. Applied Bayesian and classical inference: The case of the Federalist Papers. New York: Springer-Verlag.
27. Totty, R.N., Hardcastle, R.A., and Pearson, J. 1987. "Forensic linguistics: the determination of authorship from habits of style." Journal of the Forensic Science Society, 27: 13-28.
28. Hardcastle, R.A. 1993. "Forensic linguistics: an assessment of the CUSUM method for the determination of authorship." Journal of the Forensic Science Society, 33, 2: 95-106.
29. Sanford, Anthony J., Aked, Joy P., Moxey, Linda M., and Mullin, James. 1994. "A critical examination of assumptions underlying the cusum technique of forensic linguistics." Forensic Linguistics, 151-167.
30. Smith, M.W.A. 1989. "Forensic stylometry: a theoretical basis for further developments of practical methods." Journal of the Forensic Science Society, 29, 1: 15-33.
31. Smith, Wilfred. 1994. "Computers, statistics and disputed authorship." In Gibbons, John, ed., Language and the law. New York: Longman, 374-413.
32. Holmes, David I. and Hilton, Michael L. 1993. "Cumulative sum charts for authorship attribution: An appraisal." Forensic Linguistics Occasional Electronic Newsletter, Issue 2.
33. Hilton, M.L. and Holmes, D.I. 1993. "An assessment of Cumulative Sum charts for authorship attribution." Literary and Linguistic Computing, 8, 2: 73-80.
34. Dahl, H. 1979. Word frequencies of spoken American English. Essex, CT: Verbatim.
35. Kucera, H. and Francis, W.N. 1967. Computational analysis of present-day American English. Providence, RI: Brown University Press.
36. Foster, Donald W. 1989. Elegy by W.S.: A study in attribution. Newark: University of Delaware Press.
37. Bailey, Richard. 1969. "Statistics and style: A historical survey." In Dolezel, Lubomir and Bailey, Richard W., eds., Statistics and style. New York: American Elsevier Publishing Company, Inc., 217-236.
38. Yule, G. Udny. 1938. "On sentence-length as a statistical characteristic of style in prose, with application to two cases of disputed authorship." Biometrika, 30: 363-390.
39. Fucks, Wilhelm. 1952. "On the mathematical analysis of style." Biometrika, 39: 122-129.
40. Milic, Louis T. 1967. A quantitative approach to the style of Jonathan Swift. The Hague: Mouton.
41. Yule, G. Udny. 1944. The statistical study of literary vocabulary. Cambridge: Cambridge University Press.
42. Herdan, G. 1955. "A new derivation and interpretation of Yule's 'characteristic' K." Journal of Applied Mathematics and Physics (ZAMP), VI: 332-334.
43. Herdan, G. 1966. The advanced theory of language as choice and chance. New York: Springer-Verlag.
44. Holmes, David I. 1994. "Authorship attribution." Computers and the Humanities, 28: 87-106.
45. Miller, George A. 1996. The Science of Words. New York: Scientific American Library/HPHLP.
46. Miron, Murray S. and Pasquale, Thomas A. 1978. "Psycholinguistic analyses of coercive communication." Journal of Psycholinguistic Research, 7, 2: 95-120.
47. Miron, Murray S. 1990. "Psycholinguistics in the courtroom." In Rieber, Robert W. and Stewart, William A., eds., The language scientist as expert in the legal setting: Issues in forensic linguistics. New York: New York Academy of Sciences, 55-64.
48. Miron, Murray S. 1981. "The resolution of disputed communication origins." In Lass, N.J., ed., Speech and language: Advances in basic research and practice. New York: Academic Press, 405-466.
49. Miron, Murray S. 1983. "Content identification of communication origin." In Rieber, R., ed., Advances in forensic psychology and psychiatry. Norwood, NJ: Ablex, 113-146.
50. Chomsky, Noam. 1957. Syntactic structures. The Hague: Mouton.
51. Chomsky, Noam. 1965. Aspects of the theory of syntax. Cambridge, MA: MIT Press.
52. Stone, Philip J., Bales, Robert F., Namenwirth, J. Zvi, and Ogilvie, Daniel M. 1966. The General Inquirer: A computer approach to content analysis. Cambridge, MA: MIT Press.
53. Osgood, Charles E., May, William H., and Miron, Murray S. 1975. Cross-cultural universals of affective meaning. Chicago, IL: University of Illinois Press.
54. Martindale, Colin and McKenzie, Dean. 1995. "On the utility of content analysis in author attribution: The Federalist." Computers and the Humanities, 29: 259-270.
55. Ellis, Barbara G. and Dick, Steven J. 1996. "Who was 'Shadow'? The computer knows: Applying grammar-program statistics in content analyses to solve mysteries about authorship." Journalism and Mass Communication Quarterly, 73, 4: 947-962.
56. Biber, Douglas. 1988. Variation across speech and writing. Cambridge: Cambridge University Press.
57. Strauss-Larsen, Jamie. 1993. "Orality and literacy: A case study of how technology is changing the traditional models." M.A. thesis, North Carolina State University.
58. Ellegard, A. 1962. Who was Junius? Stockholm: Almqvist & Wiksell.
59. Svartik, Jan. 1968. The Evans statements: A case for forensic linguistics. Stockholm: Almqvist & Wiksell.
60. Chaski, Carole E. 1997a. "Who wrote it? Steps toward a science of authorship." National Institute of Justice Journal. Washington, DC: U.S. Department of Justice.
61. Chaski, Carole E. 1997b. "An electronic parsing system for document authentication." International Association of Forensic Linguists Biannual Meeting, Durham, NC.
62. Chaski, Carole E. 1998a. "Electronic parsing for idiolectal features in suspect documents." Linguistic Society of America Annual Meeting, New York, NY.
63. Chaski, Carole E. 1998b. "An automated language-based authorship system for document authentication." Questioned Documents Section, American Academy of Forensic Sciences Annual Meeting, San Francisco, CA.
64. Pollard, Carl and Sag, Ivan A. 1987. Information-Based Syntax and Semantics. Fundamentals, Volume 1. Stanford: CSLI.
65. Pollard, Carl and Sag, Ivan A. 1993. Agreement, Binding and Control. Information-Based Syntax and Semantics, Volume 2. Chicago: University of Chicago Press.
66. Douglas, John, Burgess, Ann W., Burgess, Allen G., and Ressler, Robert K. 1992. Crime classification manual: A standard system for investigating and classifying violent crimes. New York: Lexington Books.
67. Osborn, Albert S. 1910. First edition. Questioned documents. Rochester: Lawyer's Cooperative.
68. Osborn, Albert S. 1926. The problem of proof. Newark: Essex.
69. Osborn, Albert S. 1929. Second edition. Questioned documents. Albany: Boyd.
70. Conway, James V.P. 1959. Evidential documents. Springfield, IL: Charles C. Thomas.
71. Hilton, Ordway. 1982. Revised edition. Scientific examination of questioned documents. Boca Raton: CRC Press.
72. Harrison, Wilson R. 1958. Suspect documents: Their scientific examination. London: Sweet and Maxwell.
73. Eagleson, Robert. 1994. "Forensic analysis of personal written texts: a case study." In Gibbons, John, ed., Language and the law. New York: Longman, 362-373.
74. Pickett, Penelope O. 1993. "Linguistics in the courtroom." FBI Law Enforcement Bulletin, October, 6-9.
"Comparative stylistics and gramformprints: Their application in solving questioned writings." Focusing on lawyer-logotyped wills. Manuscript. 76. Berry, Thomas Elliott. 1971. The most common mistakes in English usage. New York: MacGraw-Hill, Inc. 77. Goutsos, Dionysis. 1995. Review article: Forensic stylistics. Forensic Linguistics, 2, 1: 99-113. 78. Baker, John Charles. "Pace: A test of authorship based on the rate at which new words enter an author's text." Literary and linguistic computing, 3, 1: 36-39. 79. Ule, Louis. No date. "The rare word fallacy." Manuscript. 80. Chaski, Carole E. 1996. "Linguistic methods of determining authorship." National Institute of Justice Research Seminar. 81. Nunberg, Geoffrey. 1988. The linguistics of punctuation. Stanford: CSLI. 82. Meyer, Charles F. 1987. A linguistic study of American punctuation. New York: Peter Lang. _______________________________________ Writing Sample Database --The Writing Sample Database was designed to take into account both general statistical sampling issues and linguistic performance. The decision factors for the writers (or experimental subjects) included the availability of subjects; writing as normal part of the subject's lifestyle; dialect similarity or dialect grouping; generally equivalent educational level; and representation of both genders and several ethnicities. Based on these factors, writing samples were collected from two groups: Criminal Justice majors at a community college and Business and Nursing majors at a private 4-year college. Table 2 shows the sex, age, and race distributions of subjects in the current Writing Sample Database. --The decision factors for the writing samples (or experimental tasks) included: genre or text-type parameters; similarity to actual types of questioned documents, e.g. suicide notes, threatening/ anonymous letters, etc.; and emotional level and home dialect. We know that the social context and communicative goal of a message affect its form. There are differences between the speech and the writing of each individual, differences between language behavior at home and at work, differences between language in a letter to a friend and an essay [56, 57]. Based on these factors, subjects wrote, at their leisure, on 10 topics, some of which are meant to elicit enough emotion to evoke the home dialect, while others are intended to elicit a more formal or workplace dialect. Topics are listed in Table 3. 1. Describe a traumatic or terrifying event in your life. 2. Describe someone or some people who have influenced you. 3. What are your career goals and why? 4. What makes you really angry? 5. A letter of apology to your best friend. 6. A letter to your sweetheart expressing your feelings. 7. A letter to your insurance company. 8. A letter of complaint about a product or service. 9. A threatening letter to someone you know who has hurt you. 10. A threatening letter to a public official (president, governor, senator, councilman, or celebrity. Table 3. Writing Topics for Writing Sample Database Appendix 1: Writing Samples From the Pilot Subset 001-01 Giving birth to my 4th child, 3 mos too early I was in a detox center and premature labor began. First of all, I should state I was in a detox center so I could give birth to a healthy child. I was gripped with unbelievable terror at the thought that my child was coming that early I didn't feel like he would have the opportunity to survive because I was an alcoholic and a crack cocaine user through out the whole pregnancy. 
The hospital did what they could to save the child but because of his low weight and under development he didn't stand a chance. The whole ordeal (took 12 hours from the onset of labor until the actual time of death and he died in my arms. I was helpless and totally powerless to do anything to help or ease his suffering. The doctors said that he didn't suffer, but really how do they know!! At that moment in time I believe I would have given my own life to save his. But now as I think who would have taken care of him or my other small children. I'm a single parent of 3 children. I believe my son gave his life so I could live and that's how I go on and stay clean and chemical free. 001-02 Numerous people and events influence me everyday in different ways. As far as me returning to school, I guess it would have to be wanting a better quality of life for my children and myself. The only way that I knew how to accomplish this is to return to school; and continue my education and show my children how important an education is now, so they don't have to wait until they are adults to get their education. Also the current job market had a high impact on my decision to get a degree, because there are no jobs available that would allow me to support my family effectively. We needed some financial security that a job at McDonald can't provide. 001-03 My Career goals is to achieve a BA in Behavioral Science Although I don't view it that way. I take it systematicly one thing at a time and one step at a time. First I will receive an AA in CJ May 96. Then I plan to switch to Wilmington College where I plan to earn my BA who knows may I go further and get a MA also. I hunger for the knowledge in this field because not only do I learn of the human condition and diversity of culture, I also learn of myself and how to handle everyday problems. We are all connected by some mannerism either by our uniquness or likenesses, also there is a thin line between the two. I like knowing the whys and that there is not one answer to certain questions. The more I learn the more I realize I don't know so it keeps me coming back. I like systematic approaches and the deviations to problems and solutions. This field has broaden my awareness that allow for trial + error. Fairness and "that's just the way it is." 009-01 One of the most terrifying events in my life was being held at gunpoint and told to get in the car by two men. All I could feel was dying without Christ in my life. I had a chance to run or get in the car. I was scared. I knew if I dying I would to go to hell and had not made peace with God. I am from a Christian background. So many things ran across my mind. All I could see was this big gun that looked as if it was a cannon. I got in the car, one drove and the other held the gun on me and told me not to look at them. The one guy told me if I looked he would kill me. By the way, the one that was doing all the talking didn't rape me, but made the other guy do it. I believe he was a pervert. I was too scared to cry but wanted the event to end. At that time, I lived in Baltimore and girls were being raped, killed and thrown out on the expressway or beltway. When he told me I should take you to New Jersey, I almost lost it. I remembered my background started praying. They finally let me go. He told me to get out and don't look back. I ran and ran until I reached an apartment with a light. No one would answer the door. I knocked on the door still no one would answer. 
I don't know how I arrived at my apartment, but I did. I jumped in the shower trying to wash his hands off but kept feeling his touch and remembering what had happened. I tried to tell my husband what happened but he was too high to listen. I didn't call the police because I felt I would be taken through the 3rd degree. I had seen it happen to too many women and nothing done. So I lived with it. I think about it sometimes now, but because Christ is in my life- that is what makes the difference! He has taken the hurt away. 009-02 I have been influenced by many people. A boss I had was very educated, independent, and aggressive. She was very successful and knew what she wanted and how to obtain it. She was a go- getter, not afraid to talk to anyone. When she appeared in a room, no matter what she was wearing, you could see the authority she had. Most women have to wear a suit to have that type of authority. My mother and father both have influenced me because they always succeeded at anything they went after. They taught me never to give up- "a winner is not a quitter" and a "quitter is not a winner." Anything you strive after you can obtain, a you work hard enough. Even though they were unable to receive a proper education, they instilled in me the importance of an education. Honesty and integrity as well as respecting other feelings were also important. There are other people who influenced me, especially those who have had great obstacles and other factors but still went on in spite of. There was a deaf lady that received a Master's Degree that influenced me because she had been a hearing person before which is much tougher than being born that way. She had developed a disease and lost her hearing but against all odds she received a Master's. According to her, she had no encouragement from outsiders but her family was very supportive. To me, this is most important. Family is an important factor in everyone life! Many more people would be successful if they only had family support. 009-03 My ultimate goal is receive a BS degree in Criminal Justice. With this degree my plans are to work extensively with juveniles and addicts. Since I have started I have mixed emotions about exactly what to do because I have found so many avenues to pursue in this field. I love people and concerned about their well being. Since I was involved in many things in the past but overcame them; I feel can be an asset to many people. Counseling has always been a desire but I had a family and they were more important at the time. People have always felt comfortable talking to me and relating their problems. I feel comfortable talking to anyone. I never been afraid to start a conversation. Therefore, counseling would be ideal. Another career goal is to own a bookstore with coffee shop (Gourmet) and a boutique. I love to shop but I hate to see too many of the same kind. Boutiques are unique, since they usually only have one or two of the same item, so much different from a department store. I would like to return to my first goal, education is priceless. Many times jobs are not obtained due to lack of education. I always have told my siblings, "don't ever give a person an opportunity not to hire you because of the lack of education or qualification." My oldest daughter obeyed my advice and completed. My son enlisted in the army, married and then entered workforce. Now he is pursuing his career in criminal justice. My youngest daughter has enlisted in the army after in the workforce for a few years. 
Maybe she will also take my advice and pursue a career and attend College. More important than all of the above I must be a success in my ministry. I would like to be a success in leading many teens, or anyone hurting, to Christ. After all is done, career, family etc. we all must give an account to Jesus as to what have we done for him and with him when He was offered to them. Our goals are only temporal to get us through this life! Most important, where will you spend eternity. God Bless! 016-01 I guess my most terrifying feeling is not being here for my two sons. My own mother died when I was 30, and I've always thought that I've sheltered and protected my sons as much, or more, than mom did me, I was the youngest of 4, and if I left this world early, I'm not sure how my boys would function. Both emotionally and physically. Emotionally, we are a very close threesome, relying and depending almost solely on one another, with me being a focal point for problems they find themselves unable to deal with. We talk about everything together, and I always find it amazing when their peers say things like "my mom doesn't treat me like yours"- I treat my kids as people who need structoring, raising and guidance- not as kids who 'belong" to me. I wonder if I die who my boys would hash over the week's happenings with. Who would they turn to for guidance and understanding- my family is of little help because I've raised my sons so differently, the boys father's family is of no help they're far away and don't even know the two guys. Physically my boys have been sheltered, once again from the cruel realities of today's world. At the ages of 16 and 20, they are only now becoming financially responsible, I have raised them to respect a dollar, but they are only now beginning to learn where that dollar has to go before it can go where they want it to go. If I left my children now they would be alone in that I have kept them mine- I have not involved them in financial matters, I have not forced them to accept and be with family members who do not see our "way"- my kids would survive -I have taught them that- but it would not be an easy survival- I worry for them- jobs are scarce, cost of living rises more each day- Being a parent is a very real fear. 016-02 I think my mother influenced me more than anyone. As a child, we were taught a lot of values but in ways that most kids couldn't pick up on. Like -we were seldom told "no"- we were told things like "if you choose to do such and such, these are the results, you make your decision. As teenagers we were given the choice to hang out where we wanted, with whom we wanted but we were told things like "If your grandmother sees you there, would she be proud and say "hi"? Or, "you are who you're seen with"- We were also seldom threatened, she did just as she said she was going to do-- we knew that if she said she was going to pour cold water on us next time we didn't get up out of bed on time-- that is exactly what she would do-- no second chance. 
Two stories that stick out in my mind are: she got tired of my sister and I arguing over who's turn it was to do the dishes; she said if we couldn't decide, she would solve the problem and decide for us--as kids, we never seem to learn, so the next nite, the same old s_t, and the next thing we knew--mom had opened up the window next to the table and thrown all of the dinner dishes out the window onto the lawn- she turned to us and said "now neither one of you have to do dishes- there are none left to wash- your only problem now is to explain this to your father when he gets home" (he was a truck driver.) The other thing I remember well is: I seldom "thought" to hang up my coat when I got home from school, it was always laying on a chair, or on the couch, -anywhere but where it should have been- She kept telling me to take care of it -finally she told me if I didn't, she was throwing it out in the snow. Well, one morning in January, I asked her where my coat was, and, you guessed it- in the Snowbank outside the kitchen door- left there from the nite before- I was born and raised in Houlton, Maine- in January, in Maine, it's pretty damn cold- Mom taught us to stand up for our beliefs, try to walk away from an arguement, and to treat others as you want to be treated. The other two people who have influenced my life are my 2 sons- I have raised them by myself and it has been interesting, heartbreaking, thankless, and one hell of an experience. But I wouldn't trade that experience for a ship full of hundred dollar bills. They have taught me to laugh from the inside, to look at the world from the ground up, and to never loose sight of who I am and who I'll be. Having those 2 has taught me to respect my own feelings, to show them (my feelings) in a way I can be comfortable with later- and to hold onto my goals- never loose sight of the future- the past is what made us what we are today- and mom was right "Someday I'll thank her for what she did'. 080-01 The scariest thing in my life was when the doctor told me I had to have a hysterectomy because my pap smear revealed positive cancer cells. My fear and the unknowing were awful. Would I have to have chemotherapy or radiation? Would I lose my hair. Would I die, and if so, how much would I suffer? I guess he noticed the fear in my eyes and tried to assure me that the cells were probably localized, but I was not buying this. He tried to assure me and calm my fears by stating that by removing my uterus, the cancer cells would not spread. The two weeks waiting for the surgery were hell. How would my children be if I died? Who would be there for them? I loved them so much and wanted to see them grow into adults. Most of the time I was scared- couldn't concentrate and cried when I was alone. At other times, I felt guilty for being so selfish. I would scold myself and tell myself that I had no control over this and it was out of my hands and I should just accept whatever happened. But the fear of the unknown is stronger than rational thought, and would rear its ugly head. Years later, I guess my scariest moment was unfounded- but who knows for sure? The scariest thing in my life so far has been the question of immortality. 080-02 My third grade teacher influenced me greatly. She was very intelligent, warm, and funny. She encouraged me and in so doing instilled confidence in me which up to that point was lacking. Because of her, I became a better student and proud of my accomplishments. 
Because of her quiet and praising manner, I loved going to school and tried harder to please her so she would bestow her warmth and praise on me. Through her guidance, I excelled that year, and became more aware of what I could achieve if I applied myself. 080-03 My career goal is to land a position where I could become free of working two jobs as I have in the past. I would like this to be a management position as I enjoy this. In addition, I am fond of travel, so this would be an asset as I am willing to relocate. Office management or human resource management are areas of interest to me. My goal is obtaining either of these positions with a corporation providing employee benefits. Primarily, however, I am interested in a Monday to Friday job that would provide an adequate salary so I could enjoy weekends. SQD2 A lot of things anger me but nothing makes me really angry. I've pondered this question for a couple of hours and can't come up with one single factor. I can describe lots of small, irritating examples - but no one large "thing". Injustice makes me angry- treating all people the same in any system- people are all different- all circumstances are different- no one person is exactly like another- stereo typing people- that makes me angry- commercials on TV that ask for money to feed starving kids over seas makes me angry (Sally Struthers looks like she could give up a meal or two)- has anyone really looked in their own neighborhood lately? What about those kids down the street? Maybe they're hungry, too. People who are capable of working but don't - or won't- make me angry- kids who say "I can't" make me angry- people who live in perfect worlds created by money- make me angry. Disease -especially cancer- makes me angry. Cancer stole my mother at 52, and she never harmed a single living thing- and bore such pain, never complained- her death made me very angry- Families who don't appreciate one another make me angry. Wives who take advantage of their mate- and vice versa- make me angry. Our country's system of child support paying makes me angry- one person suffers, one person gains- and the kid gets nothing -is often the case. Or like my children- no support at all- and no help from welfare- because I lived with my parents, or because I "make too much money:- is that after taxes? No, that's before Uncle Sam takes his share- Incompetence in the work place makes me angry. If you can't do the job- let someone who can do it, do it- Blacks who use "prejudice" like the term "thank you" make me angry. Whites who can't envision a black president make me angry. people who don't vote make me angry- Seaford's school system makes me angry. Kids who go to college and goof off, make me angry. [End of Dr. Chaski manuscript.] Dr. James E. Starrs: My remarks today are directed at the dread scourge of the scientist as a hired gun in the legal system. Like the appellation "Philadelphia lawyer," in the legal profession, to be dubbed a hired gun scientist is more than a sign of disapprobation. It also leads inexorably to the conclusion that a junk scientist has been loosed upon the courts. We do well to remember, in these hyper-critical times, that Michelangelo and Bach were, in today's pejorative verbal coinage, "hired guns." Yet their artistic productivity has rewarded us, generations later, by adding more than a modicum of sublimity to our lives. 
Is there good reason, therefore, for open hostility to the hired guns of science who flock to courtrooms across the land when we spontaneously applaud those in the artistic community for the glory of their works produced under the aegis of persons employing them as hired guns? Money is not necessarily the root of all evil in the courtroom setting, nor is it in the theater of the artist. Calling an expert a hired gun in his courtroom testimony is merely a facile way of shifting the burden of proof to the expert to demonstrate that he has not been corrupted by monetary interests to voice opinions of the nature of junk science. The beauty of Bach's and Michelangelo's creations is self-evident. Not so the opinions of hired guns who, in the courtroom, must prove themselves to be entitled to respect and affirmation. The hired gun can be classed among society's undesirables, whether literally a paid killer (a hit man) or, more expansively, simply one who, automaton-like, does his master's bidding. The hired gun is marked by a lack of independent thought and a commitment to a particular course of action, not of his own choosing. By all accounts, he is to be disdained, shunned, and cold-shouldered into oblivion, at the very least. When the hired gun insinuates his way into the legal system, he is customarily garbed as an expert witness. The expert witness who appears as a hired gun is usually one who is signaled by the frequency of his courtroom appearances and by the fees, oftentimes beyond the norm, for those appearances. Another indicium of the hired gun expert is his penchant for regularly supporting one side or the other in civil litigation or in criminal prosecutions. Scientists and nonscientists alike can fall prey to being stigmatized as hired gun experts. Most frequently, the expert witness who is typed as a hired gun is compelled to run the gauntlet on this issue during cross-examination on the trial of the case. But it is not only the credibility and, concomitantly, the weight of the expert's opinion which is diminished by his testifying in a hired gun capacity. With the advent of Daubert, hired gun experts are on notice that the admissibility of their opinions may be contested on account of the bias reflected by their being denominated hired gun experts. The Fifth Federal Circuit in Watkins in 1997 put the matter plainly and succinctly, viz., "application of the Daubert factors is germane to evaluating whether the expert is a hired gun . . . ." And yet the Daubert factors or guidelines for the exercise of a trial judge's gatekeeping function, when presented with expert testimony, do not either explicitly or necessarily guarantee that the hired gun expert will be exposed and banished from the courtroom. These factors, termed "general observations" by Justice Blackmun in Daubert, are five in number. They all look to the reliability of the expert's principle or method. Is it testable (that is, falsifiable)? Has it been peer reviewed? What is its error rate? Are "standards" controlling the technique's operation in existence and maintained? And lastly, has the principle or method been generally accepted within the relevant scientific community? Justice Blackmun was at some pains to point out in Daubert that these five guidelines were not, nor should trial judges consider them to be, the only guidelines to assure the reliability of scientific testimony.
In a paragraph prefacing these "general observations," Justice Blackmun, with crystalline clarity, indicated that this listing was not to be a "definitive checklist or test." Nor, in its afterdays in the opinions of the Federal courts, has it been. Although the United States Supreme Court, in Daubert, did not speak explicitly of junk scientists or even junk science, eschewing the arresting phrasing of Peter Huber in his Galileo's Revenge, still there are those courts that have construed Daubert to signal a forthright effort to rid the Federal courts of junk scientists. Other Federal courts have construed Daubert as motivated by an antipathy toward junk science. Whether or not Daubert is to be interpreted as a junk science-inspired or a junk scientist-inspired decision is really unimportant, for I would maintain that there is an interconnectedness between the two. In countless instances, to paraphrase Benjamin Franklin's Poor Richard's Almanac, for want of a junk scientist, junk science is lost. The same theme can be expressed in an adaptation of Justice Scalia's brief concurring opinion in the recently decided Kumho Tire case. Whereas Justice Scalia referred to those occasions when the expertise is fausse and the science is junky, it could be rightly said as well that when the expertise is fausse, it is likely that the science will be junky. But not always. The method developed by a prominent toxicologist, Dr. Umbarger, in the New York City Medical Examiner's office, for the post-mortem detection of exogenously introduced succinylcholine chloride (a muscle relaxant) as the agent by which Dr. Carl Coppolino had committed murder was a new and untried technique, one devised solely for this prosecution. Some might call Dr. Umbarger's method junk science. It certainly would not have withstood challenge under a strict application of the Daubert guidelines. But no one could impugn the professional integrity of the toxicologist involved, unless he could be seen to have been motivated by a litigation bias. Junk scientists come in a full spectrum of guises. As Nobel laureate Irving Langmuir noted when he coined the phrase "pathological science," one of the hallmarks of pathological science is that the scientist who espouses such outre theories meets criticisms with "ad hoc excuses thought up on the spur of the moment." Rather than accepting criticisms and seeking, as a scientist should, to test his own hypotheses more rigorously in light of these criticisms, the purveyor of pathological science--call him also a junk scientist--is so subjectively wedded to his own theories and methods that he rejects criticisms out of hand, meantime putting forward factitious arguments in his own behalf. But my unbridled angst, on this occasion, is directed at the scientist whose science is oriented to the process of litigation to the extent that his opinion is warped in the making by his overweening litigation consciousness. He may think of himself as a forensic scientist but, for him, being a forensic scientist is plainly an oxymoron. The courts, both Federal and State, have wrestled with the task of defining this unseemly litigation bias of the scientific expert. Some have remarked that the clearest indication of litigation bias lies in the expert's having conducted his research solely for this litigation. Others have emphasized the fact that the method employed by the expert has limited nonjudicial uses. Still others find an opinion developed outside the ordinary practices of the expert to be suspect.
And, of course, there are those courts that tie the proof of litigation bias to the lack of scientific objectivity in the work and work product of the expert. None of the courts have declared their ability to know it when they see it, however.

Once an expert's litigation bias has been put in issue, the courts are not uniform in adopting a remedy appropriate for it nor in the proper procedure to test the claim of litigation bias. Some courts, following the view of the Third Circuit's decision in United States v. Downing, which prefigured and was heavily relied upon in Daubert, consider the litigation bias of the scientist to relate to the admissibility of his opinion. Litigation bias thereby becomes a factor for the trial judge, acting as a gatekeeper, to evaluate in deciding to admit or reject the scientist's opinion. Tennessee has actually taken this position by a statutory formulation of it. Other courts address the consequences of litigation bias quite differently. To them, the qualifications of the scientist are being questioned by this challenge to the scientist's opinion. And yet it would seem that if a patently biased gang member can be qualified as an expert on a gang's covert code, and a medical doctor can appear both as a defendant in a medical malpractice suit and as a scientific expert in his own defense, then a demonstrated litigation bias by any expert should not be a disqualifying factor. Federal Rule of Evidence 702 does not impose a requirement of impartiality on the qualifications of experts. Rule 702, in Judge Posner's words, is more latitudinarian than restrictive in defining who is to be qualified as an expert. However, the litigation bias of the expert certainly goes straight to the core of his credibility. In that regard, the weight of his opinion can legitimately be questioned for the litigation bias it reflects. In sum, it may be that the proponent of an expert shrouded in a litigation bias will have to counter an in limine motion to declare the expert's testimony inadmissible as well as a searching cross-examination on the issue of the litigation bias of the expert before the fact finder on the trial of the case.

When it comes to proving the presence of the litigation bias of an expert, Peter Huber reminds us that "data-dredging" is anathema to a quest for truly scientific knowledge. As Huber puts it, "the data-dredger takes data that do not coincide with his theory and explains them away and those that do fit are lovingly retained." Another author, in a recent Skeptical Inquirer article entitled "The Perils of Post-Hockery," elaborated upon post-hockery, a pernicious form of data-dredging, as a bias toward confirming one's beliefs through the use of a double standard. Great weight is given to the evidence supporting the chosen theory and little or no weight is given to the evidence contradicting it. I have found post-hockery to be a commonplace occurrence at the FBI's Laboratory, that is, if three out of three occurrences which have come to my attention make it commonplace. Two of these cases involved fingerprint identifications and one concerned bunter marks on the headstamps of cartridge cases. In all three cases the FBI Laboratory reported results of tests which implicated the accused.
But when these incriminating laboratory conclusions were contested by the defense in in limine motions, the FBI Laboratory then and only then went data-dredging in a post-hoc effort to obtain the data it demonstrably needed to buttress the opinions at which it had arrived without such data. This type of post-hoc scientific backpedaling is a glaring and a truly worrisome illustration of litigation bias.

All this being said, it would seem that if it is proved "that what's going on here is not science at all, but litigation" (in the words of Judge Kozinski on the Daubert remand), then the scientific method has been jettisoned by the expert, and so also should the expert's opinion be, if Daubert is to be given full and fair rein. But Judge Kozinski, in a footnote on the Daubert remand, posits that the litigation bias of an expert should "obviously not be a substantial consideration" where the expert's "scientific endeavors (are) closely tied to law enforcement." Since the courtroom is the "principal theater of operations" for such scientific enterprises as "fingerprint analysis, voice recognition (and) DNA fingerprinting (sic) among others of a similar nature," the litigation bias, being inherent, should be unobjectionable.

Judge Kozinski's views on this matter are deeply troubling. What he has done is to carve out an exception for "law enforcement" laboratories from the rigors of scientific detachment. On the contrary, law enforcement laboratories should be obliged to play on the same scientific playing field and according to the same rules as defense experts. Indeed, it is unconvincing to say that just because law enforcement laboratories are regularly courtroom directed, that suffices to reduce their burden of proving a lack of litigation bias. Contrariwise, it would seem to me to make their litigation bias more recognizable and more in need of judicial oversight. Fortunately, no court has been found that adopts Judge Kozinski's dichotomy between crime laboratories and other experts.

There are certainly a myriad of ways to curb junk science in the courtroom. In my view, the most likely to be instantly effective among these would be to keep a weather eye out for the litigation bias that transmutes a scientist into a junk scientist. In my appraisal, that concern is fundamentally the unarticulated but blatantly implicit premise in Daubert, Joiner, and now in Kumho Tire.

Dr. James E. Starrs: We have a moment for questions if you like.

Participant: I'm just curious, Dr. Chaski. As I recall, with Bruno Hauptmann in the Lindbergh case, some evidence was used against him [inaudible] for writing [a] ransom note. Did your research look at that at all? What thoughts do you have on that?

Dr. Carole E. Chaski: I believe that work was done by handwriting specialists, and at that time, it was common for a handwriting specialist to also look at spelling and grammar. If you're reading from the early work in handwriting, handwriting specialists will consider that part of their purview. Dillon wrote a very interesting review of Gerald McMenamin's book Forensic Stylistics, in which McMenamin argues that language-based authentication should be considered part of handwriting identification. Dr. Dillon, who is himself trained in handwriting, asked, "What would make a handwriting specialist think they have the expertise to analyze language?" So, I think I would agree with Dr. Dillon on that.

Dr. James E. Starrs: Anyone wish to comment? Any other questions? Yes, sir.

Participant: [Inaudible.]
In regards to junk science [inaudible].

Dr. Michael J. Saks: Well, my reaction is that sounds like a reversal of the usual judicial response to these things. I mean Rule 702, for what it's worth, doesn't make distinctions between plaintiffs, defendants, criminal, civil, although you raise constitutional concerns, the right of a defendant to put on a defense. But that sounds like an entirely legal consideration, which scientists shouldn't have much to say about. There are, of course, those trial judges who are mindful of the fact that there might be an appeal if a scientist is found either to be unqualified or the opinion to be inadmissible, regardless of the fact that the appeal may be thrown out by reason of the abuse-of-discretion standard of review, at least in the Federal courts. Therefore, I would think, and I also know, that a number of trial judges react more favorably to defense experts, particularly if the defense expert is one that is hard to come by in a particular field--where the field, in the case of certain disciplines such as fingerprint examiners, is, by and large, law enforcement, except for one or two, like my colleague Andre Moenssens--so that the trial judge, for fear of that claim on appeal, might well decide that the expert is qualified or the testimony to be given would be admissible, and that therefore, there would be a bias in favor of the defense.

Dr. Andre A. Moenssens: I'd like to make a comment about something that you said, Jim, about litigation bias inherent in crime laboratory people. I think that if you're looking at the issue of bias simply by looking at the case in which an examiner testifies, you're looking at it too narrowly. When evidence is initially received, frequently the evidence is examined by an examiner and the evidence is found to eliminate suspects. Sometimes, many times, the examiners will not know who is a suspect. Certainly in the case of fingerprint identification, it is very common to take the prints of everybody that might have been in the surrounding area and who might have had legitimate access to those premises. The examiner will not know which prints are the suspect's and which belong to the people who could have been legitimately on the scene. Therefore, during the analysis stage, at least, that bias, I believe, is not present, or certainly not nearly as strongly as I feel you've suggested. Everybody that is involved in the sciences, whether we call it "true science" or "forensic science," believes that they examine evidence--well, I shouldn't say everybody, but most people that I know, anyway--examine evidence pretty much in a neutral fashion, without any preconceptions initially, and then arrive at the result and let the chips fall where they may. I don't think that the mere fact that the chips fell on one side of a controversy, which forces them to testify in court, necessarily means that they have a litigation bias.

Dr. James E. Starrs: My riposte to that would be to give you, as we used to say in the days of Brooklyn, "a for instance." In the FBI's report-

Dr. Andre A. Moenssens: Anecdotal evidence.
Dr. James E. Starrs: In the FBI's reporting in the Crime Lab Digest on statistical findings with respect to its DNA analyses, the FBI is very proud to point out that, as it turns out, both in this country and in England and other English-speaking countries, about 33 percent of the cases referred for DNA analysis indicate that the DNA does not match, and that therefore, the FBI is doing its job. Well, one of the problems I had with that initially, which is not directly in response to Andre's point, is that, "Boy, that must mean that somebody out there in the law enforcement community is not doing their job." The person picked up--possibly under a search warrant issued by a judge on probable cause that the individual was guilty of a particular crime, or at least was reasonably suspected of being so--had to give a blood sample or whatever for DNA analysis, and it does not match. So, I would not necessarily pat the law enforcement endeavor on the back. However, the FBI goes on to say in its report that those 33 percent non-matches do not mean that the defendant is not guilty. In other words, this is for the prosecutors out there, to let them know, "Please don't let those people go," because it could be that there was someone else and the defendant was only an accessory to the commission of the crime, that it was someone else's biological specimen that was left on the victim, and various other ways of indicating their law enforcement bias--and that is that the DNA exclusion does not mean innocence, which it doesn't, of course; it just means that there isn't a match. But they always go on to say--in every program I've attended with DNA statements from the FBI--they always go further than simply saying there's been an exclusion, to point out that that doesn't mean necessarily the defendant is innocent.

Dr. Carole E. Chaski: Can I make a comment about. . .

Participant: Sure. That doesn't mean necessarily the defendant is innocent.

Dr. Carole E. Chaski: I would like to make another comment about the role of law enforcement and novel scientific techniques. I mean I think there's a real role for novel scientific techniques in terms of generating investigative leads, in terms of giving police officers another way to look at a crime, and I think these are very legitimate functions. I think the problem comes when we think that those functions therefore legitimize a technique in court. There are many things that police officers use that they know will never get into court, but they need to use those things as part of their investigative tool bag. So, I think the push to get techniques into court too early can come from scientists--you know, it's kind of like the golden ring, to actually get yourself into court; I think there's that attitude--and I think it's also pushed by law enforcement officers who feel like, well, "I don't want this to be another polygraph; I did all this work and I can't use it in court." And I think that both the scientific community and the law enforcement community have to look carefully at alternate functions that are very real and very worth pursuing before you get to the bar and enter it as admissible testimony.

------------------------------

Panel IV. Scientific and Demonstrative Evidence: Is Seeing Believing?

Moderator:
Ronald Reinstein
Associate Presiding Judge
Superior Court of Arizona
Phoenix, Arizona

Panelists:
Robert J. Humphreys
The Commonwealth's Attorney
City of Virginia Beach
Virginia Beach, Virginia
Samuel A. Guiberson
President
Guiberson Law Offices, P.L.C.
Houston, Texas

Mark Garcia
Litigation Graphics Consultant
FTI/Consulting
Los Angeles, California

Mr. Mark Garcia: The motion graphics that you just viewed are computer MPEG video files. MPEG has taken the whole world of video depositions and afforded trial lawyers extended presentation capabilities for testimony at trial. A trial lawyer can now have a 6- to 8-hour deposition session converted in full length to several CD-ROM disks. If you are further along the technological curve, with the new DVD disk the same body of material can be loaded onto a single disk. FTI will typically convert VHS videotaped depositions and animation into MPEG files, and static exhibits and key discovery documents into .pcx files. All of this digitized media is then loaded onto a portable hard drive that is about the size of a toaster and controlled by a laptop computer similar to the one that I am using today. Barcode indexing and retrieval technology, which we all see at work in the local supermarket, completes this setup to provide complete random access, retrieval, and presentation of any of this media in seconds in the courtroom environment.

Before I show you some portfolio samples using FTI's proprietary TrialMax software, I want to point out that there are off-the-shelf visual presentation software packages that, while less robust, may be more cost efficient and effective in telling the litigant's story in the courtroom. Microsoft's PowerPoint is one such user-friendly application, and this software is typically bundled with MS Word. PowerPoint is limited, however, to very tight and very linear presentations. On the other hand, the FTI TrialMax application offers more control of multimedia-formatted graphic evidence. Irrespective of whether the trial lawyer is in opening, direct, cross, or closing mode, any type of digitized exhibit can be retrieved, annotated, and brought back with those annotations in seconds.

The most dominant application of TrialMax is in the area of discovery document management and presentation. With most types of complex civil litigation there are usually many documents that need to be shown at trial. Many times the courtroom becomes a war of the exhibit boards, which are very cumbersome to manipulate for most trial teams, even when they are mounted on flip boards. In terms of economics, the average cost of an exhibit board will range between $200 and $500. That adds up quickly over a dozen to 20 exhibit boards, at which point you could purchase the laptop and presentation software that I am using today. So, the economies of computerized presentation of litigation graphics are now clearly evident for even small-scale litigation.

Of greater significance are the user-friendly exhibit format tools that a program like TrialMax offers. Document presentation treatments involving text highlights, callouts, font changes, blocking, redacting, and so on can now all be manipulated digitally by the trial lawyers themselves. The user can work with these tools either with the icon-marked keys on the toolbar or via "hot" keys. While FTI will often counsel clients on creating exhibits that maximize juror perceptions of color, text, and illustrated terms and concepts, trial attorneys themselves are now able, via TrialMax, to make modifications even minutes before trial. In the example you are now viewing, I am using a tool that offers a "John Madden"-like approach to circle, underscore, or annotate a chart or graph.
Now that we are on the topic of charts and illustrations, I am showing you features like magnification and split-screen, which can help jurors better focus on key discussion points and relate illustrated concepts to text in key documents or deposition transcripts. By the way, the user can also load and display videotaped deposition testimony or animation in these screen windows. I will focus more on the use of video and animation at trial in a few minutes.

All of this presentation technology expands the range of graphic evidence preparation options for trial lawyers. However, FTI concentrates on counseling trial attorneys and expert witnesses on the stylistic design and content of digitally created exhibits to get them to perform in a manner that best teaches and impresses key case themes and underlying concepts upon jurors and judges. For example, let's look at this set of "build" graphics that was developed by FTI to help an expert witness teach a scientific principle concerning the impact of various sources of ionizing radiation on human beings. This progressively staged or "build" approach you are seeing, along with the barcode reader, enables the presenter to deliver a complex scientific principle at a pace that enhances juror comprehension. Often, FTI will develop "graphic analogies" such as this one to translate a highly technical term, e.g., the radiation measurement unit known as a REM, into a recallable visual comparison.

The most commonly used exhibit to teach jurors the facts of a case is the timeline. Some of you may be familiar with this case, which dealt with a high-level auto executive who left GM and went to VW and who was accused of trade-secret misappropriation of GM marketing data. This interactive timeline takes key points in the story and matches them with key evidentiary documents and other graphic exhibits, which the presenter can retrieve instantly, inclusive of all of their prepared highlights. The stylized use of certain color and text treatments in this timeline further highlights the defense's key themes. FTI will often be engaged to test the perception of these exhibits with mock juror panels. By contrast, here is an example of what I am sure you will agree is an ineffective exhibit, because it takes an information "overload" approach to visually communicating the operation of a burglar alarm system. This graphic contains far too much text and too many visual focal points for the average juror, with typically a high school education, to comprehend.

Another popular use of evidentiary graphics is in presenting damage estimates to a jury. This example is taken from the Francis Ford Coppola case, which involved a copyright dispute over a scripted treatment for an animated feature film adaptation of Pinocchio. Here, FTI developed visual analogies that broke down the different types of damages that would have accrued if the scheduled production had been put into actual distribution. Many times our firm is presented with voluminous accounting or statistical data related to damage estimates, often developed right before trial and in handwritten form. We will respond with a cleaner, more comprehensible version such as this example.

Now, I would like to move away from the area of two-dimensional static graphics and address the world of videotaped deposition testimony and animation.
Here, I can use the same TrialMax platform to instantly program and present segments or clips of video-deposition testimony that have been converted to digitized MPEG files. In this example, I can even trim frames from either the beginning or the end. FTI has earned a reputation for producing high-impact, cost-efficient animation to support expert witness testimony in all types of civil litigation. Advances in computer processor technology and the proliferation of competitive animation authoring programs have driven down the per-minute production cost of 3-D animation to make these motion graphics affordable for use in even small-scale cases. I will show some of these samples later in this panel discussion. Thank you.

Judge Ronald Reinstein: While he's sitting up here, does anybody have any questions of Mark?

Participant: Mark, as you well know, the technology that you're describing to the audience is becoming more inexpensive--and every man and woman with a laptop [inaudible]. As multimedia digital technology becomes more user friendly and, in a sense, less expert in terms of its organization and preparation, what will specialty houses like yours do for a living? All the technology is going to exist at a level where individual citizens like me and other lawyers can generate this work in their offices. What role will you play?

Mr. Mark Garcia: Well, actually, let me just respond to that. First of all, this technology that you saw today is actually provided to our clients for free. We actually give them the software, in fact even the hardware, and frequently you'll have to make arrangements for, like, video monitors, although a lot of the Federal courts now are buying pools of monitors and providing them free to counsel. Our expertise is really in the design and the communication and the development of the graphic. We don't derive any income from sales of software, because we don't sell it, and we don't derive any income from the rental of the hardware. In fact, we have a third-party vendor who gets involved when clients have to rent that. As I said before, we started off as a forensic engineering firm. We continue to get into this from the standpoint of building animation. After 1,000 trial settings, we have a good understanding as to what jurors take away when a story is told, and that's what we do.

Participant: [Inaudible] even though the technology may not be [inaudible].

Mr. Mark Garcia: Absolutely. And I tried to show an example of that just now, when we looked at that burglar alarm system, that fire prevention system. I was in litigation support before I went into graphic design, and having been involved in a lot of discovery exercises and so on, I know that the whole art of litigation starts from being all-inclusive and then weeding it down to a fine story to tell. So, there are lots and lots of documents. I find that the same kind of mindset in the legal field usually flows through in the kinds of graphics they will create. They'll try to put into a graphic, into a screen on an art board, so many items, so much complexity, so much text, that what they don't realize is that, while it makes sense to the trial advocate, it's overwhelming to the juror. And that's where we try to simplify it and pull out those [inaudible].

Judge Ronald Reinstein: Robert?

Mr. Robert J. Humphreys: Okay. I guess we go from the sublime to the ridiculous now. What Sam and I have decided to do is sort of with malice aforethought.
Up to this point in the conference, the discussion about expert witnesses and scientific evidence has focused on the expert part of that term. And I think it's fair to say that what Sam and I will do, and, to some extent, what Mark just did, is to focus on the other part of the phrase--the witness part--on the notion that what we're all about in a courtroom is to persuade, and to communicate, which is the prerequisite to persuasion. That is, to reach the trier of fact, whether it's the judge or the jury, and persuade them, through the witness, of what that evidence is, hopefully in a way the jury can understand, can digest, and can apply to the law and to the other facts that might exist in the case.

And I guess the best place to start, whenever you start talking about communicating to persuade, is with a couple of my personal heroes. The one up there right now, of course, you recognize: Sir Winston Churchill. And I'd like you to listen, assuming the sound works here properly, to a short clip from one of his most famous speeches, the famous Battle of Britain speech.

[Audiotape presentation.]

Sir Winston Churchill: "The Battle of Britain is about to begin. Let us, therefore, brace ourselves to our duty. So bear ourselves, that if the British Empire and its Commonwealth last for 1,000 years, men will still say, 'This was their finest hour.' "

[End audiotape presentation.]

Now, take a moment and listen to another one of the great orators of all time, Casey Stengel. It's his testimony before Congress in the 1950s on the bill that gave baseball an antitrust exemption. He and Mickey Mantle were dispatched to Capitol Hill, and it must have been a good plan, because, of course, they got the antitrust exemption. But listen to the brief clip from that testimony.

[Audiotape presentation.]

Senator: "I would ask you, sir, why it is that baseball wants this bill passed?"

Casey Stengel: "I would say I wouldn't know, but I would say the reason why they want it passed is to keep baseball going as the highest sport that has gone into baseball and from the baseball angle. I'm not going to speak of any other sports. I'm not in here to argue about other sports. I'm in the baseball business. It's been run cleaner than any baseball business that has ever been put out in the 100 years to the present time."

Senator: "Well, Mr. Mantle, do you have any observations with reference to the applicability of the antitrust laws to baseball?"

Mickey Mantle: "Ah, my views are just about the same as Casey's."

[End audiotape presentation.]

I will give it to you that Mickey Mantle was the only one on Capitol Hill that day that had a clue as to what Casey was talking about. And the reason I put these two clips up here for you to listen to is that every trial lawyer--whether you're a prosecutor or a defense attorney--and certainly any scientist who has the word "forensic" on their CV somewhere, all think that they sound like Winston Churchill when they go into court and testify. And then, of course, you get the transcript and you read it, and you realize that you sounded a whole lot more like Casey Stengel.

The point of this exercise is that, as the golf case there represents, diagrams, photographs, and physical evidence generally can be very, very powerful. In some ways they can even overshadow the live witness. And of course, in this day and age, we're dealing with Generation X; we're dealing with folks who cut their teeth on multimedia--television, the movies, you know, Star Wars, special effects, Titanic, you name it.
These are the folks that are out there, that we're grabbing off the streets and plopping down into our jury boxes to decide cases. And we need to understand that--as practitioners, such as Sam and myself, as trial court judges, as appellate court judges, and above all, as experts who also have to communicate and get their usually esoteric points across to that jury.

Here is just an illustration of how you might do it. And by the way, I've added some sound effects here that I would never actually use in court, but you know, I'm here to entertain you along with everything else. But those of you who have been practitioners, or experts in this particular field, probably know about the explanation of reaction time. In a case like a drunk driving case, a motor vehicle manslaughter case, or something like that, you have the scientist up there spouting formulas about hydroplaning, or how far you'd travel at a given speed in a given length of time. And, you know, the jury's eyes are glazing over at some point. But you can show it very simply, just that way. The old adage is that a picture is worth 1,000 words.

Even talking about something like DNA--just those three magic letters make most people's eyes glaze over right away. There is the notion that the DNA molecule is approximately 2 feet long when you stretch it all out, and that every living thing, on this planet at least, has DNA, and that there are some parts of the DNA molecule that all living things have in common, other parts that just mammals have in common, and then certain parts that scientists say are unique to each one of us, except for maybe our identical twin. So, how do you get all that across to a jury? Well, of course the experts can get up there and talk about it, and if they're a good expert, they'll talk about it. By the way, some experts, I think, get paid by the syllable. And maybe they do, I don't know, but the ones that, in my experience, work well with juries are the ones that can break it down and convince the jury. In fact, you know, the best expert, the ideal expert, at least in my experience, that I would want to use--and I'm dating myself--is Mr. Wizard. Or, for you younger folks, Bill Nye the Science Guy. Because they can explain pretty complex stuff in a very easily understood way that your average 9-, 10-, 11-, and 12-year-old can understand. The notion that you can sort of match up base pairs from a known sample and from an unknown sample and, using the zipper analogy, kind of zip them up together--and if they match, if the zipper works, you've got the same person there--can much more easily, more readily be understood with diagrams to go with it, as opposed to just the scientist or a lab technician of some type testifying about it.

Expert witnesses are witnesses, after all, and witnesses, like lay witnesses, are there to communicate, to impart information, and you can do it diagrammatically. I mean a photograph--a crime scene photograph such as the two represented here--on a diagram that simply shows the point of view helps the trier of fact orient itself to where this crime occurred, how it occurred, generally where the parties were standing when it occurred, or just before it occurred. That sort of thing can be very, very helpful. And you could, of course, take that to the next level and actually diagram the whole crime using PowerPoint.

[Presentation shown.]

I know, that's a little over the top, but I've always wanted to do that in court.
But the point is, that's how you communicate visually; not visually alone but orally combined with the visual representation. Taking something like how gunpowder residue gets on someone's hand, you can simply take an illustration right out of a ballistics text, as with the photograph at the top there from Hill's Homicide Events Reconstruction. You simply slap it on a copier, copy it, and blow it up a little bit so the jury can understand it. Below are simply photographs taken under strobe light conditions showing basically the same thing but, in connection with a particular case, the blowback deposits--gunpowder residue--on the hand of the individual who's holding the weapon, and where it comes from. It doesn't just come from the barrel; it comes from the receiver area as well, that sort of thing. And unless you happen to have some firearms expert sitting on your jury, they're probably not going to understand this stuff very well without some graphic representation. The alternative, as happened in one location in North Carolina, is that the jury's going to do their own ballistics test in the jury room, if you let the ammunition go back with them--which is what happened there, and they shot it out the window.

Blood pattern interpretation, using luminol, if necessary, to bring the blood patterns out. This is from a case: Can you find the bloody hammer impression? Well, your expert is going to testify about it, but imagine if you were the jury, how much more helpful it would be if, along with that testimony, there were some photographs showing exactly how the head of that hammer fits a particular impression.

So, why bother with these visuals? You know, why do you do all this? You've all heard the old proverb, "To hear is to forget, to see is to remember, to do is to understand." Well, except for the jury doing their own ballistics test back in the jury room, it's pretty difficult for them to actually do what you or your experts are going to be talking about in the course of the trial. But the next best thing, of course, is to let them see how something occurred, or see how a concept can be brought to fruition in the case of a scientific expert. Jurors, as we've already talked about, are more accustomed to visuals; they've grown up with them, they see them all the time. They're bombarded by Madison Avenue and Steven Spielberg. Juror retention is increased. Your average human being, according to every study I've ever seen, will retain something on the order of 80 percent of what they see, versus only about 15 percent of what they hear. There is less chance of misperception--of a juror who maybe was daydreaming, or is hard of hearing, or didn't hear the modifiers or the adjectives that the witness used when they described whatever it was they were describing. And from our perspective as practitioners, visuals can also enhance or disguise your witness. Maybe your witness isn't the best witness in the world; maybe they're soft-spoken, maybe they don't communicate very well. But the visuals can help enhance their credibility or, to put it another way, overcome or disguise to some degree all of that. Now, if I'm offending any public defenders out there, I'm sorry, but I mean it works both ways. And because jurors assess and process all of this information when they're assessing credibility, it's certainly all fair game to put in front of them. I've kind of rushed through that. I'm going to stop at this point. Hopefully we can pick up some of the rest of this with some of the back-and-forth questions and answers.
Thank you very much.

Judge Ronald Reinstein: You know, Robert, as Sam's getting up, have you ever heard the argument that what you do is too persuasive?

Mr. Robert J. Humphreys: [Inaudible.] I was telling him earlier about a conversation I had last week. I got a call out of the blue from the Attorney General of South Dakota, who was in the middle of a trial. He's the first attorney general I know of that's actually ever set foot in a courtroom, apparently. But he was in the middle of a trial, he was using PowerPoint in the trial, and during his closing argument, the defense objected. And the stated reason for the objection was, "Your Honor, that is too persuasive." And the judge stopped the trial and said, "I'm going to take a recess of several hours and I want some law on this." So, they called me to ask if I could provide them with some guidance to some law on whether it's okay to be too persuasive. Off the top of my head, I couldn't think of anything, but I did give them some case citations. About every State in the union except New York approves the use of visuals like this throughout the trial, certainly in opening and closing arguments. And the same law basically applies to exhibits. I can boil it all down to, "If you can say it, you can show it." "What the ear may hear, the eye may see," as one of the judges put it.

Mr. Samuel A. Guiberson: What I really like about what's happened so far is that you're seeing a panorama of the ways in which technology is applied to court, to the experience of being an advocate. Let's look at what we've done. We've had a lawyer talking; he made you laugh, showed you pictures; you have seen videos, you've seen stills, you've heard sound. What have you done? You've been part of a total communication experience. You've witnessed something that is not inherently alien to human discourse but something, in fact, which is what? Totally human. That's what the new digital multimedia offers us: it reinvigorates the courtroom with a way of being regular people. Multimedia bandwidth means the ability to communicate with each other in court, through advocacy, in all the ways we customarily communicate with one another. It gives something back to us that technology has taken away from our courtroom experiences. Because courtroom technology is trial technology; it's not, as some folks think, some part of the interior decor of the courthouse. It's something living, something vital, something that is as much a part of what the lawyer does in a courtroom as the voice, enunciation, and gestures that lawyer makes. Technology has become integrated with the way we express ourselves in the courtroom--not because we want to show off, not because we believe that a PowerPoint presentation like this one has any value if your point has no power. It doesn't. It's not just about special effects. It's about being especially effective. And that's what lawyers want to do.

What's changed now is that all the capabilities that were once the exclusive reserve of high-dollar law firms and million-dollar cases have now trickled down to every lawyer. Where once you had to have an expertise, an advanced degree, to develop, to manipulate, to employ these technologies in court, now any one of us can do it--like the amateur movie director who does it all: holds the camera, edits the film, makes the decisions, directs the actors.
Every part of the trial advocacy performance is now in the control of the lawyers--and that has its opportunities and its risks, because some people who are unfamiliar with digital evidence believe that the power of it, the authority of it, the reliability of it, is derived just from the fact that it's digital. [In progress] --but that's not true. It's all between the digits. We can't allow the image that a technology pushes toward the jury to be the arbiter of whether that is good evidence or not.

One of the problems we encounter is that the threshold of scrutiny for a digital exhibit, like the courtroom animations that we'll be looking at shortly, becomes greater as the exhibit becomes more interpretive; that is, as it is employed by an expert to state expert opinions about an unwitnessed event, accident, or crime. The more interpretive exhibit is simpler to deal with, because the courts are familiar with predicates for expert testimony. And then, of course, the expressive exhibit is simply the one which extends the expression of the witness: "Now, Miss Jones, the video you're about to see--you've seen it--does it accurately depict the night the accident occurred?" It is an extension of the expression, the testimony, of a particular witness. We were talking a moment ago about times when exhibits are so loud that we cannot hear the witness. What we've got to recognize is that, as lawyers and their media are integrated, so witnesses and their media are integrated. The demonstrative exhibit is not something apart from the witness; the exhibit is the witness. The witness is that exhibit. The exhibit on the screen is an extension of the words and recollections of that witness.

Now, let me show you that this is not always--you know, we think of this as being computer animation. It also applies to photographs. When I say "down in the digits," I mean knowing how to interpret what the infirmities of the photographic exhibit might be. Now, we all know that photographs are routinely admitted. What you're looking at, of course, is a digital rendering of a photograph. In some instances today, and certainly more so tomorrow, photographic images are going to be, in their origin, digital. And that subjects us to a lot of potential manipulation. This one, of course, has been treated digitally to make it appear a little darker than it actually might have been. There's your actual photograph. Look at the sign--we're going to sort of invent a scenario here. Under the Mojave Motel there, there's a sign. Let's say this is a case about somebody driving their 14-foot-high Winnebago through a 7-foot sign. Pretty obvious. Clearance, seven. "Well, did you see the 7-foot clearance sign?" "No, sir, I couldn't see a thing." But you could see it here, you see. Those subtle variations that the digital process permits can really vary the reality that is being described without truly varying the image of the exhibit. I think we'd all agree this doesn't have to be something diabolical. We'd all agree, if I were the proprietor of the Mojave Motel: "Is this a photograph of your motel?" "Yes, sir." You see, because we are not trained, we haven't evolved in our media-wise sensibilities to understand, "Yes, sir, but on that kind of day, in those kind of conditions, you could see that sign." The subtleties, the very small, down-in-the-details digital manipulations, are not things that we usually associate with photographic exhibits.
So, we have a problem in how we approach these different forms of digital evidence. Are they expressive, in that they allow an individual to vouch for the way the animation portrays the actual event? Or are they interpretive, in the sense that we have only select data points of factual knowledge about how an event took place--and either a computer or an expert or a combination of the two has, shall we say, interstitially extrapolated a reality, going between the existing data points to create probabilities that would create a reality out of only a partial reality?

So, our risk is that courts and attorneys don't know how to deconstruct the digital into its programming parts. To see that it is really less than the sum of its parts, you have to understand how it's composed, how artful the programming is, and what components of reality underlie the illusion, the apparent reality, of the computer exhibit. So, the virtually real--the animated, digital reality that seems perfect, dinosaurs with scales on their feet, everything we witness in the contemporary cinema, of course--shows us the virtually real. Is it more admissible because it's real? Does how authentic it is really underlie its admissibility? Or is a digital exhibit that is very realistic simply like a more articulate lawyer, a lawyer who simply paints a picture a little more artfully than another lawyer? The reality, the physical appearance, of the digital exhibit does not dictate its quality. Its entertainment value is not its probative value. And, in this sense, it's important to recognize that we are often lulled, by the very fancy Star Wars animated exhibits, into respecting them more as evidence than we would respect the stick figures that another lawyer might use on a low budget as a witness expresses the relative physical positions of people during a shoot-out. They're both equally expressive and equally valid as evidence. One simply is more articulate, because it uses a higher level of digital virtual reality. Be wary of aesthetic value masquerading as evidentiary value.

Like the devil, the digital is in the details. How do you confront the evidence if it's so obscure in its encoding that you have no way to reach down to that level, that particular granular level, at which it becomes either the truth or a lie? And that, of course, is a part of the way in which we must change our court procedures to recognize that digital exhibits require a level of scrutiny much more subtle, and in some ways more complex, than other forms of exhibits. We ought to talk about having digital discovery conferences early on in a case, where the court can act as an arbiter of what the parties intend to do as far as their expression in terms of the discovery and in terms of their trial exhibits. If the court can stimulate people to think about whether or not they can use these methods, and whether they have time to learn to use them effectively, then, once they decide that they can use them effectively, each party can judge the reliability underlying the digital evidence that comes forth.

Because there is great prejudice in this, I cannot confront the defects of a sophisticated computer animation by using words alone. I cannot stand before the jury and go into denial about the subliminal, visceral impact of that image, as it is really just wired into their psyches. When they go back to deliberate, that picture of the accident is how they will remember the reality. I can't say, "Folks, it's not me," if I'm caught on tape.
I can't defeat that mental imagery that is already part of their concept of the case unless I have, what? A competing, equally effective counter-imagery that then subjects them to a choice process, so that it's not the other advocate's image of the event by default.

Let's look at a few simulations. I'm just going to let this run. We'll just sort of talk about a few things that we observe while this is going on. Okay. Here comes a truck. Think about this. How did the animator know at what point that truck went sideways, or that car turned? Was it just the recollection of the parties? Do we really believe that people traumatized by an accident of that sort really remember the exact rate at which their car turned on the highway? Of course not. That's an artificial reality masquerading as a reenactment of the crime. There you saw a photograph. Well, the photograph and the animation look real good, they look close, so it must be good. Boom! This can be characterized by that witness as, "This is what I saw. I saw that truck turn that way--as you notice, folks, he didn't have his left blinker on. That's what I saw." This can be sustained as expressive animation describing what the witness remembers.

Of course, this is another one. This is an overview. Obviously, nobody saw it from this angle. How does anybody really know what it would look like from above? Here a citizen is about to meet his demise. The faint of heart can look away. The testimony: "I was coming around that stopped truck, and there he was; couldn't do anything about it." You see, that, too, can be sustained. You can argue with it. But I wouldn't try arguing with their animated exhibit unless I had my own that showed him careening down the highway. If this guy survived, he can say, "Yes, sir, I saw the truck going like this as he approached me," and we can have an image to counter that image that would have an impact with the jury.

Okay. This one is supposed to be about--as it will tell you in a moment--how virtuous it is to put a rack on your little scooter so nobody gets hurt. You see, you can understand how an expert would testify about the virtues of having racks over the drivers on these vehicles--ouch! Okay. Now, don't you know that--yes, exactly. Ooooh, what did you just do? You empathized with that victim. That picture gave you a way of relating to how much that hurt, because you had an image of it. And that's the whole point. This really is "beware of video exhibits bearing false promises." Folks, that wasn't about how much better it is to have a rack on the top. That was about, "Man, did that hurt! Was that an ugly accident!" That was the meta-content of that video.

Now, here's another one. Here's a re-creation of the supposed reality of the contact between the victim and the Mercedes-Benz 230-SL, I believe. Now, what's interesting here is, this is really not about knocking her over. It's apparently--and I didn't have anything to do with this case--about how her wrist got broken. We see the close-up here. Close up, we show you the impact. But how did they know? You think she remembers, after that trauma, that her wrist was pointed this way over the Mercedes-Benz emblem on the back of the car? I don't think so! What that is, is taking a result and working backward to interpolate a reenactment in digital video. Of course, it couldn't be done before. Now it can be.

This one I'd like to call "What Truck?" Who knows how dark it was that night?
If I'm the guy representing the plaintiff--we assume the headless plaintiff in this case--I'm going to want it to be dark. But what about dark is so objective that a computer animation can capture just the essence of dark that existed on that day? Nothing. This is just to show you how, in that last second, if you notice, there's a giant truck ahead of you. Let's see what else we have. Truck--oh, good, the truck driver's view. "I was driving along, nothing in particular was happening, when" . . . boom. Now, this is not a film. I don't know if you can see the individual standing in the middle of the road there, and of course, that's the whole point. "I couldn't see him. I thought it was a coat rack!" This is not like an enrollment film for plaintiffs' lawyers, where you just want to represent guys who stand in the middle of the highway and wait for trucks to roll over them. But, as you can see, one second to impact. What are we doing? We're trying to convey what? Are we really talking about one second, or about how dark it was? That's what they are talking about, obviously. But, whew, bam! That hurts! That, too, is about the human experience conveyed through a human-like representation of the event that tells us what we need to know. That's the problem. It's not that the technology is devoid of a human nature; it is that it provokes human reactions because it invokes all our senses.

So, you can see the problem I'm confronted with as a lawyer. There is very little way for me to defeat, as we've talked about, the introduction of that evidence. I could point out its shaping and its biases, but that image is there, and so we're stuck in an extraordinary world of advocacy where we now must compete in digital reenactments. I'm not sure that's any less even a bar than we've worked with all these years. There are, after all, Winston Churchills who can artfully express themselves and bring the imagination alive and create images in our minds as effective as any of this computer animation, and then there are the rest of us. So I'm not sure the field is any less level than it was before, but it has introduced a new dimension to the, shall we say, competitive nature of advocacy in court. But the reality we have to accept, and the courts need to recognize--all lawyers certainly do--is that truth comes to the court in words, it comes in sounds, and it comes in images, and the medium itself is not the message; it is the message's messenger. We don't have to fear the advent of a media-rich courtroom environment. We have to learn and recognize that it is an expression of who we are. It is advocacy first, of course, and it is certainly technology second, but it is a gift, not a threat, to our jurors. Thank you very much.

Participant: You talked about computer simulation. There is no witness, so to speak. It's really the computer program that becomes the witness. What do you do about that as a lawyer?

Mr. Samuel A. Guiberson: A good example, let's say, would be a jet plane crash. There you have a rich data environment. You have thousands of data points. We know the physical reality of airplanes. We have sequentially and electronically preserved every change in attitude, in altitude, and in the behavior of all its systems. In that instance, one has just a sea of data in which to reconstruct the reality that occurred when that plane crashed. But in a less technically rich environment, that becomes very [inaudible].
So, you have to try to expose where the assumptions exist in the coding process--where the kernel, small or large, of reality is, upon which the extrapolations are based. Now, a problem we have is with expert systems--that is, the programming itself--which, relying upon all the previous crashes of this type in which this expert testified for one side or the other, have evolved this model, and then the expert tries to sell the idea that this model is as reliable as data points from a black box. And of course, it's not. What you have to do is make sure that both sides--when these types of complex animations come into evidence--have the code, have the raw data, have the program, so we understand how [inaudible] artful engineering of the program is around the deficits in the physical [inaudible]. Those are the things, and that takes time.

Participant: Now, you worked on the Oklahoma City bombing case, and I know that the simulation was not admitted by Judge Matsch. Is that right?

Mr. Samuel A. Guiberson: That's correct. But there was powerful video evidence and audio evidence in that case that I don't think most folks recognized, because there was so little public experience of the events in the courtroom. Some of the most bruising evidence one could ever hear, of course, would be the audiotapes recorded virtually by accident a few hundred yards away. In court we listened to the explosion as it occurred, and it is the most visceral and powerful evidence you can imagine--not because it was an animation of what happened; that would have been much less powerful--but because it stimulated our imaginations to conceive of what a horrific moment that was. Our imaginations remain the greatest resource for conveying our advocacy into the minds of the jurors. That's the key. It's not replacing the imaginations of the advocates and the jury with computer animation. It's stimulating people to put themselves inside what they see and hear.

Judge Ronald Reinstein: [Inaudible.] Any potential . . . from the prosecution viewpoint?

Mr. Robert J. Humphreys: Yes, it's amazing. I couldn't really agree with Sam more than with his last statement there. I think it's dangerous to jump into this stuff just for the sake of doing it. You've got to have a purpose in mind, and I would generally lump what I refer to as visuals into two categories. The simulations or animations--simulation is the more correct word, I think--some examples of which Sam just showed you, have limited utility and are very, very dangerous to work with, because it depends on your expert testifying exactly the same way as the animation . . . . You know, I don't know about you, but I have yet to hear a witness testify the same way they testified in my office 5 minutes before they took the stand. So, it's a dangerous thing. It's a dangerous way of doing that. You know, knock yourself out if you think it will work for you, but there are little issues, like what's the degree of darkness, that you've got to be careful about. Where I think this visual stuff can be far more effective is in what I would refer to as simple illustrations of witnesses' testimony, such as the crime scene diagram showing where the parties were positioned and what route they took, based on their testimony. They can diagram it themselves right from the witness stand. And there are also what I refer to as argumentative visuals.
Those are the ones you use in your closing argument, where it's perfectly okay to be argumentative, where maybe you can use some of this stuff, where it doesn't matter what the degree of darkness is, because that's all argument--as long as somebody said it was dark, that kind of thing. So, I think you have to be careful. It's very powerful. Sam and I agree completely on that. But know what you're doing before you jump into this stuff.

Participant: I know there was a judge here earlier this morning from Maryland, I don't know if he's still here, but last year, Maryland's Court of Appeals adopted proposed model rules governing the admissibility of computer-generated evidence, specifically as to simulations, animations, and digital camera productions, but nothing else other than that as far as regular cameras.

Judge Ronald Reinstein: Dr. Lederberg?

Dr. Joshua Lederberg: It may, in fact, be a relevant point. I remember Mr. Churchill's speech very well, but I also believe that there's been later evidence that it was junk.

Judge Ronald Reinstein: It's an application of the best evidence rule. I didn't hear it firsthand. Yes?

Participant: My experience with the simulations is that I've never gotten one into a trial and I have never had one proposed by the other side, yet neither [inaudible]. Just as an example, [inaudible]. You have five variables, you have 150 inferred assumptions [inaudible]. It may be helpful [inaudible]. It's helpful in that regard, very helpful.

Mr. Samuel A. Guiberson: I think that a simulation is going to be different from an animation. In an animation, we've got a witness describing an event, and you're using something demonstratively to impact the jury, and that--the animation itself, where you have the live witness there [inaudible] this is a fair and accurate depiction--is going to be different from something like a crime scene where you're feeding in data from the black box of a plane crash.

Mr. Robert J. Humphreys: Yes. No crime scene is as data-rich as an airplane crash, let's face it, unless it's in the cockpit of an airplane. There's a very great risk. What worries me is that courts will be mesmerized by those production values and accept the thing for what it is on its face and not really look at the details. That's what always worries me.

Mr. Samuel A. Guiberson: One of the things that Bob and I talked about before we came was what happens when you're in court and you have somebody who, in opening statements, plugs their computer in, and all of a sudden, toward closing argument, they're showing one of these simulations, and the defense attorney is sitting there just mesmerized, as well as the jury, and then all of a sudden probably thinks to himself, wait, you know, I should object to this.

Mr. Robert J. Humphreys: One of the reasons I lecture on this a lot is because I think I was probably one of the first to start using this stuff in court, and the first couple of times I did--very serious cases, in fact, one was a capital case--the defense attorneys sat on their hands, just watching along with everybody else, and I figured they would object to it. I was kind of ready for it. And the first two or three trials I went through, nobody said a word. And finally, I think, you know, people caught on, and about the third or fourth trial, they threw an objection, and the effect was basically, "Well, judge, we haven't seen any of this stuff, and we want to see all these records that he's putting up there."
I just, you know, for the record vouched that I was an officer of the court and that everything I was going to show the jury was either the product of a stipulation or the product of a pretrial conference where the court had made rulings or, in one case, a confession that was coming--a videotaped confession coming into evidence [inaudible]. Participant: Never mind, Your Honor, his word is good enough for me. [Inaudible conversation.] Mr. Robert J. Humphreys: The point is, I said I'll show it to you, you know, if you want to verify all this stuff, but basically, I'll be damned if I'm going to let the defense, you know, prescreen all my opening statements. I wouldn't give a dress rehearsal if I were just going to say it, so why would I give one if I'm going to show it? And the judge bought the argument. I said, if I'm wrong, if I don't call this shot right, I mean, we're going to try this case again, right? So, you know . . . set the standards. Participant: I think that's exactly the wrong [inaudible]. This is not about which side. I'm just saying, you've got to recognize the power of this form of presentation. It has to be respected, and you've got to give both sides a chance to confront it. There's a real confrontation issue here. I can't confront it by watching it run by my face once, any more than I can confront 150 hours of audiotape by listening to it once. You have to have the time to take in the detail, the digital detail, or you really have no effective opportunity to challenge the evidence. However, I do agree that there is a learning curve, and that's what we're here about. People are going to miss the cue to know how and what to object to in this evidence, on both sides, and this is what the process of media consciousness raising is all about, so we all know that this evidence is subject to risk and, as I said, also [inaudible]. Judge Ronald Reinstein: Several years ago, I had a prison murder trial. One prisoner was accused of killing another. A third prisoner, the State witness, who said that he saw the whole thing, testified on direct examination as to what he saw. The defense presented photographs of the scene, and because of the light that was used and the angles that were used, it made it seem that the prisoner witness could not have seen what he said he did, which was crucial to the State's case. The jury hung, 10 to 2 for acquittal. Well, the State decided to try the case again. The second time, they asked for a jury view of the scene, and they did view the scene at the prison. We took the jurors out there. And sure enough, there was no doubt, I think, in anybody's mind, that the prisoner witness could see the murder scene, and the jury came back in about an hour and a half with a guilty verdict. That was the only difference in the two trials. So, images change. Mark wanted to show you something on a DNA exhibit that was used [inaudible]. Mark Garcia: Just a couple of comments on things that were said before. First of all, what's happened as the technology for producing digital graphics reached the common desktop is that you have a lot of people who have gone into this field coming out of product design.
These are people who are not forensic animators; they are not people who are used to working in rigorous expert witness-type settings, but are basically taking a leap of faith and becoming themselves interpreters of events, and that probably distinguishes where we are, where other groups, like DecisionQuest, are--and I was also one of the original members of the Litigation Science [inaudible]. Where the better firms start is--usually there is definitely a division, a separation, between the artist, the expert witness, and often an intermediary [inaudible], who is usually some kind of forensic [inaudible]. Ultimately, you've got to be careful about it. We do not take on [inaudible] specifically because a lot of [inaudible] jobs are built into that area. The kinds of things that we do are things that definitely have some moral objective [inaudible], for instance what I'm going to show you here, and basically, this is the birth of a trial balloon. I'm going to let this . . . but what the thing does . . . . What this will do is show a genetic mutation as it occurs.

------------------------------

Panel V. Jury's Comprehension of Scientific Evidence: A Jury of Peers?

Introduction:
David G. Boyd
Director, Office of Science and Technology
National Institute of Justice
Washington, D.C.

Moderator:
Shari Seidman Diamond
Professor of Law and Psychology, Northwestern University Law School
Senior Research Fellow, American Bar Foundation
Chicago, Illinois

Panelists:
Neil Vidmar
Russell M. Robinson II Professor of Law
Duke University School of Law
Durham, North Carolina

Lawrence M. Solan
Associate Professor of Law
Brooklyn Law School
Brooklyn, New York

Arthur H. Patterson
Senior Vice President, DecisionQuest
State College, Pennsylvania

Mr. David G. Boyd: Today we're going to look at the interface between science and the way it's presented in court and how juries react to it. And so, I'm going to turn it over to Professor Shari Diamond to do that, but first a couple of real quick administrative notes. You have an evaluation form in your packets. You'll note that we cagily left off any address, so you can't mail it to us; we need you to complete it today, if you would. One thing you might make a point of is to tell us whether you think we ought to do this again next year. Professor Diamond? Dr. Shari Seidman Diamond: It's a genuine pleasure to see scientists and legal professionals engaged outside the litigation context in discussions about science and the legal system. As an attorney and a scientist, I've often heard the two groups characterize, or perhaps caricature, one another--a divide I hope this conference is helping to reduce. When legal professionals talk about scientists, the word "naive" often appears in the description. When scientists talk about legal professionals, they often express frustration with the perceived failure to search for the truth. How these characterizations have arisen and whether they are fair descriptions are, I think, issues worthy of further discussion, but they also relate to this morning's topic, the jury. For the jury is charged, actually, from all sides with both of these weaknesses. So, one way to characterize the topic of this panel is: What can we say about the accuracy of charges that juries are naive and insufficiently sensitive to the search for truth when it comes to evaluating scientific evidence?
I'll give you a sense of the order of things, but first I want to introduce our panelists, and I'm going to tell you a few things about them that you won't exactly find in their little biographies. Neil Vidmar comes from Duke Law School. Neil is well known for his work on juries in medical malpractice cases. He did a very close analysis, in a book, of some of the most difficult civil cases that brought expert testimony into the courtroom, and what is special about that particular book is that it has received high praise from both the legal and the scientific communities, which, as you might imagine, is no small feat. Neil is not only a well-respected jury researcher. He also wrote--took the lead in writing--an amicus brief from a number of us jury researchers in the Kumho case, and he'll be talking to you about that later. Larry Solan, from Brooklyn Law School, returned to the academic fold, true to his training as a psycholinguist, and those who read his excellent 1993 book, The Language of Judges, would no doubt be surprised to know that, during that time, he was a partner in a law firm. Art Patterson left a tenured position at Penn State to become one of the country's leading jury consultants. I always refer jobs in jury consulting to Art, because he's one of the few consultants, I believe, who is honest about what he can and can't do. As testimony to that, he was the jury consultant who allowed Stephen Adler to follow him around doing his work, taking a little bit of a chance that this Wall Street Journal reporter might say not-so-pleasant things about him. For his troubles, Art got a wonderful chapter praising him--or praising his skills, I should say--in Adler's book on the jury. I'm Shari Diamond, and you can read a brief litany of the gory details of my checkered past in your pamphlet. Neil will begin with an overview of evidence concerning jury reactions to scientific experts. Larry will then provide a psycholinguist's account to explain why juries have difficulty with particular tasks. Art will focus on the characteristics of the trial and the evidence presentation, and then I'll discuss how deliberations affect how jurors handle quantitative evidence, and also some approaches to improving jury performance. Neil? Dr. Neil Vidmar: I was struck yesterday--I came in late, and I heard people making reference to the use of anecdotes in trying to understand scientific evidence. I think it was appropriate, because the jury is often--in fact, usually--discussed by lawyers and judges by anecdote and innuendo rather than by a more systematic approach to thinking about it. Everybody has their war story or example of the jury that went astray or the jury that was brilliant. Interestingly enough, we were also talking yesterday about Peter Huber and his work on junk science. He coined the term "junk science," but seemingly without understanding the irony of it. The many comments that he made about the jury system, its gullibility and its frailties, were in themselves junk science. If you go back and look at his book, he makes all kinds of assertions about the jury that have no documentation or basis other than, "Well, here's one case, and there are two cases," or, "Common knowledge tells us that." The same thing was true in the Kumho case (Kumho Tire Co. v. Carmichael, 119 S. Ct. 1167).
The petitioner and amici for both the petitioner and the respondent made the following assertions, among others: Jurors are incapable of critically evaluating the bases for an expert's testimony and too often give unquestioning deference to expert opinion. It is common knowledge, moreover, that jurors perform less well when they sit in judgment on technology. Jurors often abrogate their fact-finding obligations and simply adopt the expert's opinion. Studies have confirmed that jurors routinely believe the testimony of expert witnesses--citing a National Law Journal survey for that last quote. I want to provide this morning just a brief overview of research that has been going on for decades and has certainly been, in the last quarter century, much more intense in examining what juries actually do. A lot of this comes from the civil jury, because it's easier to get some studies in those settings, but in fact, there's a fair amount that applies to the criminal jury, and I think the basic principles apply in each case, whether it's criminal or civil, in terms of the ability of the jury to integrate the information. But to introduce the subject, I want you to think about something first. There's an interesting study by Landsman and Rakos in which they had 88 Ohio judges and 104 Ohio jurors read a synopsis of a product liability case and give a verdict on liability. Some of the jurors and the judges were exposed to legally objectionable facts in the case that they read. Then half of them--the judges and the jurors--were told that the material was inadmissible and should be set aside, while the other half received no instructions. There was also a control condition that had no objectionable facts. The finding from the Landsman and Rakos study was that the jurors who were admonished to disregard the evidence were no different from those who were not admonished. Not surprising, I suppose. And the hypothesis, of course, would be that judges, due to their training and experience, should be less susceptible, but the finding of the study was that the judges were no different from the jurors in their ability to set the evidence aside. So, it makes you stop and think about what the differences are between judges and jurors. I can cite another study for you by Gary Wells, who compared 740 students and 111 experienced trial judges in their ability to properly weigh probabilistic evidence in a series of vignettes. No differences were found between the judges and the students in their responses to the statistical evidence. Now, I don't want to downplay the difficulty and complexity of some of the scientific evidence that is produced at trial, because we all know it's complex, but I offer those two examples because we have to stop and think about what the alternative to the jury is--that is, a single judge acting alone versus the combined perspective of 12 citizens who are called to make these same decisions. I'm often struck by something I read about 10 or 12 years ago, in which the statement was made that lawyers are highly literate but barely numerate. Sometimes I think that might be worth keeping in mind when we come to thinking about these things. We have to view the jury's performance in relation to the alternative. One of the things I wanted to do just to introduce this topic is remind you that, especially after the O.J. Simpson trial, there was a lot of discussion about a trend in juries to become acquittal-prone.
Dershowitz's book The Abuse Excuse, Jeffrey Rosen in The New Yorker, and a number of other commentators perpetuated this belief, based upon no systematic evidence whatsoever, that juries had become more and more acquittal-prone. Sara Beale, two additional colleagues, and I went back to the Federal statistics and looked at conviction trends over 25 years. What we found was that in the Federal courts the actual conviction rates have increased substantially over this period of time. Now, a lot of this may, in fact, be due to the way the cases are filtered up through the system, but the point I'm making is that the trend is certainly contrary to the hypothesis that conviction rates are going down. So, that was in the Federal courts. We also looked at five different States, because most criminal cases are tried there. When we looked at North Carolina, New York, Florida, Texas, and California, we found that the rates were either remaining stable over the period or, in fact, had increased, as they had in the Federal courts. Again, you've got to treat all of that with a grain of salt because of the way the cases are filtered up, but certainly it is contrary to the hypothesis that juries are acquitting at higher rates. Can I have the next overhead? In fact, what I want to do today is just give you a brief overview of some of the evidence that I and others set forth in the Kumho brief. You can consult that brief to get the basis behind it, if you wish, but I thought I would just talk about a few of these things. They are listed in the accompanying table below.

Empirical Research on Jury Competence and Jury Bias

A. Research findings lend no support to the view that juries have become increasingly acquittal prone.
B. Surveys of trial judges indicate very positive views of jury competence and diligence.
C. Trial judges' views of the case show high agreement with jury verdicts.
D. Studies comparing opinions of experts on negligence in complex civil cases show agreement with jury verdicts.
E. Case studies of jury competence in complex trials also lend little support to the claim that juries uncritically defer to experts.
F. Experimental research on jury understanding of complex evidence has produced no consistent findings that juries perform poorly.
G. There is scant evidence to support the view that juries are pro-plaintiff and anti-defendant in civil suits. In the criminal jury context the bias appears to be toward the prosecution.
H. Expert evidence is not the only evidence around which complex cases turn.

I've already referred to Point A: research findings lend no support to the view that juries have become increasingly acquittal-prone. Point B: surveys of trial judges indicate very positive views of the jury's competence and the jury's diligence. In fact, there were a couple of surveys taken in 1987 asking judges to respond, in some cases, to civil trials only and, in other cases, with respect to both civil and criminal cases. The trial judges overwhelmingly supported the jury, its competence, and its diligence. That's information that just seems to get lost in a lot of the criticism. You hear the anecdotes about the "one bad case I had," but in fact, there is pretty consistent, very positive support among trial judges. The third point I would make is that trial judges' views of the case show high agreement with jury verdicts.
Now, most of you should be familiar with the classic Kalven and Zeisel study of the American jury, in which trial judges were asked to give their views of the evidence and the proper verdict in both civil and criminal cases. Those judges' views were compared with the jury's verdict. There was around 80-percent agreement. Those data are almost a half-century old, and one criticism that could be made of them--they're cited so often--is that the evidence that appears before juries today has changed and become more complex. It turns out that there are a couple of recent studies that have replicated these findings. One is by Heuer and Penrod, who took a sample of judges from 33 States and asked them to provide detailed analyses of trials, both criminal and civil. Like Kalven and Zeisel, they had judges make their own ratings of the proper verdict and also indicate the degree of complexity of the evidence. It turns out that there was still high agreement between judge and jury, basically supporting the data of Kalven and Zeisel. Similarly, another study was conducted just recently by Valerie Hans, Tom Munsterman, and Paula Hannaford, involving civil cases in Arizona. Judges were asked to rate their views of the jury's verdict and the complexity of the evidence. This study, too, ended up showing high degrees of consistency between judge and jury. So, with respect to Point C, there is pretty high agreement. Consider next Point D: studies comparing opinions of experts on negligence in complex civil cases show agreement with jury verdicts. It's often said that juries get confused by the expert evidence. In fact, the American Medical Association was on record several times stating that juries decide cases differently from doctors, because doctors, after all, have all of this knowledge and expertise. A study conducted by Taragin et al. in New Jersey compared jury verdicts with the nondiscoverable ratings made by the insurance companies: every time an incident occurred, the companies had their own experts rate whether negligence had occurred or not. When Taragin et al. compared those ratings with the jury verdicts, there was high and consistent agreement between the experts'--the doctors'--ratings of negligence and causality and the verdicts that the jurors rendered. The Taragin et al. study was supported by a couple of additional studies conducted by other researchers. So, once again, when we look at these kinds of data, we see very high agreement in these complex medical malpractice cases with the experts' views of the evidence. Case studies of jury competence in complex trials also lend little support to the claim that juries are uncritical of expert evidence. This is Point E. Now, several of the case studies have disagreed with the jury verdict in some ways, but even the ones that said the juries did not perform at an optimal level concluded that the jurors were generally skeptical of, if not negatively disposed toward, many of the medical experts who testified in an asbestos case and also in Bendectin cases. Ivkovich and Hans conducted in-depth interviews with 55 jurors from a number of cases, and here's what their conclusion was: "The claims that jurors either ignore or accept uncritically expert testimony seem far-fetched. We observed a good deal of critical assessment of experts, their credentials, and their motives for testifying.
Jurors do not appear to be as naive as some commentators have assumed about the financial and other motivations that may lead some experts to be hired guns. Furthermore, when jurors are faced with the difficult task of evaluating evidence that is outside their common knowledge, they rely on sensible techniques, assessing the completeness and consistency of the testimony and evaluating it against their knowledge of related factors. For especially complex topics, the jury relies on its members who possess greater familiarity with the subject matter of the expert testimony." So, basically, the case studies, when they're looked at systematically, also end up showing substantial support for the jury system. Now, when I'm making these comments, I should just pause here for a moment and say this is not to say that every jury gets it right. What I'm talking about is an overview of the average jury and how it performs. Point F. There is a substantial body of experimental research on jury understanding of complex evidence, and it, too, has produced no consistent findings that juries perform poorly. There are a number of studies showing that jurors have difficulty responding to complex probabilistic and statistical evidence, but as I've indicated to you, there's some evidence that judges have the same problems with it. Jurors will tell you straight away that they do have difficulties with these complex things, because this is beyond the comprehension and everyday experience of the layperson. But there are also some studies showing that jurors' values and beliefs--and this particularly applies, I think, to criminal cases--influence the way they tilt. In criminal cases, the burden of proof is supposed to be on the prosecution, but often jurors tilt in favor of the prosecution. There is some evidence suggesting that jurors evaluate evidence in a way that is more consistent with the prosecution's view, but there are some other studies that contradict that as well, and I just need to indicate that this gets into some hairy kinds of findings. But, by and large, there is no evidence that juries consistently perform poorly. With respect to Point G, there is scant evidence to support the view that juries are pro-plaintiff and anti-defendant in civil suits, as is often implied, and I've already indicated to you that in the criminal jury context, the bias appears more often than not to be toward the prosecution. My final point, H, is one that may explain some of that consistency, and it's just worth thinking about. Expert evidence is frequently not the only evidence around which complex cases turn. The expert evidence is there, but there's often a question of who said what to whom: What did they know, and when did they know it? Whether we're talking about complex fraud trials or about corporate misconduct in a tort suit or a medical malpractice case, you have to evaluate all of that expert evidence in the context of the other evidence that the jury hears. So, what I've done today is give you a brief overview of empirical findings. The studies are much more detailed, but there is a very substantial body of literature on the subject. It tends to paint the jury as a competent decisionmaker. If the jury is communicated to properly by the lawyers and experts and instructed properly by the judge, it performs reasonably well most of the time. Prof. Lawrence M.
Solan: I'm glad I didn't go first, because if jurors are no worse than experts and professionals in performing their tasks, I'm going to talk about how they're no better than that, and it's better not to start the morning that way. My goal is to show how--or to ask how--advances in linguistics and the psychology of language can explain some recurring problems in jurors' comprehension of both evidence and jury instructions. Once we ascertain what makes a concept hard for jurors to understand, we can then ask whether it's possible to present that concept more effectively in a different manner. On the other hand, if we conclude that it would be very difficult to remediate particular problems by improving presentation, then we might want to ask larger questions about the appropriate role of the jury in particular situations. I'd like to touch briefly on three issues that I think are important ones when we consider the efficacy of presenting scientific evidence--for that matter, any evidence--to juries: first, the problem of presenting jurors with instructions that define the crimes and torts on which they are told to base their deliberations; second, the difficulty with burdens of proof, in particular the concept of proof beyond a reasonable doubt; and third, the ease with which people are willing to accept evidence of association as causation. Now, I'm going to suggest that these legally significant concepts--causation, proof beyond a reasonable doubt, and definitions of crimes and torts--cause problems for jurors and for everybody else--you don't get a cognitive remake when you get your jury duty notice--for essentially the same reason. We conceptualize by forming mental models of the world that have certain characteristic structures. However, some of the tasks that we ask jurors to perform are at odds with the structure of their concepts and, therefore, hard for them, as they'd be hard for anybody. Let me talk about what I mean by mental models. Traditionally, linguists and philosophers of language assumed that to know the meaning of an expression was to know the set of conditions under which that expression is true. This is sometimes called the classical approach to word meaning. It says that to know a word is to define the concept by enumerating the conditions that are, in sum, both necessary and sufficient for that concept to obtain. An example that one used to hear, say, in the 1960s, was that a bachelor is an unmarried adult male. Now, this definitional approach to meaning fits very nicely with a legal system that functions by promulgating rules, for if we can articulate the elements of a crime or tort or burden of proof--that is, if we can articulate all and only the conditions under which the crime or tort has occurred--then we can govern ourselves in an orderly fashion, with notice as to what's right and what's wrong, what's proscribed. Jury instructions are typically structured in that way. They present a concept, such as proximate cause or reasonable doubt or each of the elements of kidnaping, and then tell the jurors that this is how you're to deliberate. Beginning in the 1970s, linguists and psychologists began to discover that this definitional approach to conceptualization doesn't seem to characterize the way we think very well. First of all, the definitional approach fails to explain the intuition that some things are better members of a category than others.
In early experiments, the psychologist Eleanor Rosch found that people always consider, for example, a chair to be a really good example of furniture and a piano to be an iffy example of furniture. Second, definitions have trouble capturing the fact that concepts get fuzzy at the margins. Sticking with furniture, when does a chair get wide enough to become a love seat? And when does a love seat get wide enough to become a sofa? Well, at some point, it does, and at some point, you say, "That's a peculiar looking piece of furniture." The concepts do get fuzzy. Third, some concepts, such as "game," as Wittgenstein taught us, seem to be better described as family resemblance categories. It's very difficult to know what makes a game a game. Yet, we know one when we see one. We can account for all these observations if we claim that people conceptualize in terms of mental models. Now, among the things that our mental models contain is information about prototypical instances of a category. When I say "swimming pool," you probably envision what looks to you like a swimming pool and not a set of criteria for what makes a swimming pool a swimming pool. When I say "Doberman pinscher," you envision one, getting back to Dr. Caskey's dog metaphor. These pictures in our mind are schematic and incomplete, but they're no doubt part of what it means to have a concept. Certain linguists and psychologists have made much of this, building large theories around these observations. This isn't to say that necessary and sufficient conditions are irrelevant to conceptualization. In acquiring new concepts, we automatically search for unique features that make one category different from others. If the male and female of a species of bird differ with respect to their tail plumage, then we have no trouble saying so. The presence of the plumage is sufficient to define the sex of the bird, and experiments by various psychologists, including Douglas Medin and Philip Johnson-Laird, show that we do use such features of uniqueness in conceptualizing. Nonetheless, our concepts and our knowledge of meaning certainly contain information about prototypes and saliency. Finally, our cognitive models typically contain information about what's true--what's in a concept--and not about what's false--what's not part of the concept. That's not always the case; there are certain negative concepts, like "impossible" and so on, but it's typically the case. This fact is part of what makes Magritte's famous painting of a pipe with the caption "This Is Not a Pipe" so strange. We usually don't think in terms of what a thing is not. Although we're able to do so, experiments show that this is harder, a fact that's routinely recognized by experts in English composition and by those who specialize in rewriting jury instructions to make them clearer. Now, let me illustrate these points, if I could have the first slide, please. This is the definition of Doberman pinscher from the dictionary that I use, and you see it has all of these things. It has a picture--not a great picture, a schematic picture. It has some necessary conditions--one of a German breed, large, slender, etc.--and then it says "usually black or brown." Those are salient or prototypical conditions. And it doesn't have negative conditions. It does not say "this is not a cat." Those particular features--saliency, necessary and/or sufficient conditions, some schemata, and the absence of what's false about a concept--can explain certain problems that arise with a jury.
And the fact that we assimilate new information into models based on prototypical instances of a category has been used to explain a wide range of legal phenomena. But most pertinent, there is experimental evidence showing that the jury is subject to this as well. In a very interesting set of studies, the psychologist Vicki Smith asked subjects to list what they considered to be the attributes of certain crimes, and they didn't have any trouble doing this. So, for burglary, they said something of value was taken, it occurs in a home or apartment, there's a break-in, and the purpose is to steal. Well, these characteristics don't match the legal definition of burglary, which requires the government to prove that the defendant, without authority, knowingly enters a building with intent to commit a felony therein. Smith then presented subjects with stories that contained various of the most typical attributes identified for each of the crimes. Burglary was only one of them. She subsequently gave them jury instructions and asked whether the defendant in the story should be convicted or acquitted. She found that the more of the typical attributes the story contained, the more likely subjects were to convict. Less of a factor was whether the defendant had, beyond a reasonable doubt, committed all of the elements of the offense. Additional experiments showed that it was extremely hard to dislodge these preconceived notions of what it means to commit a crime, even with specific instructions of all kinds. All of this suggests that jurors will tend to associate facts with legal categories based on the closest conceptual match to models that they already have, rather than on the government having proved each element of a crime beyond a reasonable doubt. Now, this is no problem when the government does prove all elements of a crime beyond a reasonable doubt, as it often does. But if Smith's experiments reflect reality at all, it should be difficult--too difficult, perhaps--for the government to convict a guilty defendant of a nonprototypical crime. Some may say that the recent Susan McDougal acquittal for criminal contempt might fit into that. At the same time, it should be relatively easy for the government to convict an innocent defendant when the defendant's conduct, while not meeting all the elements of the crime, comes close to the jurors' prototypes. If this is true, what can you do about it? One thing you can do is ask judges to be active in granting motions to dismiss either indictments or cases when the government hasn't offered that proof, because there really is a risk that a conviction will occur that shouldn't. The choice between looking at the elements of a crime and looking at prototypical models happens all the time when judges interpret statutes as well. Smith v. United States is a well-known case. There are sentence-enhancing statutes that say if you use a firearm during a drug-trafficking crime, there's an enhanced sentence. In the Smith case, somebody was accused of attempting to trade a machine gun for cocaine--and he bolted, didn't do anything, didn't trade anything--but he was indicted for that and convicted, and the conviction was affirmed by the Supreme Court. Justice O'Connor, writing for the majority, said, "Well, it says use a firearm, and here's what 'use' means," looking it up in many dictionaries: trading is a kind of using.
In contrast, Justice Scalia, for the dissent--it's always nice when he dissents from the left--said, "Wait a minute, that's not what you think of when you think of using a firearm. You think of using it as a firearm." We had a tension between the elements on the one hand and a nonprototypical use on the other. There are many, many cases. Again, I'm agreeing with Professor Vidmar's notion that jurors and judges are obviously not different kinds of people; they just have different jobs. Well, let me give another example, and that's burdens of proof. The mental model approach to reasoning predicts that we might have trouble with the concept of proof beyond a reasonable doubt. One might think that the notion "proof beyond a reasonable doubt" suggests that the government puts on proof--that's one model--and then we see whether the defendant can raise reasonable doubt to negate the model that the government proposed. So, the defendant has a chance to supplant the government's model by raising reasonable doubt. In fact, we hear this kind of talk in everyday parlance all the time; a LEXIS search of newspaper articles shows people talking that way constantly. But in our system, we profess that the defendant has no responsibility to raise anything. All the proof in a criminal case is supposed to be the burden of the government. So, what one might expect here is, in cases in which the government's case is weak--meaning the evidence is equivocal or just not very strong but the defendant has no real way of rebutting it (say, because the defendant wasn't there and can't testify because of prior convictions, which would be the end of the case)--we might get convictions where we shouldn't get any. Well, one solution is to change the jury instruction. An excellent solution has been proposed by the Federal Judicial Center (FJC), which said we should be talking more in terms of whether the government has left the jury firmly convinced of every element of the defendant's guilt. It tells the jurors that they need not separate out the government's burden from the defendant's. Rather, it tells jurors to convict only if the government's model of what happened remains firmly embedded as the only reasonable possibility after hearing all the evidence from both sides. May I have the second slide, please? There's an excellent bit of research by Irwin Horowitz and Larry Kirkpatrick that tests a variety of jury instructions in two contexts--a weak case, where they tried to make about half the evidence favor the prosecution and the other half favor the defense, and a strong case. There is a whole host of instructions, but FC (firmly convinced) is the one that I'm saying changes the entire focus of the trial to the government's case, rather than trying to define reasonable doubt in various ways. That is the FJC's standard. The firmly convinced approach to burden of proof is the only one which, for weak cases, doesn't lead to convictions, at least experimentally. This, of course, is consistent with some of the things that Professor Vidmar was saying about potential prosecutorial bias, and it also suggests that appellate courts, legislatures, and jury reform commissions can at least play some role in ameliorating some of these problems.
Finally, I'd like to talk about some issues where there is experimental evidence, but the evidence isn't about juries; it's the basic research in cognitive psychology about some problems that might arise more frequently when it comes to scientific evidence. Again, I agree entirely that these problems are going to arise in juries to the extent they arise for judges and everybody else. One significant theme that runs through the literature on scientific evidence is the seemingly ubiquitous tendency on the part of expert witnesses to demonstrate that two events have co-occurred and then to draw conclusions about one causing the other. The fear in the scholarly literature is that, even if such testimony is subjected to rigorous cross-examination, jurors may make too much of it. In fact, we all draw inferences of causation from evidence of co-occurrence in everyday life, and sometimes that's not a good thing. Often enough, it is a good thing. To give you an example from my own life, all too close to home: the parents of teenagers are familiar with advertisements that tell us that students who take a prep course for the SAT typically score higher than those who don't. Many, not wanting to risk their children's future on matters of logic, enroll their children in the courses. But of course, we have no idea whether the prep course causes them to score higher. It may well be that those who enroll in the prep course are the most industrious or most studious group, who, through studying--however they were going to study--would do just as well if they hadn't taken the prep course. But it's a very easy inference to make, and we make it. Now, I don't mean to take business away from these people. I just use this example to demonstrate that the way we think makes this inference very easy to make, and the literature on mental models explains this by virtue of something I said earlier, which is that our mental models typically contain what's true but don't contain what's false--the "this is not a cat" that I mentioned. Causation, however, is a very complicated concept. It involves more than one model at the same time. First, it does require association. For the prep course to cause students to do better, they actually have to take the prep course and do better. But it also contains a but-for notion. It says that, had the students not taken the prep course, they would not have done so well, and this second fact about causation involves entirely hypothetical thinking. It asks us to build a counterfactual model of a possible world in which the students didn't take a course that they actually took and then to draw conclusions about this imaginary model, about this possible world that doesn't exist. Well, the psychological literature suggests that this puts a cognitive load on people, and in thinking about causation, they sometimes don't get that far. I think some of yesterday's debate among Drs. Caskey, Gardner, and Lederberg, about the use of genetic research and prison populations and so on, reflected exactly that kind of fear. There was a fear that if we do this research and publicize it, won't people make too much of it? Well, I think that is right. The intuition is that people make too much of it, and, of course, you don't stop making too much of it when you get the notice for jury duty, even if you don't make more of it than everybody else.
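The prep-course example can be made concrete with a small simulation. The sketch below is illustrative only--the single "industriousness" trait, the enrollment rule, and every number in it are invented assumptions, not figures from the panel--but it shows how a course that adds nothing to scores can still leave enrollees with higher average scores:

    import random

    random.seed(1)
    students = []
    for _ in range(10000):
        industrious = random.random()                  # latent trait, 0 to 1
        takes_prep = industrious > 0.6                 # the industrious enroll
        # The course contributes exactly zero points to the score.
        score = 400 + 800 * industrious + random.gauss(0, 50)
        students.append((takes_prep, score))

    prep = [s for t, s in students if t]
    no_prep = [s for t, s in students if not t]
    print("mean SAT with prep course:   ", round(sum(prep) / len(prep)))
    print("mean SAT without prep course:", round(sum(no_prep) / len(no_prep)))

Enrollees outscore non-enrollees by hundreds of points even though the course's causal effect is, by construction, zero; the counterfactual world in which the same students skip the course would look exactly the same, which is the but-for point made above.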
What can you do about this? Recognizing this fact about mental models, you can spend a good deal of the trial making sure that jurors don't draw inferences of causation without recognizing the extent to which their conclusions require them to use these negatively based models. This can be done through opening statements, good cross-examination of experts, closing statements, and effective jury instructions. I'm not saying that the problem isn't remediable, but it certainly is a difficulty. I'm running out of time, but I want to point out that there is a related problem that deals with if-then statements, which are very typical in science: "If P, then Q." If somebody is given "If P, then Q" together with P, they readily infer Q. But if you have the statement "If P, then Q" and Q is false, people have a lot of trouble inferring that P is also false, which logically follows. Here's an example from A Civil Action, if I could have the last slide. I won't go through it all, but if you look at the very top line--if you would lower it, please: "If the TCE in the wells had, in fact, been drawn from out of the river, you would expect to find traces of TCE in the riverbed." That's Schlichtmann cross-examining an expert--if P, then Q--and sure enough, the expert agreed to that. It turns out there was no TCE in the riverbed. Therefore, one can conclude the negative of P. But Schlichtmann had trouble. He had trouble, either as a strategic matter or whatever; he didn't go that far, although the judge bailed him out. These kinds of inferences, the psychological literature indicates, are hard for anybody to make, and it shouldn't be that hard for scientists to present their evidence in the positive way rather than the negative.
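In schematic form--this notation is supplied here for clarity and was not on the slide--the two inference patterns at issue are:

    \text{modus ponens:}\qquad P \to Q,\;\; P \;\;\therefore\;\; Q
    \text{modus tollens:}\qquad P \to Q,\;\; \neg Q \;\;\therefore\;\; \neg P

In the A Civil Action exchange, P is "the TCE in the wells was drawn from the river" and Q is "traces of TCE appear in the riverbed." The absence of TCE in the riverbed is not-Q, so not-P follows by modus tollens--the step that, as the literature predicts, is the hard one.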
I'm out of time, but I want to conclude by saying that jurors, while they're not worse than anybody else, are not better than anybody else in thinking, and there are opportunities in the legal system to make it easier for them. Thank you. Dr. Arthur H. Patterson: I kind of like the lights down like this, right? Everybody comfortable with that? A little more soft and comfortable. You know, I promise you that I am not going to answer any questions about jurors' comprehension of scientific evidence, and the reason I'm not going to answer any questions about it is because I don't think there's a question. I don't think there's an issue. It's really very simple, which is, sometimes human beings understand things and sometimes they don't. And when they understand it, it's usually because somebody made it clear to them, and when they don't, it's usually because someone didn't make it clear to them. And I'm going to come back to that idea in a few moments. But what I'd like to start with is something that occurred to me as I was listening yesterday. No, that's not really true. I'm going to be good. I'm going to be honest with you. Of course I thought of this for weeks and got it all ready for you. I didn't make this up yesterday, but the speakers yesterday brought it to mind. I was thinking about whether, if 100 years from today there were a conference such as this, the so-called legal and scientific experts of that day, looking back at what we were doing, would be thinking some of the things that you're going to be thinking when I give you the following information. For example, if you were judges in a court in Italy in 1610, and Francesco Sizzi, a leading professor of astronomy at that time, told you the following--if he said, and this is a quote--"Jupiter's moons are invisible to the naked eye and, therefore, can have no influence on the earth and, therefore, do not exist," would you as the judge prevent Galileo from testifying to what he had just seen with his new telescope? Now, I would suggest you would, because no one else had a telescope at that point, you know, certainly not one reliable in that way, and you would just keep Galileo out. I think Galileo might be a good witness. Sixteen moons we now know for Jupiter, by the way. Well, let's get more modern. What about 1807, Weston, Connecticut? Anybody here from Weston, Connecticut? Good. Then I can insult your town. Here's what happens in Weston. I'm kind of fudging the story a little, but the facts are accurate. The police arrest a neighbor who had been feuding with his next-door neighbor, because they think, under the cover of darkness, he heaved a stone through the guy's roof. The defense wants to call an expert witness in the guy's defense. But the judge relies on no less brilliant an expert than Thomas Jefferson to keep the expert out. Here's what Jefferson said: "I could more easily believe a Yankee professor would lie"--Thomas Jefferson didn't know professors like I know them--but I'll start over: "I could more easily believe a Yankee professor would lie than stones would fall from heaven." So, a leading astronomer who was going to explain about meteorites is kept out. In Weston, they found a meteorite in 1807, and it started the whole theory of meteorites falling to earth. One more example. No, a couple more examples. I think they're important. You think Einstein might be a good witness? Do you think Einstein might be someone who was capable of testifying in our courts? How about George Francis Gillette, in 1929 a leading American engineer? He said: "By 1940, the theory of relativity will be considered a joke." How about the president of Duquesne University? Here's what he said: "We certainly cannot consider Einstein as one who shines as a scientific discoverer of the main theories of physics but, rather, as one who is, in a fuddled sort of way, merely trying to find some meaning for mathematical formulas in which he himself does not believe too strongly but which he is hoping against hope to somehow establish. Einstein is not a logical mind." All right. Let me be current. I mean very current, all right? How about the last few years? Anybody know the name Dr. Ian Wilmut? Does that ring a bell? Okay. Good. A few hands. I'm not going to call on you, so don't worry. (If I say I'm not going to call on you, now everybody knows him, right?) Yes, I know that, right. Well, try this. In the mid-1980s in Science magazine, Drs. McGrath and Solter wrote, "The cloning of mammals by nuclear transfer is biologically impossible." In 1993, Michael Frohman, a biologist at the State University of New York at Stony Brook, wrote, "Research has shown that cloning mammals is theoretically impossible today or with any technology realistically within sight." I wish I could get this blown up. It's Dolly and her clone, right? Dr. Wilmut was able to do it! What's the point I'm trying to make here, folks? The point I'm trying to make is, we have to be real careful about deciding what's true and what isn't, what jurors need in deciding the truth and what jurors don't need. I don't have the answer.
I'm just saying, 100 years from today, they may be laughing at our definition of Daubert. But let me turn more directly to the topic of juror comprehension. Let me ask you this question: Why should jurors believe experts? Why should jurors take your scientific point of view? Some scientists told us we had a flat earth. More recently, some scientists told us fiber protected against colon cancer; now they tell us it doesn't. You know all those stories. I can't tell you which one's right and which one's wrong. All I can tell you is that jurors are constantly bombarded with important scientific information that tends to turn out to have been--junk science? Jurors wouldn't use that term. Good scientists make things go in either direction. Let me start with a hypothesis--with a hypothetical, which Neil has already addressed--that jurors have trouble understanding complex scientific information. Which, frankly, I don't believe, but let me throw it out there on the table. And my question to you is, why do jurors have trouble understanding complex scientific information? Let me tell you that it is in lawyers' and judges' best interests to blame jurors when they're not comfortable with a verdict. You know, it's a lot easier to say the jury didn't understand than to say our system didn't work or to say that the lawyer did a bad job of presenting his or her case. So, I get a little uncomfortable when people blame jurors for not reaching the right scientific conclusion, and I would side with Neil: our experience has been that jurors do a pretty darn good job of trying to evaluate the experts and trying to reach what they think is the right verdict, the fair verdict. Let's talk for a minute about what we think is going on with jurors when they're listening to experts. Now, when I say "we think," I am going with the technical research, the published literature that you've heard about from these two speakers and that Shari Diamond has written on. There's a great body of literature on this. But I'm going to tell you--and it's very consistent with what these people have published--what we hear when we talk to actual jurors post-trial and when we do research with mock jurors pre-trial, and the numbers of jurors we've done this with are in the tens of thousands. And what we find is that they're trying to view this trial through their life experiences. What else do they have to view it through, all right? They bring their life experiences to bear. And it gives them certain expectations about what they're going to hear and see and how to interpret it. And let me say that, in the act of persuasion, whether it's in life in general or in the courtroom, all the act of persuasion is, is trying to get someone to see that a particular outcome is consistent with their life experiences, consistent with their expectations, consistent with their values and attitudes. And what frequently happens is that the lawyers and scientists present a case which is inconsistent with jurors' life experiences, beliefs, attitudes, and values. Or at least one side in a case does, and the other side packages it, spins it--and I use those terms on purpose--in a way which is consistent with the jurors' beliefs, and the jurors accept that and vote that way. Look at the term "communication" for a moment. I have to admit that I looked in the dictionary before I came down here to San Diego, and the first definition of communication is simply transmission. It doesn't say anything about persuading or anything like that.
I guess if you tap a Morse code key and send out a signal into space, that's communication; you're transmitting. Whether it persuades anybody is a different issue, and I want to talk about that a little bit. The question I have for you is, what is it we're trying to communicate, or what are the facts that we're trying to persuade a juror of? I would offer that only academics and judges think that what we're trying to communicate in the courtroom is the truth, all right? I would offer that lawyers would tell you, truthfully, that what they're trying to do is win. Isn't that your job, those of you who are advocates in the courtroom? Isn't that what the system is about--advocacy? You're trying to win. And so, what you're trying to do is spin the facts, package them in such a way that they meet the expectations of your jury, of your audience, so that they understand it, believe it, and will vote for the outcome that's consistent with their view and your view. So, I ask you, what's wrong with spinning the science? What I hear in this conference--and I don't mean this pejoratively; I'm trained as a scientist and I believe it--what I hear is, science is the golden truth and we have to put out the facts. Well, I think as a scientist you have to do that, but as a lawyer, if you're concerned about jurors comprehending what you have to say and siding with your side of the case, you have to put a spin on that science. I see some uncomfortable faces out there. Sorry. It's the way it is. You spin everything else in the courtroom. Why not spin the science? Now, let me talk a little bit about how you get jurors to comprehend this, comprehend the scientific evidence. This has to do with, are they going to believe it, are they going to understand it, do they hear what you want them to hear in the courtroom? And what I would offer is, it is the witness's job and the lawyer's job to present the scientific testimony in a way that the jurors believe it, understand it, and get the information you want them to get. If that doesn't happen, it's the lawyer's fault first and the witness's fault second. I blame the lawyer first, because the lawyers work with the witnesses on what to say. Now, witnesses have certain problems that I think are very simply understood, and it is this: You're not a scientific expert unless you speak in jargon. Those are the rules, right? Come on. How many--I know there are a lot of scientists in the room--how many of you give big speeches and say it in plain English? No. The more jargon you use, the more scientific you are. Participant: [Inaudible.] Dr. Arthur Patterson: Always the way? Participant: Always wrong. Dr. Arthur Patterson: Jargon is always wrong. Participant: Absolutely. Dr. Arthur Patterson: God, I love getting votes when I'm up here speaking. That's right. Jargon is wrong. Let me tell you something. I cannot tell you how many times I've been sitting with lawyers and their expert, and the expert witness talks in jargon, and what does the lawyer do? Picks up the jargon. It feels real good. "Oh, I'm a smart lawyer. I can speak about SNPs and DNA mitotyping and whatever as well." Shouldn't it be going the other direction? So, what I'm saying is, the lawyers and experts should speak English. Use visuals. We heard the "use visuals" advice. But let me ask you a question about visuals.
If 80 percent--I've got to be careful, because this was said yesterday--if 80 percent of what's seen is remembered and only 15 percent of what's said is remembered, then why don't you just use your computer to blow up every sentence that's going to be said in the courtroom and not speak? Obviously, there's a compromise in there. Visuals are great for certain purposes. Let's talk for a minute about what makes the expert credible to the jury. It's not credentials, folks. It's experience. If I asked you in this room right now who you wanted to paint your house, and I said here's a house painter who went to the Old Dutch Boy school of house painting and here's a house painter who's painted half the houses on the block and they look great, you'd take the person who's painted the houses on the block. And to get scientific, how many of you have children? Show of hands. Oh, look at that. We have a fertile audience here. Okay, intellectually fertile. If you got home today and your child was sick with some unusual type of thing that's going around, would you say, "Oh, I need to call Harvard and get the best pediatrician," or would you ask your friends and neighbors which pediatrician in this town has been treating this and has success and whom people like and know? You know the answer to that, and jurors know the answer to that. So, for jurors, credibility comes with the witness's experience. But it also comes with the witness's style. Can the witness talk in a way that teaches, educates, that is understandable and comprehensible? Now, let me ask you a question. Suppose two scientists are going to testify in a case, one for the prosecution and one for the defense. Scientist 1 meets all the criteria of Daubert and credentials and experience, and everyone in this room would say "great scientist," and this scientist says--and it's crucial to the case--that A equals B; he does it for the prosecution. Then the defense gets up and has an expert, and that expert scientist meets all the criteria of Daubert, and everyone in this room would say he is a great scientist and is ethical beyond belief, and that defense scientist gets up and says, "No, A does not equal B." Let me ask you: What is the comprehension issue for jurors? What is there to understand? There is nothing to understand. Any reasonable layperson would have to say, "I don't understand." How can you understand that? Where is the answer? So, perhaps the problem lies not with jury comprehension but with the system. And perhaps we should be comfortable with our system and with jurors saying, "If I don't understand, then perhaps reasonable doubt plays a role here." And in civil cases, perhaps a finding of no liability plays a role. If the jury doesn't understand, that must be for some reason. Now, in my last 30 seconds--I was asked to speak about the impact of jury selection on trials where scientific evidence is crucial, and there are really two things to talk about. The first is whether scientific jury selection--of which I really don't think there's any such thing--whether sophisticated jury selection techniques can somehow stack juries and impact these trials. And I say no, it can't, it really can't. I'm telling you that from experience, and the reason why is, first, limits on the process. You only have a few peremptories to use. But second, limits on our knowledge of what's going on in jury selection.
Let me ask you, if you have a case where you have complex DNA evidence, and you are the prosecutor, do you want a smart juror because that juror will understand DNA? Or do you need to be afraid that that smart juror might say, "Oh, statistics don't look quite right, I'm not sure about the sample." I can make the same argument for the other side. I don't know if you want the smart juror or not, and in the same way--and it came up yesterday at lunch--do we want blue ribbon panels for cases involving scientific evidence? And I would strongly argue that we don't, and I would argue it for very nonlegal reasons. I would argue it instead for what I would consider both psychological and--I can't find the right word here--citizen's duty issues, and it's really this: If our laws can't be comprehensible to all citizens, if the finding of fact of who's right and who's wrong, guilty or not guilty, can't be decided by any reasonable citizen, then perhaps we need to change the law. If it comes down to where you have to be a scientific expert to decide who's right and who's wrong, then I think we're getting a long way from the constitutional jury system that we have. And at this point, I'm out of time, and thank you very much. Dr. Shari Seidman Diamond: I think we've gone from sort of one extreme to the other and then back to the middle, and so, I'll come back to the middle but get a little more specific about some special problems that jurors have. Evaluating quantitative evidence is one of the most challenging tasks that jurors face. Moreover, scientific evidence is becoming more common and, in many situations, is expected in the courtroom. Social scientists have begun to map out some of the difficulties that triers of fact face when they're asked--and by triers of fact I include both judges and jurors--to evaluate such testimony, and to suggest ways that we can help triers of fact to navigate this potentially confusing terrain. Unfortunately, the difficulty facing both judges and juries who must evaluate quantitative evidence is magnified in at least two ways, beyond the normal difficulty they face in evaluating ordinary complex evidence. One is that those folks are not likely to have some of the specialized knowledge that experts have in probabilistic and statistical inference. Concepts like P values and regression equations are not generally common talk around the water cooler. The second way that quantitative evidence presents a special challenge is more insidious--laypersons may have some inaccurate "knowledge" about how they should evaluate quantitative evidence. Now, I want to describe for you a very specific study that gives you an idea of what happens during jury deliberations, reveals the kind of inaccurate "knowledge" I am talking about, and shows the role of deliberations in responding to that misinformation. Courts frequently argue that some of the problems with individual juror comprehension aren't troublesome because they will be corrected during deliberations. Judge Bower of the Seventh Circuit said this in the Free case, and there are other instances of that expectation in the literature. In a recent study on the effects of expert testimony, Jay Casper and I looked at the responses of deliberating mock jurors to statistical evidence in an antitrust case involving price fixing of gravel in the road-building industry. It was based on a real case. I want to tell you a little bit about the study so you have a sense of where the data are coming from.
The study involved actual jurors who were down at their one-day, one-trial court service in Cook County, Illinois, and who were willing to participate in our rather elaborate simulation. We got good cooperation from the court, and a 91-percent rate of participation from the jurors, so this was a very representative group of individuals serving on our case. It's a large-scale study with a number of experimental manipulations, but what I'm going to talk to you about are reactions during deliberations to some of the statistical evidence that the experts presented. This was a case involving price fixing, so that, as is typically true in such cases, experts on both sides described what would have happened in terms of the price of gravel had the price-fixing conspiracy not occurred. One expert presented a set of statistical regression models modeling the price of gravel over time based on labor costs and mechanical costs and making a prediction based on past behavior about what the price would have been during the period at issue if the price-fixing conspiracy had not occurred. There was also an expert on the other side who reported about the prices paid by a comparison company in a different State that was similar in a number of respects. That so-called yardstick model is sometimes used as a standard in antitrust cases. It operates as a nonequivalent control group. The statistical expert talked about the amount of variance accounted for by his model, among other things, explaining that this was a pretty-good-fitting model. One version of his model produced an R-squared of .80, which is good by any standard--in fact, better than you are likely to find in the real world. Another version of the model accounted for 75 percent of the variance--that is, an R-squared of .75. The expert explained, among other things, that if a model perfectly reflected price performance, then the R-squared would be 1.0. We have two kinds of measures that reflect how the jurors handled the challenge of this statistical evidence. After watching the videotape, the jurors were told that the price-fixing conspiracy had, in fact, taken place and that their job was to decide what the appropriate amount of damages should be in the case. They were obviously faced with a complex quantitative task where there was competing testimony. What happened when we looked at the comprehension measures that we took at the end of the case? We had two different sets of jurors--the deliberating jurors who deliberated before they reached a verdict and nondeliberating jurors who filled out the questionnaires without deliberating. Jurors were randomly assigned to be deliberators or nondeliberators, of course, so we could compare them to assess the effect of deliberation. It turns out that the jurors had very little difficulty recalling the specific verdicts that the plaintiff and defendant were looking for. They were about 85 or 90 percent correct on those questions--on recognizing the prices that had been in effect before or during the price-fixing agreement. Moreover, deliberations contributed significantly to improvements on at least one of those measures. In contrast, less than 60 percent of the jurors recognized the correct R-squared for the best-fitting model presented by the statistician, and deliberations did not improve that performance on the comprehension measure. We can also examine the deliberations themselves to see how jurors dealt with this quantitative evidence.
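Because so much of this testimony turns on what "variance accounted for" actually measures, a minimal sketch may help make the statistic concrete. The short Python fragment below is an illustration, not part of the study; the gravel prices in it are invented, and only the definition of R-squared follows the expert's explanation:

def r_squared(actual, predicted):
    # Proportion of the variance in the actual values that the model's
    # predictions explain: 1 - SS_residual / SS_total.
    mean = sum(actual) / len(actual)
    ss_total = sum((y - mean) ** 2 for y in actual)
    ss_residual = sum((y - p) ** 2 for y, p in zip(actual, predicted))
    return 1.0 - ss_residual / ss_total

# Hypothetical per-ton gravel prices over five periods (illustration only):
actual_prices = [5.00, 5.40, 5.90, 6.30, 7.00]
predicted_prices = [5.10, 5.50, 5.70, 6.40, 6.90]
print(round(r_squared(actual_prices, predicted_prices), 2))  # 0.97 for these made-up numbers

A model that predicted every price exactly would return 1.0, which matches the expert's point that a model perfectly reflecting price performance has an R-squared of 1.0.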
We videotaped all the deliberations and then coded all of the factual errors made by jurors during the deliberations. I do not recommend this as something to do in your spare time, but it provides a telling picture of jury behavior. We coded regression errors, other mathematical errors, and non-mathematical factual errors in recall of the testimony, and then we looked at the deliberation discussion to see which of these errors were corrected. I'll describe a few examples for you to give you a sense of what these errors looked like. A computational error would be something like, "It was $5 for each ton and there were 70,000 tons. Oh, well, then the award should be $250,000." Well, obviously, that is an error. The award, based on the juror's numbers, should have been 5 x 70,000 or $350,000, rather than $250,000. And if a juror did the wrong calculation and it was corrected to $350,000, we counted it as a corrected mathematical error. The second example is an uncorrected regression error, an error arising from the kind of mathematical problem that presents special difficulties for jurors. The foreperson says, "So, I took $420,000." That number is the lower of the two estimates that the plaintiff's expert offered for the amount of damages suffered by the plaintiff. "And then he (the expert) stated that his correlation coefficient was a 75 (meaning 75 percent or .75), which leaves a 25-percent margin of error. And so, I'm assuming that the margin of error is truly all error, and so I'm deducting 25 percent of what he said." Thus, the juror is treating the 75 percent of explained variance as if it meant that only 75 percent of the amount was justified. Juror #3 then says, "But in actuality there might not be 25 percent of error." The foreperson responds, "Right." Juror #3 says, "It may just be 10 or 15 percent." The others agree to 15 percent and go on. This is an uncorrected error, because they are using the deduction strategy, which is clearly in error. What they are really discussing is whether the 25-percent unexplained variance is the right number. But then they go on and say, "Well, maybe it's 15 percent." And then they can do the math itself correctly, because a 15-percent deduction from $420,000 gets them down to $357,000, and then they decide to round it off to $350,000. Finally, I want to show you an example of what I would consider a corrected regression error. The foreperson isn't always wrong, nor is the foreperson always the leader, but it happens in these two examples that the foreperson is taking the lead. This time the foreperson is taking the higher of the expert's estimates ($490,000). He says, "Okay, 80 percent of $490,000. Well, let's take 80 percent of $500,000 and we get to $400,000." This is the same deduction strategy we saw in the previous example. Juror #1 steps in and says, "No, I don't think that's the way you do it. I don't think that 80 percent has anything to do with the number outcome." Not exactly a felicitous way of saying it, but the juror is getting the correct idea. Juror #3 continues, "The 80 percent only says that 80 percent of the time the model works." Again, not exactly right, but on the right track. The foreperson responds: "Oh, so 20 percent of the time it could not be working?" Well, not quite, but the bottom line from juror #3 is, "Yeah, right, so actually it could be higher than $490,000," which is, generally speaking, on the right path, at least in terms of dispensing with the misunderstanding expressed in the deduction strategy.
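The arithmetic behind these deliberation excerpts can be laid out compactly. This sketch--again an illustration in Python, using only the figures quoted in the deliberations (the $420,000 estimate and the .75 R-squared)--contrasts the jurors' deduction strategy with what the statistic actually licenses:

estimate = 420_000  # plaintiff expert's lower damage estimate, per the transcript
r2 = 0.75           # share of variance the expert's model explained

# The erroneous "deduction strategy": treat the 25 percent of
# unexplained variance as pure error and subtract it from the award.
wrong_award = estimate * r2  # 315,000 -- not what R-squared means
# Haggling the "error" down to 15 percent, as this jury did, gives
# 420,000 * 0.85 = 357,000, which they rounded to 350,000.

# What the statistic actually says: the estimate is already the model's
# best prediction; unexplained variance means the true figure could be
# lower OR higher, not that a fixed share must be deducted.
correct_award = estimate  # 420,000, with uncertainty in both directions

The point of the corrected excerpt is exactly this: once the jurors stop treating the 80 percent as a discount, they recognize that the true damages "could be higher than $490,000."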
Now, I would like to show you what happens during deliberations, because when these regression errors occurred, they were rarely corrected. You can see how the correction rate for regression errors compares with the correction rate for other kinds of math errors and with non-math errors, that is, other kinds of factual errors. For any of these errors, there are three categories of outcomes that can occur: The error can be corrected; it can be accepted or incorporated--that is, picked up and used so that the jury is, in fact, adopting it; or it can be ignored. Sometimes jurors during deliberations say things that are inaccurate, but the rest of the jurors merely ignore them and go on from there. That happens with some frequency, because they are generally civil with one another; if somebody says something really stupid, the others just ignore it and move on. The non-math errors get corrected with a fairly high frequency. They are explicitly accepted or incorporated only 17 percent of the time. Math errors, apart from regression, get accepted or incorporated 18 percent of the time, but the regression errors are accepted or incorporated 42 percent of the time. So, the regression errors present a very different kind of mathematical or statistical error than the kind of computational or basic errors that I described for you. What does that mean? Well, I think what it means is that you are dealing with the particular kinds of statistical or scientific conceptions that Larry was referring to, a kind of prototype, an image of what the category means and what it includes. And for these jurors, when you use percent of variation accounted for to describe model accuracy, they easily shift to a percent of the estimate, because the prototype of a percentage is the proportion of the whole. As applied here, it leads jurors to systematically use the proportion of the whole--the number that they're really concerned with, the one they are trying to estimate--and to apply that in a crude sort of way and make what is a systematic error. What can we learn from these juror errors? Well, first of all, when jurors come to their task, they have systematic but sometimes inaccurate rules about how to handle the evidence, and more than deliberations are necessary to maximize comprehension. We have to understand some of the inappropriate prototypes that they use. What approaches offer the most promise? Well, for example, jurors might be told explicitly about the errors that they are likely to make in treating a percentage of explained variance as a percentage of the estimate. Explicit warnings are common in everyday life but rare in legal settings, where jurors are simply told what they should do and not what they should not do. And a second strategy is to provide some explicit examples showing how to analyze quantitative evidence under various assumptions. Here we might turn to some judicial innovations as a partial solution. Federal Judge Pamela Rymer of the Ninth Circuit, when she was sitting as a U.S. District Court judge, had the parties in a computer software patent case convene a tutorial for her. And it was so edifying that she had them repeat it for the jurors. That kind of innovation is only one of a number of innovations that are described in a book that was put out by the National Center for State Courts called Jury Trial Innovations. A couple of the others are worth mentioning--I guess Dr.
Lederberg would be pleased that one of them is allowing jurors to ask questions, a practice that regularly occurs in Arizona. I've watched this procedure in Arizona and seen very good questions being asked by the jurors. Arizona courts also permit presentation of back-to-back, opposing experts, which allows the jurors to see the experts--not one as part of the plaintiff's case and the other, days or weeks later, as part of the defendant's case, but back-to-back, where the jurors can absorb and compare and get the advantage of each expert's participation in the cross-examination of the other, because both are in the courtroom at the same time. I'd like to close with just one observation that brackets what was said about judges and juries. Most of the attention gets focused on juries. In a large antitrust case that took place in Chicago some years ago, a Federal judge--and it seems fitting, with all the talk about science versus anecdote, to close on an anecdote--a very smart Federal judge had just finished listening to extensive economic testimony and had a puzzled look on his face. He took a break and, while the jurors were out, said to the expert economist who was on the witness stand, "You know, well, I'm afraid there's a certain amount of jargon being used here, and I have to tell you that I'm not understanding it, and if I'm not understanding it, I think the jurors are probably having a lot of trouble with it." While they were having this little colloquy, a note came back from the jury room. One of the jurors had written down a question that he or she wanted the expert to address. The judge looked at the question and he called the attorneys over. He said, "Boy, I wish I'd asked that question." The juror's question suggested that while the judge was struggling, at least one of the jurors was successfully sorting through the economist's testimony. So, I think that the better we can help judges, as well as jurors, in presenting scientific testimony, the better pleased the scientists and the lawyers will be with the results. Now, we have 4 more minutes. If the members of the panel don't mind, I'd like to ask if any of the people in the audience have questions. Mr. Ruben A. Flores: I'll try to keep mine rather short. My name is Ruben Flores. I'm a criminalist working in the trenches, in the crime laboratory. The question is to all the panel, but it may have already been at least partly addressed by Mr. Patterson. And that is, if you have credible testimony from two experts on opposing sides, but there is a disparity in the credentials between the two, is there any evidence that the jurors will be swayed by the higher-credentialed person? Dr. Arthur Patterson: Yes. And I think Neil Vidmar's research also shows what our anecdotal experience shows, which is, jurors will consider credentials, they absolutely will, and especially when they have a debate over two different things being said, one of the things they will fall back on is the credentials. What I want to make clear is, one of the more immediate things they fall back on is, whom did they understand better? And so, the style of presentation helps resolve the debate as well. Participant: I have a question for Neil. I understand that when you take a body of evidence and show it to the jury and show it to the judge, you get similar results. But it seems to me that, when judges decide cases instead of jurors, they're considering different evidence. They have access to more information than jurors if they want.
And I wonder if you've considered the difference institutionally in the kinds of information that are used when judges make decisions and when jurors make decisions. Dr. Neil Vidmar: The Federal Judicial Center produced a study a number of years ago that certainly supports what you say. But it turns out that, when lawyers have a judge deciding alone, they throw the whole kit and caboodle of evidence at the judge. In contrast, they trim the evidence down for the jury, because they know the jury has to do it in a relatively short period of time. Thus, the judge is usually working with a much more complex set of evidence. My concern in these cases is, yes, the judge might also have more time to work on it, but I'm not convinced that the judge is making the right decision in the end. I mean, I think a lot of these cases just come right down to the individual judge versus a jury. Critics always think about a really smart judge that's handling the case--you know, the super judge--but although a lot of judges are very smart in what they do in law, they may not understand the science much better than the jurors, particularly 12 jurors working together. Dr. Arthur Patterson: But that's anecdotal. You really haven't tested it. Dr. Neil Vidmar: Yes. That is correct. Dr. Arthur Patterson: Have you tested whether, when judges decide cases based on the evidence that the judge has, the judge does any better than the jury with its different set of evidence? Dr. Neil Vidmar: Well, I don't think we have the evidence. It's very hard to study the judges. When we have been able to get judges and jurors responding to the same kind of evidence, the judges don't perform any better. Dr. Shari Seidman Diamond: There's also an interesting question here. Neil talked about the Landsman and Rakos study, and there is a question about which way that cuts. Because to the extent that the judge has more information that may be excluded from the jurors' purview, it may sometimes be information that would affect the judge in a way that would be inappropriate. So, there is reason to be concerned that it would cut one way or the other, and we just don't have the data to tell you which way, on balance, it cuts. Participant: The answer is that we haven't studied it, and I have a follow-up question. If you're comparing what the judge says should have happened in a case with what the jury did, how much do you account for the fact that the judge is not really all that passive, necessarily, in terms of shaping what the jury hears? I would expect there to be some kind of convergence, especially in a complex trial, because the judge will have shaped the evidence based on the judge's conception of the case as it develops. Dr. Neil Vidmar: Yeah. I agree. Participant: When dealing with statistical evidence, and particularly likelihood ratios and things like that, one thing I have found helpful in some cases is to have the expert himself or herself go through hypotheticals that would illustrate application of the model under various assumptions that the jury might make. And this deals to some extent with the kind of counterfactual reasoning bias that Dr. Solan talked about. You can actually take jurors through it and say, "Now, if, members of the jury, you think X, then . . ." Participant: [In progress]. . .
but I've encountered a good bit of resistance from judges to this kind of approach, with a variety of foundational objections--that either it's invading the province of the jury or that the hypotheticals are not adequately grounded in the evidence, and so on. And so, if you have any comment on that--is that an unresolvable dilemma? Dr. Shari Seidman Diamond: I think it's a wonderful way to do it. Let's ask Margaret, our evidence professor par excellence, whether the judges are right or wrong in excluding it. Dr. Margaret Berger: It should be possible. Dr. Shari Seidman Diamond: And you don't see any legal objection to it--so, we'd need an amicus brief to convince the courts to allow it. Participant: If I may, the psychological literature shows that, as people become expert in a domain, they do fill in all these various parts of mental models that aren't usually there and get very good at reasoning about a host of things that would be difficult for the uninitiated, supporting what you're suggesting. Dr. Shari Seidman Diamond: I think this is for Professor Vidmar. Relating to conviction rates, the problem I hear raised anecdotally, at least from some prosecutors, is not State rates or Federal rates overall, but rather questions about certain large urban areas, certain kinds of cases, and, in some respects, the racial or ethnic mix in terms of certain kinds of witnesses and defendants. I'm wondering, are there any more specific studies being done in localities, as opposed to on the overall conviction rate? Dr. Neil Vidmar: Yes, that's a worthwhile thing, and actually, we covered this in our article, to some extent. What we dealt with were just the broad strokes of conviction, because the general argument is that conviction rates are going down, and we said, well, where's the evidence on the other side? There may be some differences, a much more complex kind of role, where you have drugs or you have entrapment and things like that. And there may be some--we're hypothesizing--some jury nullification going on in these cases where there are concerns about the way the police behave. We do not have enough good data to break it down. We can hypothesize pretty well, but the statistics that we have available for this kind of research are just not refined enough for us to answer that question in a specific way. I think it's possible. I've talked to a number of practitioners who hold that view about jury nullification in specific types of cases, but then, you get to the question, is it right anyway? Are juries nullifying in cases where the concerns about what the police have done are correct? And this is where we go back to the other functions of the jury in terms of fulfilling a kind of community sense of equity. ------------------------------ Panel VI. Science, Technical Knowledge, and Skill: Who Is an "Expert"? Moderator: Vaughn R. Walker District Court Judge U.S. District Court Northern District of California San Francisco, California Panelists: Paul C. Giannelli Weatherhead Professor of Law Case Western Reserve University Law School Cleveland, Ohio Lawrence M. McKenna U.S. District Court Judge U.S. District Court Southern District of New York New York, New York Judge Vaughn R. Walker: The notion that trial judges have broad discretion to screen proposed expert testimony recently has received a rather hearty endorsement by the Supreme Court in the Kumho Tire decision.
Indeed, it seems to me that there is a consistent theme running through the Supreme Court's jurisprudence on expert witnesses over the last several years: the Daubert case, the General Electric v. Joiner case, and now the Kumho Tire case have all stressed the importance of the trial judge's gatekeeping function. And to a trial judge, I must say it's heartening to have the endorsement of the Supreme Court that this responsibility belongs to the trial judge rather than to the appellate judges, as some of the appellate decisions suggested was the case. But how the trial judge exercises that gatekeeping function is going to be the focus of the remarks of our presenters in this portion of the program. I'm pleased to have the opportunity to introduce two speakers who will be able to address some of the key issues in the trial judge's gatekeeping function: identifying who is a proper expert to testify and what is a proper subject of expert testimony. Professor Paul C. Giannelli is the Albert J. Weatherhead III and Richard W. Weatherhead Professor of Law at Case Western Reserve University, and he will be our first speaker. Following Professor Giannelli's remarks, we will turn to a Federal trial judge, one of those smart Federal trial judges that Professor Diamond spoke about a few moments ago, or in the parlance of our present discussion, one of the gatekeepers. He is Lawrence McKenna, United States District Judge from the Southern District of New York. I'm not going to say any more about their backgrounds, as those are set forth in your program materials. Our manner of proceeding is going to be somewhat more informal than the prior programs. Judge McKenna and I have warned Professor Giannelli that we're going to interject from time to time if he makes a comment from the ivory tower that we who live in what we regard as the real world find deserving of comment. I'm not sure the courtroom is really the real world, but in any event, it's our real world. And similarly, he has warned us that, when we make our observations, he's going to make observations from the Olympian heights of the academic ivory tower. So, with that in mind, let me turn first to Professor Giannelli for his observations on identifying what is expert testimony and who is an expert witness. Paul? Dr. Paul C. Giannelli: Good morning. Before we discuss the qualifications of experts, let me respond to some of the broad questions that this conference has raised. As an evidence and criminal procedure teacher, I have been interested in experts for over 20 years, and before that, I tried cases both as a prosecutor and a defense attorney. From my view, I do not see the clash between science and law that other people have mentioned. I think that the criminal justice system has to use science more. We have to obtain good science, and we have to introduce it into the process at an earlier stage. Second, if we want to improve the overall quality of expert testimony in criminal cases, we should focus on the crime laboratories--make sure that they are fully funded and provided with the resources to be run as scientific laboratories. Yesterday, Dr. Pollard talked about the National Academy of Sciences' two DNA reports--one was published in 1992, the other in 1996. DNA profiling is a tremendous success story in many ways: exonerating the innocent and convicting the guilty. But we should also look at how the criminal justice system responded to DNA profiling.
There are two aspects to "justice." There is "substantive" justice, i.e., has the innocent person been acquitted and the guilty person convicted? There is also "procedural" justice, i.e., was the accused represented by an attorney or was the jury composed of a fair cross-section of the community? Many times, procedural justice will affect substantive justice. Now, going back to the National Academy's two DNA reports: They were well done. But the second one was published after we had already executed the first prisoner based on DNA evidence (Timothy Spencer in Virginia). And neither report was available when DNA evidence was first introduced in 1986 and 1987. The problem is that there is no institutional way for courts to create scientific reports. In the silicone breast implant litigation, Judge Pointer faced the same problem, so he appointed a panel of independent experts. There are some problems with these types of panels, but it is better to have such a panel's scientific report in the trial system at an early stage. With DNA evidence we had to litigate the admissibility of every type of DNA evidence in 50 States over a 10-year period. Society would have been better off had the funds that were expended in that litigation been used to have an independent study look at DNA profiling at an earlier stage in the process. Third, I favor scientific evidence because we have encountered serious problems with other types of evidence--namely, eyewitness identifications and confessions. The Justice Department published a report on 28 convicts who had been exonerated by DNA profiling: Connors, Lundregan, Miller & McEwen, Convicted by Juries, Exonerated by Science: Case Studies in the Use of DNA Evidence to Establish Innocence After Trial (1996). The report confirmed what a lot of social science research had already documented on eyewitness identification. With a few exceptions, those cases involved eyewitness misidentifications. We know that there are significant problems with this type of evidence. That makes scientific evidence even more important. There are also problems with confessions. The first DNA case in Leicester, England, is illustrative. The suspect, who was later exonerated by DNA, didn't just walk into the police station and say "I'm guilty." The police pressured him into making incriminating statements. They bullied him into making a false confession. The police then went to Dr. Alec Jeffreys because the suspect would not confess to a second rape/murder, which had been committed in a strikingly similar manner; they wanted Jeffreys to prove that the suspect committed both crimes. Jeffreys came back and said, "Okay, you're right. The same person did both, but it wasn't this guy." See Giannelli, "The DNA Story: An Alternative View," 88 Journal of Criminal Law & Criminology 380 (1997). In addition, one of the cases in the Justice Department report involved a guilty plea and a confession. David Vasquez also confessed to a crime he did not commit. He was borderline mentally retarded, and he pled guilty to avoid the death penalty. The problem with the case is that not only was an innocent person in prison, but Timothy Spencer, a serial rapist and a brutal murderer, was on the loose. When we have a wrongful conviction, we have two injustices. First, an innocent person is serving prison time, and second, there is a guilty person on the streets--committing more crimes.
So, if we had had DNA evidence sooner, we hopefully would have been able to stop Timothy Spencer before he committed the other murders. Let me use the Spencer case to make another point. The case shows the power of DNA evidence but also some of the problems. The defense wanted to see the DNA expert's lab notes, but under Virginia law they had no right to see those lab notes. Spencer v. Commonwealth, 384 S.E.2d 785, 791 (Va. 1989). That's shocking. We're going to execute somebody based upon DNA evidence, and the lab notes are not available to the defense! We are hiding scientific evidence. If good scientific reports were required (reports that set forth the methodology and results), we would reduce misconduct by forensic scientists, we would reduce misconduct by prosecutors, we would reduce incompetence by defense attorneys, and we would eliminate many of the other problems set forth in my outline. See Giannelli, "The Abuse of Scientific Evidence in Criminal Cases: The Need for Independent Crime Laboratories," 4 Va. J. Soc. Policy & L. 439 (1997). In sum, I think we need more scientific evidence in the criminal process and at an earlier stage. Judge Vaughn R. Walker: Paul, is the problem that you outlined taken care of by Rule 16 of the Federal Rules of Criminal Procedure? Dr. Paul C. Giannelli: Rule 16 was amended several years ago to require a summary of the expert's testimony. I think this amendment is a major step forward. Many States, however, do not have such a provision. Also, the "expert summary" rule is sort of a compromise. There is a different section of Rule 16 that requires discovery of scientific reports. The problem is that those reports are not "scientific." We should require detailed scientific reports and open discovery. We should not permit trial by ambush. Important information should not be hidden. Why do we accept laboratory reports that identify marijuana without revealing the tests used to reach that conclusion? There is nothing "scientific" about that. Moreover, the last page of the report ought to explain to the jury what the test results mean. So, I think amended Rule 16 is an important improvement, but I don't think it goes far enough. See Giannelli, "Criminal Discovery, Scientific Evidence, and DNA," 44 Vanderbilt Law Review 791 (1991). Judge Vaughn R. Walker: Couldn't a Rule 16 mechanism be adopted by a State judge acting on his or her own even though State law doesn't explicitly provide for it? And secondly, what are you proposing? The kind of full-scale deposition of experts that we have on the civil side? Dr. Paul C. Giannelli: A State trial judge could in some jurisdictions, under inherent authority, order such disclosure. In some States, the judge could not. The best way to deal with the problem is by amendment to a jurisdiction's discovery rule. But no, I do not want to go as far as civil discovery. For those in the audience who are not lawyers: "Discovery" is the disclosure of information before trial to avoid trial by ambush. There is far more disclosure of expert testimony in civil cases than there is in criminal cases. You would have thought that this was backwards--and it is backwards. I would propose that every lab have published scientific protocols and that they be published at the public library, on the Internet, wherever. Moreover, the C.V.s of all laboratory analysts should be public information as well.
The crime laboratories should be writing comprehensive scientific reports that tell us the methodology employed, the results, and the limitations of the findings. At the end of the report there should be a summary for the jury. The purpose of the written word is to communicate, and the people who need to understand this evidence are nonscientists: the lawyers, the judges, and the jury. If you review the cases, you will find that important information is frequently kept out of scientific reports. What I am proposing is not anything that is new to crime labs. The American Society of Crime Laboratory Directors has taken the same position. I do not think that anybody in this room would disagree, on a scientific basis, that we ought to have comprehensive reports. And, I would allow discovery depositions of experts in certain criminal cases. I would give the judge that discretion. Judge Lawrence M. McKenna: Wouldn't a trial judge, though, if something didn't appear in the expert report or in the Rule 16 summary, simply exclude from evidence anything that wasn't in them? Dr. Paul C. Giannelli: The judge could, as a remedy, exclude evidence. When attorneys "play" discovery games, where information is hidden until the last minute, exclusion of evidence is a real deterrent. But I would prefer to prevent us from getting to that point. And I would like to shield the laboratories from the pressures inherent in the adversarial system, where the attorneys push the expert as far as they can. It is not just prosecutors. Defense attorneys are not any less culpable. But it's part of the adversary system. We should have procedures that protect the laboratories, protect the experts, and at the same time require a much higher quality of expert testimony. Judge Vaughn R. Walker: Let me ask you about the timing of Rule 16 discovery, a rule which uses one of those phrases that drive scientists crazy, like "a reasonable time before trial" or something like that. Prosecutors, no offense intended, but prosecutors tend to try to disclose things they have on the Friday before you begin selecting the jury on Monday, and if you're dealing with complex scientific evidence, for instance DNA, that doesn't help much. I know some judges have pushed it back. I know Judge Duffy in the World Trade Center bombing case set a date, if my recollection is correct, 2 months prior to trial for Rule 16 expert disclosure, but that's not binding on anybody. Dr. Paul C. Giannelli: I think every lab report should be self-explanatory so that another expert could review it to ensure that it makes scientific sense. The report should be disclosed as soon as a defense counsel is appointed; all the scientific reports should be disclosed to the defense. I simply would not allow the adversarial system to intrude into the pre-trial investigation as much as it now does. Everything should be discoverable with scientific evidence. That's the only way the lawyers can deal with scientific evidence. The good lawyers can deal with scientific evidence very well, but then there are the rest of us--the average attorney trying these cases, such as an overloaded public defender. The system ought not be designed for the best attorneys, but for the average person. Judge Vaughn R. Walker: Well, Paul, if this is the subject, how you treat scientific testimony, how do we identify it? What is scientific testimony or scientific evidence, and who are the experts that we should permit to testify as experts in criminal cases? Dr. Paul C.
Giannelli: This is what is called in trial practice a subtle hint by the judge (who has well-deserved discretion) for me to move on. Nobody on this panel disagrees with Kumho, I can assure you! Let me mention some of the qualifications requirements. There are some rules of thumb, what lawyers call black-letter rules of evidence. The Federal Rules of Evidence definitely favor the use of expert testimony. The drafters intentionally did that. First, you do not need an academic degree to be an expert. See State v. Mack, 653 N.E.2d 329 (Ohio 1995) ("Qualifications which may satisfy the requirements of Evid. R. 702 are multitudinous. . .. [T]here is no 'degree' requirement, per se. Professional experience and training in a particular field may be sufficient to qualify one as an expert."). Second, you do not need to be an outstanding practitioner. See United States v. Barker, 553 F.2d 1013, 1024 (6th Cir. 1977) ("An expert need not have certificates of training, nor memberships in professional organizations . . .. Comparisons between his professional stature and the stature of witnesses for an opposing party may be made by the jury, if it becomes necessary to decide which of two conflicting opinions to believe. But the only question for the trial judge who must decide whether or not to allow the jury to consider a proffered expert's opinions is, 'whether his knowledge of the subject matter is such that his opinion will most likely assist the trier of fact in arriving at the truth.'"). Third, the experts for both sides do not have to have the same background. See United States v. Madoch, 935 F. Supp. 965, 972 (N.D. Ill. 1996) ("[O]ne expert need not hold the exact same set of qualifications to rebut another expert's testimony .... This Court need not analyze, as Defendant contends it should, whether a psychologist or psychiatrist is more qualified to testify as to the psychological condition of a patient at the time of the offense."). Finally, there are cases in which criminals testify as experts on criminal conduct. See United States v. Williams, 81 F.3d 1434, 1441 (7th Cir. 1996) ("There was no pretense that he was impartial, or a member of a learned profession. Neither condition is required to qualify a person as an expert witness under the current rules of evidence . . .. There is not even a paradox in the suggestion that the biggest experts on crime are, often, criminals."), cert. denied, 118 S. Ct. 723 (1998). Judge Vaughn R. Walker: Who better to know? Dr. Paul C. Giannelli: That's right. My favorite case is United States v. Johnson, 575 F.2d 1347, 1360-61 (5th Cir. 1978), cert. denied, 440 U.S. 907 (1979), where an experienced marijuana smoker testified that the marijuana came from Colombia. I always think of the commercial about Colombian coffee with Juan Valdez drinking the coffee and saying "Ah, Colombian." The expert in Johnson could say, "Ah, Colombian." "Licensing" of an expert is generally not a requirement. Many times a license should be a minimum requirement, but sometimes there are people who don't have a license and are better qualified than those who do. But see People v. West, 264 Ill. App. 3d 176, 636 N.E.2d 1239 (1994) (witness not licensed to investigate fires under a State statute was not qualified to testify about the cause of a fire in an arson prosecution). "Certification" of experts is important because of the number of "junk science" experts that testify in the criminal courts. But look at a voiceprint case, United States v. Williams, 583 F.2d 1194, 1198 (2d Cir.
1978), cert. denied, 439 U.S. 1117 (1979), which was subsequently cited in Daubert, 509 U.S. 579 (1993). It was cited for the "potential error rate" and for "the existence and maintenance of standards." The problem with the Supreme Court's citation of Williams is that a National Academy of Sciences report on voiceprints, published in 1979, scrutinized the organization promulgating the standards. This group was composed of law enforcement officers who were trained to do voice identifications. Only one person in the group, Dr. Tosi, who conducted the initial voiceprint experiments at Michigan State University, was a scientist. You have to go beyond the title. How do they certify people? This was not a scientific group, as the National Academy of Sciences report later pointed out. When I was preparing this lecture, Joe Cecil put me on to an article in The Wall Street Journal, February 8th. The title is "The Making of an Expert Witness: It's in the Credentials." The article discusses the American College of Forensic Examiners, which makes $2.2 million a year. The roots of this organization can apparently be traced to the Daubert decision, which was intended to tighten the standards of expert testimony. This organization appears to be a "certification" mill. It costs $350 to get certified. In fact, I want you to write this number down. It's 1-800-4A-EXPERT. Getting back to the outline--there is also a difference between lay testimony and expert testimony. Proposed Federal Rule 701 is intended to strengthen that distinction. See Fed. R. Evid. 701 (1998 proposed amendment, adding a third requirement--"(c) not based on scientific, technical or other specialized knowledge."). The criminal discovery rule, Rule 16, applies only to experts. You should not be able to bypass the rule simply by saying, "This witness is a lay witness, therefore no discovery" and then have the witness essentially testify on expert matters. See United States v. Figueroa-Lopez, 125 F.3d 1241, 1246 (9th Cir. 1997) (prosecution should not "subvert" the expert discovery rule by offering expert opinion on drug trafficking as lay opinion testimony), cert. denied, 1998 U.S. Lexis 3300. There are also a number of "false credentials" and other misconduct cases. Twenty years ago I would not have expected to see this kind of misconduct. Again, you can avoid this problem if experts are required to disclose their credentials prior to trial. I do not see any reason why that should not be automatic in criminal cases. See Doepel v. United States, 434 A.2d 449, 460 (D.C. App.) (serologist testified that he had a master's degree in science "whereas in fact he never attained a graduate degree"), cert. denied, 454 U.S. 1037 (1981); Commonwealth v. Mount, 435 Pa. 419, 422, 257 A.2d 578, 579 (1969) (death penalty vacated when it was discovered that a prosecution expert, who "had testified in many cases," had lied about her professional qualifications: "she had never fulfilled the educational requirements for a laboratory technician."); Starrs, "Mountebanks Among Forensic Scientists," in 2 Forensic Science Handbook 1, 7, 20-29 (R. Saferstein ed. 1988); Saks, "Prevalence and Impact of Ethical Problems in Forensic Science," 34 J. Forensic Sci. 772 (1989) (listing other cases). If you look at other qualifications cases, some are outrageous. In the Wisconsin Law Review study on drug testing procedures, we have an expert who had been testifying for 43 years and did not have a high school degree. Forty-three years, 2,500 court appearances.
See Stein, Laessig & Indriksons, "An Evaluation of Drug Testing Procedures Used by Forensic Laboratories and the Qualifications of Their Analysts," 1973 Wis. L. Rev. 727, 728. Some people might ask, "How could this happen, what's wrong with this lab?" But I teach lawyers, and I ask, "Where were the prosecutors? How could they let this happen? And where were the defense attorneys, and how could this happen for such a long period of time?" The criminal justice system is driven by lawyers; we are responsible for both procedural and substantive justice. The neutron activation cases are also illustrative. In one case a city crime lab examiner testified about neutron activation analysis. I don't know of any city crime lab that has a nuclear reactor attached to it. See Ward v. State, 427 S.W.2d 876 (Tex. Crim. App. 1968); Comment, "The Evidentiary Uses of Neutron Activation Analysis," 59 Cal. L. Rev. 997, 1009, 1036 n.216 (1971) (questioning the qualifications of the expert in Ward as well as his conclusions). And there are the hypnosis cases--oh, this is just too much fun; I couldn't resist. In Wyoming, they know how to write dissenting opinions. In Gee v. State, 662 P.2d 103 (Wyo. 1983), the majority opinion says, "Well, maybe this expert didn't have great credentials, but we're going to let him testify anyway." This is a criminal case. The dissent said that a hobo passing through town or a derelict in a county jail could hypnotize a witness and be qualified in Wyoming. The dissent also said that there is a professor at "Croaker College" in California who trains frogs. Why are we not surprised that there's a frog college in California? If I were going to open a frog college, I'd come to California too. Judge Vaughn R. Walker: Now, be careful, Paul. Dr. Paul C. Giannelli: The judge is exercising discretion here. This professor hypnotizes frogs, and the dissenting judge says this professor would be "overqualified" in a Wyoming court. I would say this judge went overboard. But I misjudged him. Look at the next case. The expert is a janitor who took a 32-hour correspondence course on hypnosis. He's a maintenance man at Pacific Power and Light, and they let him testify. So, I think the dissenting judge was correct. See Haselhuhn v. State, 727 P.2d 280 (Wyo. 1986), cert. denied, 479 U.S. 1098 (1987). The more typical problem is experts who stray beyond their area of competence. This happens quite frequently. In State v. Adams, 481 A.2d 718, 727-28 (R.I. 1984), a pathologist testified about bite marks. Actually, I think a good forensic pathologist can identify bite marks, but the good forensic pathologist will also have access to a forensic dentist to confirm the identification. Then we have the "technician" problem. People v. King, 72 Cal. Rptr. 478, 491 (Cal. App. 1968) (courts must "differentiate between ability to operate an instrument or perform a test and the ability to make an interpretation drawn from use of the instrument.") is a California case--an excellent California case. Judge Vaughn R. Walker: I didn't have anything to do with it. Dr. Paul C. Giannelli: The courts must distinguish between a technician and a scientist. The classic example is the Breathalyzer. The police can be trained to operate the machine, but does the police officer have the expertise to interpret the results? See French v. State, 484 S.W.2d 716, 719 (Tex. Crim. App. 1972) ("an officer may administer a breath test even though he is not otherwise qualified to interpret the results").
Let me also say that I do not use the word "technician" in a pejorative sense. We have fingerprint experts who are essentially technicians but who are also highly qualified and provide us with important evidence. But sometimes it is important to make the scientist-technician distinction. Then there is the "bias" problem. We need to build firewalls between crime labs and the police and prosecutors. We can go back to 19th-century cases. A hundred years ago, a Minnesota court wrote: "There is hardly anything, not palpably absurd on its face, that cannot now be proved by some so-called 'expert.'" Keegan v. Minneapolis & St. Louis R.R. Co., 76 Minn. 90, 95, 78 N.W. 965, 966 (1899). Now jump ahead to the Fifth Circuit in 1986: "[E]xperts whose opinions are available to the highest bidder have no place testifying in a court of law." In re Air Crash Disaster at New Orleans, 795 F.2d 1230, 1234 (5th Cir. 1986) (it "is time to take hold of expert testimony in federal trials"). Throughout this conference there has been an undercurrent--that we hire experts, we pay them, and they will say anything to collect their fee. With experts receiving contingent fees, the bias is obvious. But there are experts whose careers are "contingent fees." If you are working as a doctor for an insurance company and you start agreeing with plaintiffs, you are not going to get any more referrals. The insurance company is going to use some other doctor. The doctor knows what the insurance company expects in this case, and the doctor wants to be hired next year, next week. Judge Lawrence M. McKenna: Paul, could I ask you a question about that? Dr. Paul C. Giannelli: Sure. Judge Lawrence M. McKenna: When the bias of the sort of witness you've been describing--and I think every judge has seen plenty of it--is so evident, and so evident from the facts, is there anything the trial judge can do? Can the trial judge say you have simply gone outside the pale of being an expert because you are so biased and I'm not going to let you testify? Dr. Paul C. Giannelli: I don't have a problem with that. As a judge, what do you do? Judge Lawrence M. McKenna: I've been tempted, but I've never had the nerve to do it. Participant: [Inaudible.] Judge Vaughn R. Walker: I think there is some law on that. And I must say, I was going to get some free advice, if I could, from Paul, on this very subject today and see if he would back me up. Because I excluded a so-called police expert who had testified 350 times for the police and had never testified against the police in any case. And on that basis and that basis alone, basically, I excluded him as a witness. Am I on shaky ground, Paul? Or am I going to be affirmed if that decision is appealed? Dr. Paul C. Giannelli: I think there is very little case law. In a criminal case, when you exclude a defense expert because he testifies only for the defense, you may run into constitutional problems--the due process right to present a defense. Most of the fingerprint experts work for the government, so there is an availability problem. I think these hired experts should be excluded from the process--the sooner, the better. So, when you get overturned, Judge, you can cite me. Personally, I like Daubert. I like Kumho. Critics are upset about the open-endedness of those cases, but somebody once said that it is better to be "generally" right than "precisely" wrong. Judge Vaughn R. Walker: All right. Well, it will be too late, then.
One area that has concerned me a good deal, in criminal cases particularly, is something that you touched on just a moment ago, the confrontation problem--the issue of police modus operandi experts, which you discussed in the later portion of your outline. The use by the prosecution of witnesses who testify as to the method of operation of criminal enterprises or syndicates or gangs or what have you, in an attempt to bring in the most generalized kind of evidence, seems to me to pose a very serious confrontation problem, and I'd like to see what your guidance is for those of us who have to make these decisions and deal with those kinds of proffers of evidence. Dr. Paul C. Giannelli: There has been a trend, mostly in the Federal courts, to use police officers as "modus operandi" experts. Actually, some cases are coming out of California with experts testifying about the modus operandi of gangs. But the great body of law is in the Federal system. It makes sense to use this type of testimony when we're talking about a "code" word that the jury does not understand or about the operations of a clandestine PCP lab. There is a proper use of this type of evidence. See United States v. Griffith, 118 F.3d 318, 321 (5th Cir. 1997) ("we now have, by one count, 223 terms for marijuana"); United States v. Anderson, 61 F.3d 1290, 1297-98 (7th Cir.) (PCP laboratory), cert. denied, 516 U.S. 1000 (1995). But I think the cases have gone too far. There is also the problem of "mirroring hypotheticals," where a police officer testifies to factual matters and then also testifies to expert matters. The same person can be both a fact witness and an expert witness. The officer testifies to the facts and then about how drug operations work. Finally, the same witness is given a hypothetical question based upon the facts in the case. The police expert, in effect, is saying that the accused's conduct is "criminal." This goes too far. Some cases involve "duct tape," which is used to incapacitate people. See United States v. Moore, 104 F.3d 377, 384 (D.C. Cir. 1997) ("duct tape such as that found under the hood of Moore's car is often used by people in the drug world to bind hands, legs, and mouths of people who are either being robbed in the drug world or who need to be maintained"). Judge Vaughn R. Walker: Therefore, someone who possesses duct tape is a gang member. Dr. Paul C. Giannelli: This type of expertise is not very helpful. I don't think we need it. Drug dealers use beepers. They use weapons. Juries know this. You have a bank robbery. Armed robbers tend to use weapons. We don't need an expert here. It is "showcase" testimony. Sometimes it's harmless. But many times the expert is used to bolster the credibility of the fact witness, who sometimes is the same person. The jury may think that this police expert, the DEA agent with 20 years' experience, knows something that the jury doesn't know, maybe some inadmissible hearsay, when that's not the case. This is problematic. See United States v. Cruz, 981 F.2d 659, 662 (2d Cir.
1992) ("That drug traffickers may seek to conceal their identities by using intermediaries would seem evident to the average juror from movies, television crime dramas, and news stories"; "[T]he credibility of a fact witness may not be bolstered by arguing that the witness's version of events is consistent with an expert's description of patterns of criminal conduct, at least where the witness's version is not attacked as improbable or ambiguous evidence of such conduct."). Judge Vaughn R. Walker: Larry, what are your views on this? Judge Lawrence M. McKenna: Well, I was going to give Paul an example of a case that seems to me to go beyond even what he described. It was a pretty high-profile case on the right coast a few years ago. It's actually called-- Judge Vaughn R. Walker: The right rather than the correct coast? Judge Lawrence M. McKenna: The right rather than the correct coast. United States v. Locasio, but most people will have learned about it on TV from the name of the other defendant who went to trial, who was John Gotti, who ended up getting a life sentence in this case. And trying to compress it, it was a RICO prosecution, but the predicate acts were, in large measure, murders. Among the witnesses was another fellow who has appeared on television pretty frequently, Salatore Gravano, who apparently was known to his friends, if he has any left, as Sammy the Bull, who testified for the government under a cooperation agreement. In any event, an FBI agent was qualified as an expert and then testified to his opinions, which included opinions regarding how organized crime--and this was the Gambino crime family--operated, and one of the things he said was that, in the crime families, including the Gambino family, a boss must approve all illegal activity and especially murders, and he then testified that Mr. Gotti, sitting over there, is the boss of this family, and he then admitted that the sources of the information were not necessarily before the court. This seems to me very obviously to present a confrontation clause problem, and it's one that hasn't really been addressed by the Supreme Court. Kumho and Joiner and Daubert, the trilogy we spoke about yesterday, are all civil cases, and they have their application in the criminal context, but they, of course, don't deal with the confrontation clause. So, if you think about this testimony, yes, it is based upon the agent's experience, but that experience was in large part debriefing informants and listening to wiretap conversations. So that once the fact of a murder by somebody identified by the agent as a member of the Gambino family was proved, the testimony tells the jury that Gotti must be responsible for that murder. That, of course, is ultimate issue testimony, but Rule 704 says that's okay. It's also the kind of testimony that Professor Berger identifies in the Federal Judicial Center Manual on Expert Evidence as evidence which is completely within the control of the person proffering the evidence. Only law enforcement officers can gain the experience that this particular agent had, and there may be situations like Mr. Juan Valdez and his marijuana where you can get somebody to testify, but frankly, I think that's pretty unlikely in an organized crime case. I think anybody who's going to testify in that situation is going to want immunity, and while I've had defense lawyers ask me to direct the government to give immunity to defense witnesses, the law simply doesn't permit that. 
Now, this case was affirmed in the Second Circuit, and their one-sentence answer to the confrontation clause problem was, "You can cross-examine the expert." And that, at least in the Court of Appeals' view, dealt with the argument that this agent was really being used as a conduit to put tons of hearsay before the jury. You can construct--but I'm not going to do it for you today, because it would take some time--based upon cases like Idaho v. Wright and Ohio v. Roberts, a very strong argument that this is impermissible hearsay coming before the jury, and I think you cannot say it is firmly rooted hearsay, because allowing experts to testify to hearsay is really an innovation of the Federal Rules of Evidence. Under standard Supreme Court confrontation clause law, if you're going to get hearsay in and it's not firmly rooted, you're supposed to show that it's accompanied by particularized guarantees of trustworthiness, and that seems to me to be a much higher fence to jump over than what Daubert and Kumho tell us you need to get expert testimony into evidence. There's only one Supreme Court case that I'm aware of--it's always dangerous to say things like there's only one case when you're surrounded by law professors--but there's only one that I'm aware of that deals with this, and it's sort of a bizarre case that came to the Supreme Court on a habeas from Delaware, I believe--yes, Delaware. Some fellow was convicted of murdering his live-in girlfriend by strangulation. An FBI agent testified as an expert that one of two hairs, which were similar to those of the victim and which were found on a cat leash in the apartment where the petitioner and the victim lived, had been forcibly removed. He said that was his conclusion, but he didn't remember how he reached that conclusion. And he said, "I know there are a couple of theories about this"--I think he mentioned three--"but I don't know which one fits this case, I don't know what theory I used," and he had no notes. And the petitioner argued, "Hey, how do you cross-examine an expert who doesn't know why he reached a conclusion?" The Supreme Court said that's perfectly okay, because you can point out through cross-examination that this fellow doesn't know why he reached his conclusion. Now, that doesn't really reach the police MO testimony, because this case doesn't deal with hearsay. Now, as far as this MO testimony is concerned, as I said, no case has reached the Supreme Court. I do believe that the 12 Federal circuits that deal with criminal cases, however, are probably unanimous that this is admissible. So, I think the only comfort defense counsel have in this particular area is probably under Rule 403, and I think-- Judge Vaughn R. Walker: 403 is? Judge Lawrence M. McKenna: Rule 403 requires a trial judge to balance evidence, to decide whether its probative value is substantially outweighed by the danger of unfair prejudice it may cause. The law in the Second Circuit, but not the others, is pretty good on that. I was going to tell you more about this, but I don't have time. But Judge Newman, back in the '80s, wrote a concurring opinion, and I should tell you, Judge Newman is not only a scholarly judge, but he spent a lot of time as a Federal district judge in Connecticut, so he knows what goes on in courtrooms and what the impacts of certain kinds of evidence on juries may be.
"The very breadth of the discretion afforded trial judges in admitting such an opinion"--and he, again, is talking about police MO testimony, in his particular situation it was a drug case--"should cause them to give the matter more rather than less scrutiny. A trial judge should not routinely admit opinions of that sort at issue here and should weigh carefully the risk of prejudice." And of course, that, once again, is just saying that the trial judge has a lot of discretion, so it's really up to counsel in any individual case to try to be persuasive if some witness is going too far, and maybe if you want to kind of preserve a confrontation clause objection, and who knows, you may be the person to get to the Supreme Court with it. We've already talked about Rule 16, which I was going to talk about at some more length. In her piece on criminal cases in the Federal Judicial Center Manual on Scientific Evidence, Professor Berger raises the issue of the typical defendant's inability to afford experts. I personally think that, under the Criminal Justice Act, obviously the district court can give appointed counsel whatever they need to defend the case. There are provisions in there--and I don't have the citation with me--but I believe even in a case where a family has scraped together enough money to hire defense counsel rather than having a public defender appointed--and parenthetically, that's usually a mistake; the public defenders are much better than the public perceives them to be--I think there is authority in that act for the judge, even with retained counsel, to authorize funds to hire an expert. So that if you were to get into a case with DNA, where I assume experts are pretty expensive, you would have the authority to give the defendant money to do that, just as you have the authority to give a defendant daily copy if the government's getting daily copy to kind of keep an even playing field. So, those are the protections I see. I think there is something here that needs to get to the Supreme Court, but, of course, if the circuits are all in agreement, that's not the kind of thing the Supreme Court usually troubles itself to look at. But there is something there that should get there with this police MO testimony, and maybe some defense counsel here will be the first person to get it there. Judge Vaughn R. Walker: What other problems do you foresee in this area? How about chain-of-custody testimony? Judge Lawrence M. McKenna: Chain-of-custody testimony is important. Frankly, in my experience--and I'm dealing mostly with FBI cases--the chain of custody has been very well kept and people know how to testify about it. The other thing I might say is, defense counsel usually don't want to spend too much time on chain of custody, because I think they perceive the fact that if they spend a lot of time challenging chain of custody, the jury's going to perceive that there's not anything else to their case and they're just making noise and giving them something to talk about. But obviously, it's going to be very important, especially when you're getting into areas like DNA, which I've never had to deal with in any particular case, but when you're dealing with blood samples or hair samples or saliva samples or whatever may come into a DNA case, chain of custody is extremely important. Yesterday, just talking privately with Dr. 
Lederberg, he said the problem we have to watch out for in the DNA cases is the criminal who deliberately plants some false DNA at a crime scene to confuse the DNA technologists who are going to come in and pick up the DNA. How we deal with that, I don't know. I mean, I suppose that's a question for the DNA scientists to deal with. I don't know how you'd handle that. I just don't know enough. Judge Vaughn R. Walker: So, you've got to wonder if somebody comes up to you and asks for some of your DNA. Judge Lawrence M. McKenna: You've got to watch for it, yes. It's like the fellow who wants you to hold his package at the airport. Be suspicious if somebody wants just a little clipping of your hair. Judge Vaughn R. Walker: One of the things we haven't talked very much about is the so-called technician, as opposed to the scientific expert: a handwriting expert or a handwriting analyst, or someone whose background and knowledge is not in a scientific field but is, rather, in an applied art or something of less scientific character than many of the things that we have been discussing, and certainly DNA is among them. What are the cautionary things that judges ought to have in mind, Paul, when dealing with issues of technical experts? Dr. Paul C. Giannelli: You have to appreciate that historically crime labs started in police departments: the Chicago crime lab after the St. Valentine's Day massacre, the Berkeley police lab in California, and then the FBI laboratory. The examiners were coming out of police backgrounds, and that made sense when you are talking about fingerprints, questioned document examinations, and firearms identifications. There were no schools. You took agents who had been investigators and trained them--on-the-job training for 2 or 3 years before they were allowed to testify as experts. But when you start talking about neutron activation analysis, DNA, and the Oklahoma City bombing, you need experts who have academic degrees. If you understand this history, it's not surprising that fingerprints, questioned documents, and firearms identification never had to satisfy the rigorous admissibility standards that are now applicable after Daubert and the DNA litigation. The question is: Are we going to put the resources into the labs and do the kind of scientific work that is required? Actually, we have better research on eyewitness identifications and psychology than we have on fingerprints or firearms identification. Judge McKenna decided the questioned document case, United States v. Starzecpyzel, 880 F. Supp. 1027 (S.D.N.Y. 1995). I think it had a positive effect. Because of that case, there is now funding for research. How do prosecutors, defense attorneys, and judges deal with this evidence until the research is completed? Very few courts have kept out questioned document examination, but the trial judge did so in the Oklahoma City bombing case. Hair comparison evidence is another example. In the Justice Department report on convicts exonerated by DNA, several cases involved hair evidence. Experts have gone far beyond what hair evidence can show. In the Williamson case we had an expert overstating the significance of the evidence, and a prosecutor, as an advocate, pushing the evidence as far as he could. They should have known better. Williamson v. Reynolds, 904 F. Supp. 1529, 1558 (E.D. Okla. 1995), aff'd on other grounds sub nom. Williamson v. Ward, 110 F.3d 1508, 1512 (10th Cir. 1997) (the due process, not Daubert, standard applies in habeas proceedings).
I think that the limitations of hair comparison evidence should be pointed out in the lab report. The report should explain what a "match" means. You can explain that in common English. You can avoid a lot of problems with comprehensible laboratory reports. Once this information is in the lab report, based upon a protocol adopted by that laboratory, the lab expert is protected from the prosecution pushing the expert too far during the heat of the trial. Judge Vaughn R. Walker: As Paul points out, Larry, you've had some firsthand experience in dealing with this question. Would you share your observations? Judge Lawrence M. McKenna: Well, I had a famous or infamous case, it must be 4, 5, 6 years ago now, where the defense challenged under Daubert--Daubert was pretty new at the time--the admission of handwriting identification testimony, and they said it wasn't scientific. I might point out that defense counsel at the time was Barry Scheck, who spends his life dealing with DNA, and he was all full of theories and so forth, and I, at that point, was induced to hold a hearing on this. I'm not too sure, if I were faced with the question again, that I would, but I did, and to the best of my recollection, my conclusion was that, under Rule 702, this was admissible evidence but that it wasn't scientific, and the fact that it wasn't scientific or could not meet the four-prong Daubert test didn't mean it was inadmissible. I think I took some measures to try to ensure that it got across to the jury that whatever trappings the handwriting identification community might put on their work, such as five levels of certainty and so forth and so on, might have made it look like science but didn't really make it science. My concern really was just that the jury know what they're getting: that it was admissible evidence, but please, this is not science as, for instance, DNA testimony might be science. Judge Vaughn R. Walker: Why can't you depend on the lawyers to bring that out on cross-examination? Judge Lawrence M. McKenna: That is the way I would think today. I think, today, I would have skipped the hearing, although I know now from Kumho that I might actually have posed some of the Daubert questions, which Kumho says I can. I should add one thing: a lot of what I really relied on was that, a couple of years before Daubert, I had had a document examiner--who was probably the most widely respected one in my part of the world, who testifies regularly, has a government background, but is in private practice now--testify on direct that handwriting identification is a science and an art. In other words, he admitted that this is not really a strict application of scientific principles to the facts; there is an art to it, and I agree with him. So, I don't really think I was saying something that at least one respected member of the handwriting identification community didn't agree with. And I think the principle is right: whatever a jury gets by way of scientific testimony, all the way down to other specialized testimony--and I've gotten to the level of, believe it or not, in a Jones Act case, having a chief of a steward's department on a vessel telling me how people who make beds are supposed to make beds--I have no problem with that. That's slightly specialized knowledge, and I guess they do it differently at sea.
I think what's valid about Starzecpyzel is that the jury should know what they're getting, and it stops there. I don't mind admitting a lot of things if the jury knows where the evidence fits on the scale from making beds to DNA testimony. Judge Vaughn R. Walker: Well, I hope our discussion has been helpful to you. We'd be happy to entertain any questions that this colloquy may have precipitated. Are there any members of the audience who'd like to throw a question in our direction rather than for us to be throwing them back and forth? Yes. If you could come over to the microphone, so we can hear you. Good morning. Ms. Kerstin Gleim: Good morning. I'm Kerstin Gleim. I'm a forensic scientist from Seattle, and I have a question for Mr. Giannelli. If you want to eliminate hired experts, where does the defense get its advice on physical evidence or other forensic questions? Dr. Paul C. Giannelli: That is a serious problem. There are differences between civil cases and criminal cases. In some jurisdictions, 80 percent of the criminal cases involve indigents. Sometimes it is difficult getting attorneys in capital cases, much less defense experts. The judge talked about the Federal statute, the Criminal Justice Act of 1964. You have more resources in the Federal system than in most States. The defense has a due process right to an expert under Ake v. Oklahoma. I came across a State court case that says a defense expert in DNA cases is not automatic. The defense must make a special showing, even if the government uses DNA evidence. The court went on to write about all the CLE conferences and all the books on DNA, reasoning that those books are available to lawyers, who can use cross-examination to deal with DNA. But public defenders usually have a huge caseload, and I've read a lot of those CLE books, and I don't understand all the aspects of DNA evidence. That attitude in the appellate case makes no sense. The National Academy of Sciences 1993 report said that there ought to be a defense DNA expert. There's an inequality, because the prosecution has the crime lab available. On the other hand, there can be abuses with a request for defense expert funds. There are no financial limitations on a defense attorney. All the incentives favor making as many requests as you can. There is an Ohio case, a capital case, in which the defense asked for 15 experts. They got three from the court. Why would they ask for so many? There's no reason not to. The defense may use the motion to leverage the prosecution into a plea deal. The trial judge is trying to figure out if this is a valid request or not. The trial judge is in the awkward position of worrying about the treasury. Again, I think that comprehensive and intelligible scientific lab reports are the first step, because the defense attorney can take the report to a university. A university professor might be willing to at least review the lab report and say, "They did everything right." There are experts who voluntarily do pro bono work. They may not want to be a witness, but they might look at a report and see if it's bad--and that's what happened in the Castro case with the DNA. Moreover, if the protocols are available, other experts can review them to see if they are compatible with accepted scientific standards. The experts who write lab reports ought to be accountable; they ought to sign the reports. The Inspector General report cites instances where lab reports were changed without informing the primary examiner. This may raise a confrontation problem.
The witness at trial may not have actually done the work, and the defense might not know this. If the report is signed and made public, the forensic science community can review the work and, if it is shoddy, help secure a defense expert. Judge Vaughn R. Walker: Larry, do you want to comment on this question? Judge Lawrence M. McKenna: Yes. I think the Federal Judicial Center has compiled at least some sources, or some people who know of sources, that you can go to and contact if you're looking for experts in various fields. I don't know the details of that, but there are places that will find experts or help you find experts and tell you where to go. Judge Vaughn R. Walker: You mean other than this gentleman who was featured in The Wall Street Journal? Judge Lawrence M. McKenna: Yes. That's sort of--you can create an expert that way. I've seen some cases where appellate courts--naturally, I think mainly in terms of the Second Circuit--have been sensitive about trial judges not allowing defendants to put on expert evidence. One was kind of odd, and it's worth giving you the facts. They recently reversed a conviction where the trial judge didn't admit the expert testimony of a commodities analyst. This was a drug case, I should say, where the defendant wanted to put on an expert commodities analyst to testify to the economic motivations for smuggling gold dust into the United States from Nigeria, and that was offered in support of a defense that the defendant really thought that the 98 condoms he had swallowed were filled not with heroin but with gold dust. I suppose my immediate reaction, on being told that that's what you're going to prove, is: you've got to be kidding. But the judge who excluded that evidence was reversed. So, there is some sensitivity on the part of the appellate courts to being careful about what they prevent defendants from doing. That's probably going to become important as we begin to get death penalties in the Federal system now. I haven't had one, but I've talked to some of my colleagues who have, and the tendency is to be more generous than not with experts and various things of that sort. Part of it is the stakes involved, but part of it is just a feeling that these things should be done correctly, and that if there's going to be a conviction, it should be sustainable in the appellate courts. So, I think there will be some leeway in that direction. Judge Vaughn R. Walker: Dean Grady, do you have a question? Dr. Mark F. Grady: Yes. I was just wondering whether Daubert created a distinction between expert testimony in its classic sense--for instance, I suppose DNA identification might be a classic type of expert testimony--as compared to an expert's evidence about custom, for instance. I think maybe the steward testified about how [inaudible] might figure--I was just speculating--might figure into [inaudible] for instance. Judge Lawrence M. McKenna: Yes. It was a Jones Act case, yes. Dr. Mark F. Grady: Right. So, it seems to me that the DNA evidence [inaudible], but in the situation where the expert is simply testifying about custom, you wouldn't have to think [inaudible] science in order to think that that evidence might be relevant in a court action. [Inaudible] the Daubert principle only applies to some types of cases in which experts testify? Judge Lawrence M.
McKenna: Kumho says that, when you get something that's not scientific, you should look at whatever it may happen to be in a given case and then decide whether or not the four Daubert factors might apply. In the case of the fellow making beds, it didn't seem to me I needed to go into, for instance, publication and peer review--and besides, there was no objection in that case. In fact, both sides had an expert on the subject. By the way, they presented it very neatly, in the way we saw yesterday afternoon, in videotaped depositions. And--this is sort of out of left field--that was an interesting presentation. I tried a case about a year ago where I think the government's use of very beautifully done videotape depositions probably hurt its case. It was a criminal case that resulted in an acquittal, because the videotaped depositions were depositions of the alleged victim of the crime and their agents, and I think the jury's perception--I don't know this for a fact, because I don't personally interview jurors--but I think the jury's perception was that the fact that the victim had testified by videotape, rather than being sufficiently interested to come to New York City and testify in person, did not help the government's case. So, there can be downsides to using that kind of technology. Judge Vaughn R. Walker: Yes, sir. Could you state your name? Mr. Bert Black: I'm Bert Black. Judge Vaughn R. Walker: Yes, sir, Mr. Black. Mr. Bert Black: A couple of years ago, I had a situation, same lawyer, same expert [inaudible]. On one side they were the plaintiff; on the other side they were defendants. [Inaudible] and it occurred to me that one solution to the problem [inaudible] just to establish a registry of every [inaudible]. You could even charge a $10 expert registration fee [inaudible] subject matter [inaudible]. Judge Vaughn R. Walker: That actually is pretty well taken care of on the civil side, with the disclosures that have to be made for expert testimony, and those same disclosures could be applied under Rule 16 by an order of the judge, I should think. Isn't that right, Larry? Judge Lawrence M. McKenna: They could, and I might add that I'm aware that a national organization of insurance companies--I forget the name of the body--has already started the kind of database that you're talking about, covering mainly physicians who routinely testify in personal injury cases, but other experts as well. Now, whether that's accessible to anybody outside of the insurance business, I doubt, but I know it's there. Mr. Bert Black: [Inaudible.] Judge Vaughn R. Walker: All right. I think we have time for one more question, if there is one. Yes, sir. Mr. Rich Walley: I'm Rich Walley, and I'm a forensic scientist in San Diego, and I routinely get appointed on State defense cases to assist defense counsel and also [inaudible]. Participant: Could you use the microphone, please? Mr. Rich Walley: Certainly. I wanted to make one statement and then pose a question to the judges. The discovery process doesn't work well for forensic scientists, for a variety of reasons. In the Federal system, at least in this jurisdiction, it doesn't work at all. We don't get discovery until the day of trial, so we have little idea of how to prepare the defense attorneys, and that doesn't seem to be improving.
But on the State side, in California, discovery is pretty forthright and complete. We get all the reports, and an expert can advise the attorney what those reports mean--all the instrumental printouts, the handwritten notes, whether the examiner used an assistant who will have notes--and we can get that material. But I just learned of Rule 16 today. I don't see it working, and it really impacts the forensic scientist, in that 80 percent of our time is frequently spent in the discovery process, getting up to the point of maybe reexamining the evidence but not examining it. Maybe 20 percent of my time is actually spent analyzing evidence. The other 80 percent is administrative and assisting attorneys in the discovery process. My next comment is actually a question. On the bias issue, judges: if you can't do anything to eliminate an expert before the case gets to the jury, don't you think the jurors pick up on bias pretty readily? Judge Vaughn R. Walker: Larry? Judge Lawrence M. McKenna: The answer is yes. I think, generally, juries do pick up on that. I won't go through a lot of particular cases I remember, but I think juries do pick up on bias, because cross-examination normally brings out the fact that a given doctor hasn't had a patient in the last 17 years but has testified 46 times, and juries know what that means; you don't really have to tell them. Judge Vaughn R. Walker: Paul, would you want to make one comment? Dr. Paul C. Giannelli: I want to close by pointing to one of my favorite cases. It's Giles v. State. The quote is: "The defendant did not cut himself as badly as he would have done if the knife had been sharp." This is a defense attorney, in closing argument, in an aggravated assault case involving a knife. He was using the knife as demonstrative evidence. He wanted to show the jury that the knife was not a dangerous weapon. He ran it across his hand, and this is his last statement, as he's bleeding in front of the jury. For some reason, the client was a little upset and tried to say this was ineffective assistance of counsel. Judge Vaughn R. Walker: All right. With that, we're going to close our proceeding today and bid you all a good afternoon. ------------------------------ Panel VII. Expert Witnesses: Is Justice Ruined by Expertism? Introduction: Richard M. Rau, Program Manager, Forensic Sciences, National Institute of Justice, Washington, D.C. Moderator: Barry A.J. Fisher, Past President, American Academy of Forensic Sciences; Director, Los Angeles County Sheriff's Department Crime Laboratory, Los Angeles, California. Panelists: Bert Black, Partner, Hughes & Luce, L.L.P., Dallas, Texas; E. Michael McCann, Milwaukee County District Attorney, Milwaukee, Wisconsin. Dr. Richard M. Rau: This is a distinguished panel, and I'm going to let the chair, of course, get up and introduce the members and organize it, and then I want to go out there and talk to David. But this is my last chance to say something to you: I think that this has been an extraordinary conference. And I can't wait to get Jeremy to approve the publication of the transcript, because I think that's going to make a great difference in what's going on in the criminal justice field. So, I'll turn it over to Barry and the rest of the team. Mr. Barry A.J. Fisher: Good afternoon. It is a pleasure to be here today. My name is Barry Fisher. I am the immediate past president of the American Academy of Forensic Sciences and the crime lab director for the Los Angeles County Sheriff's Department.
Today's panel deals with expert witnesses: "Is Justice Ruined by Expertism?" We have two very interesting speakers today. Mike McCann is the District Attorney for Milwaukee County, Milwaukee, Wisconsin, and Bert Black is a lawyer and engineer from Dallas, Texas. I think that we will see more of Bert's lawyer side today, but every so often, the engineer in him may pop out. This has been an interesting meeting. In thinking about my remarks to get this session kicked off, I am reminded of the cartoon character Pogo and his famous statement, "We have met the enemy and he is us!" We may be the enemy. Why is DNA evidence different from any other type of physical evidence? In reality, it should not be viewed any differently from any other type of scientific evidence. The same standards that the courts expect from forensic DNA evidence testing are appropriate to any other type of forensic evidence testing. The national DNA Advisory Board defines all manner of standards, protocols, and training for DNA typing. Why not have a national forensic science commission (made up of forensic scientists, forensic science administrators, and consumers of forensic science information) to define standards of quality, practice, training, and reliability for all types of forensic science evidence? During this conference, speakers have suggested what forensic science laboratories and practitioners should do to prove the reliability and efficacy of scientific evidence. The problem is that these opinions do not carry much weight. On the other hand, a national board or commission could mandate policies and guidelines for forensic science practices. Another problem concerns the level of science and technical knowledge of lawyers and judges. My experience is that only a handful of judges and lawyers have a background in science or know how to effectively deal with forensic evidence and expert witnesses. I suggest that the bench and bar have an obligation to better educate themselves in science and technology as they relate to their professions. There are ways of getting this information. Groups such as the ABA are appropriate for such training. I mentioned earlier that I am the past president of the American Academy of Forensic Sciences. The AAFS has a Jurisprudence Section. I would like to see more practitioners become members because that is an easy way to get scientific information and to make contacts. One final thing--and I know that Bert Black will mention the Kumho Tire case, the issue of reliability, as well as changes to Federal Rule of Evidence 702. The courts are beginning to focus on the issue of reliability of science and technology in some decisions. Criminal defense attorneys will certainly attack nonacademic, knowledge-based experts. Little research outside of forensic science is published that demonstrates the reliability and efficacy of some techniques routinely used in forensic science laboratories. Courts are not likely to accept firearms examination, fingerprint identification, tool marks examination, footwear and tire impression identifications, or handwriting examination as a matter of faith. These nonacademic-based forensic practices have been in existence for decades but have never undergone rigorous scientific scrutiny. Intuition tells us they are reliable, but Kumho and FRE 702 suggest further scrutiny may be needed.
If public forensic science laboratories and their examiners are challenged to prove reliability, most do not have the resources to conduct the research. Furthermore, while some research conducted in forensic labs is appropriate, research outside of that community is desirable as well. There are many universities and national laboratories that could conduct forensic research and development if funding were available. There are several organizations that could fund the work, such as the National Science Foundation, the National Institute of Justice, the Department of Energy, and the National Institutes of Health, provided Congress authorized the funds. Other groups, such as the National Academy of Sciences, the American Association for the Advancement of Science, the American Academy of Forensic Sciences, and the American Society of Crime Laboratory Directors, should be invited in to help define a national forensic science agenda. Forensic science has an important role to play in the justice system. Issues relating to the quality, reliability, and efficacy of forensic science are important and must be considered. Law school professors, practitioners, and judges are in unique positions to raise the issues we have discussed at this conference to decisionmakers at the State and national levels. Thank you. I would now like to introduce Bert Black, who will talk about the trilogy. Mr. Bert Black: My topic is justice and expertism, but before getting into this topic, I want to clarify a great misunderstanding that has pervaded this entire meeting. I think there's an assumption that Daubert was a toxic tort case about Bendectin. Not so. I've done some research into both Daubert and its relationship to Kumho Tire, and this is what I've discovered. Daubert was Jake Daubert, who was a baseball player for the Brooklyn Superbas from about 1912 until 1922. In fact, I even found a picture of him. There he is. There's Jake Daubert. In fact, Jake was a pretty good baseball player, and one year, he won the MVP award. Back then, it was called the Chalmers MVP award, because the winner received a Chalmers automobile, and Jake, one year, won the Chalmers automobile. Now, let's take a look at the relationship between Daubert and Kumho Tire. I found a picture of Jake in his Chalmers car, and lo and behold, he was having tire problems. In fact, the wheel once came flying off. So, I think I've resolved the big question of how we got from Daubert to Kumho Tire. The question we need to ask is, "What made the Kumho tire come off Jake Daubert's car?" Maybe I should sit down now. Anyway, back to expertism versus expertise, or justice and expertism. To me, this is an easy question. If expertism means testimony from an expert who isn't really offering expertise, it's fake. And, of course, fake expertise is not good for justice. I start from the premise: no truth, no justice. We often hear that trials are not about finding the truth, they're about doing justice. How many people have heard that in the past? Most of us have heard that. I don't think that's right. At least it's not exactly right. I think a better statement is that trials are not just about finding the truth. They're about learning the truth so we can do justice. You can't do justice in a factual vacuum. Having said that, there will be times when we need the kind of information that can only be obtained from experts. And the Supreme Court has given us some guidance on how to determine when expert information, when expert testimony, is or is not reliable.
Daubert requires valid expert knowledge, Joiner requires a practical explanation, and Kumho Tire requires that all expert testimony constitute genuine expertise and have empirical support. When an expert testifies in court, what that expert is testifying to has to be genuine knowledge from his or her field. Why that should be surprising, I don't know. Why should a doctor get to testify in court about something he or she would never say outside the courtroom? That comes from Daubert, it was repeated to some extent in Joiner, and certainly in Kumho Tire. What's the difference between point two, explanation, and point three, empirical support? In trying to think of an example, this is the best that I came up with. About this time last year, I was going to the Grand Canyon to go hiking and backpacking, and a few weeks before, I developed some numbness in my left hand and wrist, and there was some concern that maybe I had pinched a nerve in my neck, so I went to see the doctor about this. The doctor pinches me in a couple of places, wiggles things a couple of ways, and asks, "Does this hurt? Does that hurt?" I said yes or no, and he said, "It isn't your neck. You have a nerve that's being pinched in your wrist or your elbow. Do these exercises, it's going to go away, and in any event, you can go to the Grand Canyon." And off I go, and I'm fine, and because I'm here today, you know that I made it back. Well, that doctor had a pretty clear theory, a pretty clear picture of how the human nervous system is put together and how it works. That's why he was able to make the diagnosis correctly. What if, instead, I'd gone to the doctor and he had told me, "You know, I've looked at lots of people with numb left wrists, and I can tell you, based on all this looking at people with numb left wrists, go on to the Grand Canyon, don't worry about anything"? I think I would say, "Wait a minute! How many people have you diagnosed? How many times were you right? How many people are still at the bottom of the Grand Canyon because you got it wrong?" That's the sort of empirical support that I would demand if there wasn't a clear theoretical explanation that the doctor actually had. Now, that kind of theoretical explanation is not without empirical support itself, but I hope that story distinguishes between my points two and three here. What we've heard during this symposium, what you see in some of the post-Daubert case law, even, is the argument that these kinds of requirements don't necessarily have to apply to forensic science, or at least to forensic expert testimony, because it isn't really science, it's based on experience, it's a craft instead of a science. I don't think that the craft approach is going to last much longer, and this is consistent with what Barry said. The trilogy is going to catch up to it. But there are some other reasons that I don't think the craft approach is going to survive too much longer. First of all, there is some recognition of empiricism now in courts apart from Daubert. United States v. Hall is a case involving psychological testimony, and the court recognized that many, many different areas involve empirical propositions "that may be investigated and sometimes refuted through scientific means."
So, the scientific method is not limited to hard science, whatever that might be, to Newtonian science, whatever that might be, to physics, chemistry, or whatever else. The scientific method, the idea that you require empirical support for propositions, goes well beyond what we normally consider the sciences, and it certainly applies to almost all of the topics that would be the subject of forensic expert testimony. United States v. Ironcloud provides another example. It involved some sort of a sobriety testing device that was used on Ironcloud, who apparently had run over somebody after consuming a number of beers. But what was his blood alcohol level? The mere fact that a test has been used for a long time does not make it reliable. You can't just go into court and say, "This is what we've done forever." Based on that logic, we'd still be bleeding people to cure diseases, and in any event, the Eighth Circuit has recognized that longevity doesn't make something reliable. So, there are other reasons why the craft approach isn't going to work. And now, I come to the trilogy. I think that, under the trilogy, we have to look more closely at the empirical underpinnings, the explanatory connections, all of those factors that we just talked about, and more. I emphasized what I think are the highlights. There's also going to be a carryover from the civil cases. At least in Federal court and in a number of States that have now adopted Daubert, judges are learning to look more closely at expert testimony. Even if it's only on the civil side, there are a number of examples where courts just don't automatically accept testimony based on an expert's qualifications; there has to be a good explanation, there has to be empirical support, and so forth. That habit, that way of viewing expert testimony, will necessarily spill over to the criminal side when those judges handle criminal cases. And then, finally--and we've already heard about this--reports of mistaken convictions are going to cast doubt on forensic evidence. I think it was Paul Giannelli who was saying that if we convict somebody based on hair comparison evidence, and then prove pretty conclusively with DNA evidence, the gold standard, that this person was wrongly convicted, maybe we shouldn't be relying on hair comparison so much. Maybe we need some empirical support for evidence like that before we use it. So, the trilogy is ultimately going to require that we develop new methods, a new way of thinking about forensic science or forensic expertise, and ways of validating it. What I would like to suggest to you here is a breakdown of forensic evidence into five categories, for each of which we'll probably have to develop some different mode of validation. I'm not going to go through what the mode of validation ought to be for each, but I think, in terms of doing further work, these five categories will help organize our thinking. First is matching evidence: hair comparison evidence, fingerprints, DNA. Those are all matching techniques. There's something at the crime scene. There's something associated with the suspect, and if the suspect's thing, whatever it is, matches what was found at the crime scene, then we have at least some connection between the two. Either piece of evidence on its own is meaningless. So, the way you do matching becomes important.
Now, there are also questions, of course, about the validity of the technique, to make sure you're measuring something that can be matched in the first place, but the question here is, "How do you do a match and make sure you really have something?" Then there's explaining evidence, and I put psychology into this category. I'm also lumping psychology together with blood spatter evidence. With blood spatter evidence you say, well, if I see something like that, knowing what the normal blood pressure in a person is, then the victim must have been struck with a blunt instrument in such-and-such a way, and that's why you have blood spatter in this fashion at the crime scene. Crime family modus operandi--you're explaining what something means. You know, when the mafia don said "hit him," what does that really mean? You're explaining something. When you get the report that "hit him" means kill him, and the person tells you that, how do you know you can rely on it? Why isn't that person just making it up? How many times have you been around somebody who said "hit him" and somebody died afterwards? What's your basis for saying that, other than watching a lot of bad movies? Then there is causation evidence, which is probably a subset of explanation evidence, but I think it's a special subset. Cause of death or accident reconstruction would fall into this category. Next is what I call simple factual evidence--what is this person's blood alcohol level? It's a fact. You don't have to compare that with anything. You don't have to explain it. Once you've got a blood alcohol level, in most States you have a statute: if you're above a certain level, you're driving under the influence. In the same category as factual evidence perhaps would be drug analysis, chemical tests to determine whether something is marijuana or heroin or what have you. And then my fifth category is veracity or recall evidence. You still see a lot of cases showing up about the polygraph. In fact, you see more cases showing up about the polygraph since Daubert than before. There have been at least two or three cases, e.g., United States v. Posado in the Fifth Circuit, where courts have said, you know, after Daubert, this per se rule of exclusion is probably not right anymore; we have to hold a hearing, at least, and see if the evidence is reliable. Most of those courts wind up still excluding the evidence, and of course, as a constitutional matter, the Supreme Court has told us in United States v. Scheffer that the military per se rule against admissibility is at least constitutional. I'm not going to go back through each category and speculate about how you might do validation of each one, but I do want to give you some idea about what I mean with regard to matching evidence and why it's important that we hold people to empirical standards. I'd like to talk about the case of United States v. Stifel. How many people here have heard of United States v. Stifel? I know Paul has. We have a few. Well, this is one of the most important forensic cases ever in terms of some lessons that it teaches. Let me tell you the story of Orville Stifel. He had an altercation with an ex-girlfriend. I think they were both in college. She was going to school at Ohio State, and he goes down to visit her, and it's not a pretty scene. Orville doesn't behave real well, and he may say some nasty things. In particular, I think that he threatened her boyfriend or fiance at the time.
He said, you know, "If you go on seeing this guy, I'm going to kill him," and soon thereafter, the fiance is, indeed, killed by a package bomb. Orville is a suspect, and he is nailed with the forensic evidence. The tape and the packing material on that package were identical chemically to packing material found in the storeroom where Orville worked. We got Orville, right? Well, he sure as the devil is convicted. But take a look at the rest of the Orville Stifel story. The fiance's parents, the parents of the young man who was killed, had recently split up. Whether they had been divorced yet or not, I don't know, but it was real unpleasant. The fiance's father was in the Merchant Marine, and this was back during the Vietnam era, and he was shuttling explosives--I guess not to and from, but to Vietnam--and he had access to the kind of military explosive that was in that package. No one ever linked Orville Stifel to that particular explosive. More. The address on the package was destroyed in the explosion, so we weren't even sure that it was addressed to the fiance. In fact, the postman sort of remembered that maybe it was addressed to the fiance's brother, who had sided with the mother. And that forensic evidence nailed it for us? Those packing materials were so common that if you did a similar test on packing material found in 85 percent of the offices in this country, you would have gotten the same test results. It was diagnostic of nothing. Find blood at the crime scene; it's red. Cut somebody in front of a jury with your knife; it's red blood. Boy, must be the same person, right? Well, of course not, because we all have red blood. Well, if 85 percent of packing material is the same as what convicted Orville Stifel, he shouldn't have been convicted based on that evidence. Now, after 13 years in prison, he was eventually released, not because of any appeal based on that forensic evidence but because the government had withheld information about the father, the guy in the Merchant Marine. He had been a suspect for a while, and they never told Orville's lawyers about that, and they never told Orville's lawyers about some of the interviews and some of the evidence against this alternative suspect. So, after 13 years, Orville was released, and eventually he went to law school. He became a defense lawyer. You would not expect him to be on the prosecution side, would you? Anyway, we don't want any more cases like Orville Stifel. Those of us practicing in the legal profession, or in criminal justice generally, should all be embarrassed by that case. And so, I suggest that the trilogy should be a catalyst for reform in this area, and here are some reforms I would suggest, real quickly. We need a research agenda on how to validate these various forensic methods, and once we've developed the research agenda, we need a program for doing the research to validate and refine the methods that we use. Some of them we'll keep, some of them we'll improve, some of them we'll throw away. We need research on new methods and techniques beyond what we have already. All this is going to require increased funding. We've heard that several times. We need increased extramural research and external review of the methodology to establish reliability. Set up academic programs for doing forensic engineering or forensic science. Establish journals to publish the research. Do all the things that are required in a scientific community.
And finally, we need to learn how to formulate forensic questions in terms of hypotheses and to test those hypotheses--to learn to do this as science--because that's what's really required if we don't want any more Orville Stifels. At the end of the day, I would hope that the result of such reform would be that we solve more crime, convict more criminals, and, most important of all, are more certain when we have convicted somebody that we've got the right person. Thank you. Mr. E. Michael McCann: Good afternoon. My work has been predominantly in criminal prosecution. Obviously, justice ought to be the object of what's happening in our courts and in our entire criminal justice system. Any conviction that is secured in violation of an ethical code, or on the basis of incompetent or junk science, or because evidence was falsified, is not justice at all but injustice. Anyone who tolerates that, whether a prosecutor, defense attorney, police officer, or laboratory agent, is endangering his or her own liberty. It seems to me that if we tolerate that type of conduct, no man or woman is safe from an unfounded prosecution, and those of us in the criminal justice system should know that better than anyone. Bert has adverted to it in the Stifel case. There are many problems that flow from violations of the Brady v. Maryland requirement that evidence that tends to exculpate must be provided by the prosecutor to the defense. Every prosecutor ought to have an unequivocal policy consistent with Brady and ought vigorously to ensure that all prosecutors on staff follow that policy. Destruction of evidence problems also occur in some cases. I chuckled in reading over Professor Giannelli's recounting of the Colorado case of People v. Morgan. In that case, police recovered a digit of the offender's finger. The police kept the digit in an inadequately refrigerated facility because, understandably, if they placed the digit in the refrigerator in the district station, it might pollute the officers' lunches. The finger decayed, and the case was thrown out because the police were held responsible for the loss of the evidence. Obviously, prosecutors can't fail to follow up on evidence. A beautiful quote emerged from a case arising in California involving a military Preparedness Day Parade in 1916. The best known of many appeals related to the case is styled Mooney v. Holohan. The trial prosecutor in that case, one Brennan, described in almost poetic terms how a prosecutor caught up in the fevered chase of his quarry can overlook signals of potential innocence and thereby fail to follow avenues that might lead to exculpatory evidence. Such a danger always confronts police, prosecutors, and overzealous forensic analysts. Partial understanding of evidence recovery limitations can also cause problems. Because of the public's partial knowledge about fingerprints, in stolen car cases where fingerprints are not discovered, or where the police have failed to search for them in the recovered vehicle, the defense attorney often argues, "Where are the fingerprints? The defendant must be innocent." Dr. Caskey in his opening remarks suggested that we should be reviewing cases where convictions rested on thin circumstantial evidence and suitable evidence remains to run DNA tests, to determine if the results support or tend to impeach the conviction. Barry Scheck is doing that now in New York, and some of his highly publicized cases have resulted in releases of persons whom DNA evidence showed to be innocent.
I recently was with him at a meeting when he spoke of a requirement that forensic laboratories in New York be certified. There should be a committee in every State working on improving crime laboratories. Prosecutors, defense attorneys, forensic analysts, and scientists should be in the vanguard supporting such efforts. Assistant District Attorney Norman Gahn in our office worked closely with our State crime laboratory as DNA testing came online there. We have now secured a number of convictions in rape cases solved by running unknown offender DNA samples against the State crime laboratory DNA databank. We have so far not encountered an instance of an already convicted person being found innocent upon DNA testing. Is it the responsibility of the district attorney to comb back through old cases involving circumstantial evidence convictions to see if DNA tests could be or should be run? I have not seen that expressly conceptualized in terms of the ethical responsibilities of a district attorney. I will speak to that shortly. Defense attorneys who have handled such cases certainly ought to call the attention of the district attorney and the court to the need for DNA testing of appropriate evidence. A district attorney certainly ought not to object to such testing and ought to support it as part of the pursuit of truth, recognizing that, inasmuch as the criminal justice system is a human enterprise, improvident convictions inevitably occur on occasion. We have an open file policy in our office. I think that's the only appropriate policy when we're looking for justice. However, we've encountered two types of objections recently. We now have, as do many States, a victim and witness rights statute. We have begun to receive objections from victims and witnesses because, as we open our files, witnesses and victims are more frequently contacted in advance of trial by defense attorneys and investigators. Some victims and witnesses object to this. Victims may be particularly upset if contacted directly by the defendant himself or herself. Gang case prosecutions raise particular problems. It is not unusual for a defense attorney receiving our file to copy it and provide the copy to the defendant to study for defense purposes. Unfortunately, gang members are undertaking to circulate such copies among themselves as part of an effort to determine who in the gang or its affiliates may be providing information to police. Obviously, the gang's intent is to effect violent retribution. On occasion, the law requires that an informant be brought forward or the case dismissed. Naturally, a promise of anonymity must be protected as allowed by law, or the case will be dismissed. Because of the practice of gangs circulating copies of our case files in gang-related cases, we are growing increasingly concerned about the wisdom of an open files policy, particularly in gang homicide cases. One must be alert to the possibility of errors even in dealing with very competent scientific laboratories. In the Milwaukee County case of State v. Mendoza, the defendant was charged with slaying two off-duty police officers. The evidence showed the defendant had discharged a firearm and was then arrested by the two officers, who took the gun from the defendant. A struggle then ensued, and the defendant succeeded in getting one of the officers' firearms and shooting both officers to death.
The defense indicated that at one point one of the officers was striking the defendant in the head, opening the door to a later argument that the defendant acted to prevent injury to himself. Evidence was submitted by the police to the FBI laboratory and, at the request of the defense, to the Wisconsin State Crime Laboratory as well. The FBI laboratory reported that, with respect to one of the officers, the killing shot entered the officer's front chest and exited his back, while the State crime lab reported that the bullet instead entered the back and exited the front. This was a case involving the death of two police officers, and one would anticipate that it would garner close and assiduous handling by every laboratory. I called the FBI expert, pointed out the conflict between the lab reports, and requested that he review his file. An hour later, the much chagrined FBI technician advised me that he had erred in recording his findings, despite procedural protocols designed to prevent such errors, and that his findings in fact were consistent with the State crime lab report. I'm sure many defense attorneys have had cases where the defendant says, "I didn't shoot him in the back; I shot him in the chest as he was attacking me." Where a killing bullet entered the body is often completely dispositive of any issue of self-defense. If the entry wound is in the back, self-defense is not going to fly. If the entry wound is in the front, at least there are some grounds to argue self-defense. This wasn't a case of bad science. It wasn't a case of junk science. It was a case of human error. Inadvertent false testimony can occur. I recall a case I presented in Milwaukee where the defendant was involved in multiple slayings, terrible bludgeon beatings with a distinct M.O. A number of the slayings were solved with circumstantial evidence. In trial testimony, a competent, adequately trained officer testified about a palm print matching the defendant's, which the officer had recovered from one of the walls in one of the slaying scenes. The officer, in testifying, said that the palm print was "fresh." This testimony was of double importance, supporting the prosecution theory of guilt and indirectly addressing the potential defense argument that the defendant had been in the house at an earlier date for some reason other than slaying the deceased. The defense attorney, a very capable lawyer, did not attack this testimony. Overnight, I thought about that testimony. I had never heard testimony of "fresh" fingerprints. I thought, "Has there been a new development?" I called the technician at his home and asked, "You testified that the palm prints were 'fresh.' Is there new technology that can date palm prints?" He responded, "No, I was in error; it was a mistake." I put the technician on the stand the next morning to recant his own testimony that the palm print was "fresh." I firmly believe that the error was inadvertent, in that the technician's keen desire to support the prosecution and anticipate the defense caused him subconsciously to put the word "fresh" before the words "palm print." Professor Giannelli's submission details incompetence and ill will in a number of cases where crime laboratories--publicly supported crime laboratories--submitted reports that wouldn't meet anybody's standards. Giannelli cites the Black Panther case out of Chicago in 1970, in which the Chicago police crime lab properly took a hard hit.
The State's attorney, a man who had heretofore enjoyed a good reputation, suffered much as a result of a very dubious crime lab report before he was exonerated. All of us--prosecutors, defense attorneys, scientists, and forensic technicians--must be intolerant of such laboratories and must aggressively challenge and push to reform such operations. Our integrity requires nothing less than that.

Inadequately skilled publicly employed technicians pose a particular problem. Civil service regulations or union contracts can shield incompetence in the laboratory. A technician may be honest and well intended but below desired standards in terms of proficiency. Both the liberty of an innocent accused and the safety of the community against an assaultive offender may be sacrificed by the employment of an earnest but incompetent technician. Removing such a technician, who may have been many years on the job and may be a decent person, can pose almost insurmountable difficulties. However, a reassignment to other work must be sought for the integrity of the justice process.

Galileo's Revenge was written by a lawyer attacking junk science. The lawyer drew his examples entirely from civil cases to pillory courts he believed were permitting the admission of pseudo-science. It seems to me that if you are a civil litigant, what's the pressure against putting in junk evidence? It's this, I suppose. If you are the plaintiff and win, the case could be reversed. If you are the defendant and win, again, the case could be reversed. In the criminal justice system, however, the paradigm is somewhat different. The prosecutor has the same problem as the civil litigant. If he or she puts on junk evidence, and the court admits it, the case could be reversed. However, the defense attorney in the criminal prosecution is uniquely situated. If he or she chooses to put in junk evidence and wins the case, there can be no reversal because of the constitutional guarantee against double jeopardy. Obviously, then, the temptation may exist for a criminal defense attorney to say, "What do I have to lose? Why not put in the junk evidence?" Of course, be you a civil or criminal litigant, there is always the potential that a vigorous cross-examination challenging junk evidence can destroy the credibility of the proponent in front of the jury and thus jeopardize the entire case. However, in some criminal cases, the only hope for the defense may be the use of junk science. Thus, it is clearly incumbent upon the district attorney to anticipate any attempt to introduce junk science and to move aggressively, by well-prepared motions in limine, to invoke the judge's role as evidence gatekeeper under Daubert and Kumho Tire to keep out such evidence.

I cite two cases. Jeffrey Dahmer was prosecuted in Milwaukee for the slaying of 15 young men. He was a serial slayer, a necrophiliac who did a number of very odd things to his victims as well. The only defense was insanity. The trial took 3 weeks. The defense advised me that one Bill Resslear might be called as a defense witness. The FBI has a special unit at Quantico [Virginia] that studies serial killers. The agents on that unit profile these killers and oftentimes are helpful to police in identifying characteristics of an unknown serial slayer. Resslear had worked in that unit and was a respected, honest, and capable agent who had recently retired from the FBI. Dahmer was raising an insanity defense.
Resslear's profiling unit had developed various paradigms that broke murderers out into "organized" and "disorganized" serial killers. The defense gave notice that Resslear was going to testify, apparently so that the defense could thereafter suggest that, because Dahmer was a "disorganized" killer, he should be found insane. But profiling was developed to apprehend people, not to ascertain at trial whether they should be held criminally responsible. We received 1 day's notice that Resslear was to testify. At our request, a professionally respected criminologist on the University of Wisconsin-Milwaukee faculty worked all night and testified the next morning, on our motion in limine to preclude Resslear's testimony, that there were absolutely no existing scientific studies or documented support for concluding that Dahmer must be insane because, as Resslear was going to testify, he was a "disorganized" serial killer. The judge ruled in limine that Resslear's testimony would not be permitted.

In another case, the defense attempted to raise an insanity defense based on "urban psychosis." We wanted to knock that out at the earliest stage possible. The defendant was a young teenager from a tragically violent background who had become involved in the killing of another child for a coat that child was wearing. Urban psychosis is a creative new concept unknown to the authors of the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders, to many the Bible for categorizing mental diseases and disorders. The "urban psychosis" defense did not survive the preliminary hearing. While the post-traumatic stress disorder defense replaced it, at least the prosecution could come to grips with this known psychiatric phenomenon in a rational, knowledgeable manner.

There are good reasons, however, why a prosecutor might not attempt to strike evidence even though an appropriate Daubert or Kumho Tire motion might be sustained. Assume we have a defense attorney who puts on some scientific evidence. The evidence is good, but during cross-examination of the expert it becomes clear that the defense attorney, either from lack of knowledge or absence of skills, cannot adequately adduce testimony from the expert to meet the Daubert or Kumho Tire standard. Knocking out such evidence in limine might become a Pyrrhic victory. On appeal, new counsel will argue the incompetence of trial counsel, and a second trial may well be ordered.

There are other ways, of course, to secure the exclusion of evidence which may be relevant and well founded. Under Federal Rule of Evidence 403 and State evidentiary rule analogs, good evidence can be excluded on grounds of possible prejudice, confusion, or waste of time. On such a claim, a prosecutor might try to knock out very complex but scientifically sound evidence, claiming it will create confusion in the minds of jurors or may consume inordinate amounts of trial court time. Again, knocking out such evidence might prove a Pyrrhic victory when the case is overturned on appeal.

Another technique to exclude evidence in limine is to pounce upon a forensic expert who has inadvertently violated a sequestration order issued under Federal Rule of Evidence 615 or a State analog. This rule precludes witnesses who have not yet testified from hearing or discussing testimony that has already been presented in court during the trial. Again, the prosecuting attorney who induces a judge to knock out sound evidence on the premise of a violation of this rule risks a Pyrrhic victory.
Again, on appeal, new counsel can argue that trial counsel was incompetent in failing to adequately alert all witnesses to the sequestration rule, and thereby secure a retrial.

Generally, in this conference, we have been discussing progress in the forensic sciences. We've had some reversals in the field, however. An example lies with modifications of the insanity defense. Under the old Federal practice, the insanity defense was a common law creation. Whether you favored the Federal rule followed in Durham v. United States, 214 F.2d 862 (D.C. Cir. 1954), or the permutation of the American Law Institute rule favored in United States v. Brawner, 471 F.2d 969 (D.C. Cir. 1972), one must recognize that such rules were certainly more scientifically founded than the throwback to the M'Naghten rule adopted under the Federal Insanity Defense Reform Act of 1984, enacted after John Hinckley, Jr.'s attempt to kill President Ronald Reagan.

A second example, this one of ballistics science in reverse, is on the scene. Many police departments are now using Glock handguns. You can't trace a bullet from a Glock. I'm waiting with bated breath for the first case in which a police officer armed with a Glock has a shootout with an offender armed with a Glock, an innocent bystander is slain, and it can't be determined which Glock fired the fatal bullet. Let's try to keep going forward scientifically. Thank you.

Mr. Barry A.J. Fisher: Before we start our break, I'd like to open up this panel to any questions or alternate speeches from the floor.

Participant: [Inaudible.]

E. Michael McCann: Barry Scheck has taken the initiative in using DNA to prove the innocence of convicted persons in New York. A number of years ago, in a Wisconsin case antedating advances in DNA science, a very brutal rape of a young woman was involved. The offender stabbed her numerous times and left her for dead. She feigned death to survive. Police finally apprehended an individual identified by the victim, an identification which was supported by various strands of circumstantial evidence. The accused was convicted. Incident to the sexual offense, semen had been left in the victim's undergarments. At the time, not much could be done with that. As science moved forward, it was discovered that certain individuals are secretors and that one can determine the blood type of such an offender from his semen. At the request of the defense attorney, made approximately 8 years after the conviction, the semen preserved in the young woman's undergarments was tested by two separate laboratories. It was discovered that the individual who had deposited the semen was a secretor and that his blood type was different from that of the individual who had been convicted. The convicted man was freed after 8 years in prison.

Our office prosecutes some 6,000-plus felony cases a year, plus many thousands of misdemeanors. Some cases involve direct evidence and some involve circumstantial evidence. In some cases there may still exist blood or semen samples that could be tested against the DNA of the persons convicted of those offenses. Clearly, our office would cooperate to conduct such tests at the request of any defense attorney or, indeed, any defendant if it appeared there were any grounds for conducting such tests. There is some turnover, of course, of assistant district attorneys on our staff. It would be very difficult to go back and identify on our own initiative, without a claim from a convicted prisoner, cases appropriate for DNA testing.
I applaud the work of Barry Scheck and hope that district attorneys and defense attorneys around the country will be responsive to claims by prisoners that DNA testing would exculpate them. Clearly, DNA science is sound science which can aid in convicting the guilty and freeing the innocent.

Mr. Barry A.J. Fisher: Any other questions?

Participant: Hi. This is mostly for Bert, I think. You talked about the trilogy requiring that there be more extensive empirical validation of science of all types, and particularly forensic science. The problem that I have with understanding what that means is that, quite often, particularly in forensic science, I'm struck by the extent to which each case is a unique set of circumstances and that the expert's testimony often may depend on judgments applied to a case-specific and unique set of facts that wouldn't necessarily arise elsewhere. So how does one go about validating judgments of that sort when they're nonrepeated, unique kinds of judgments? And just how far does this requirement of validation go? Does it extend to each and every subjective judgment that an expert makes that is somehow opinion-determinative, or are there certain classes of judgments that have to be validated and others not? I find it very amorphous at this point. Can you help me?

Mr. Bert Black: Can I help? First of all, the extreme of taking every assumption that's made, every subjective judgment, and validating it wouldn't work. It wouldn't work in science. Even in measuring whether or not you have observed an electron--and I mean "observed" in the scientific sense, because you certainly don't see it with your eye--you do that with instruments that let you detect whether there's some change in voltage somewhere. What you see on the dial, your judgment as to whether or not it's gone to a certain point, is what determines whether an electron or some other kind of atomic particle has passed through your instrument. So there's a subjective judgment of what you've seen there, and we just have to rely on people telling the truth about that at some point. You can't pursue everything out to its absolute nth degree in terms of subjective judgment.

As to situations in which a number of different methods are put together to come up with a result that is unique to the individual case, each of those methods or methodologies should be validated in the sense that I've been talking about, and there has to be a reasoned explanation as to how they're all put together. I think that explanation requirement comes from Joiner. Joiner is the case in between the other two, and it doesn't get cited for this proposition very often, but in some ways it's the most important case, because it talks about the need to explain things. The expert who is going to do something at least in some ways unique to the specific case had better be able to explain it and, specific to the individual case, had better come up with some justification so that we know it's worthy of being relied upon. That's not a complete answer, but it's probably the best I could do.

Mr. Barry A.J. Fisher: We'll take one more question, if there are any. Okay. Time for a coffee break. Thank you very much.

------------------------------

Summary Discussion

Moderator:
David G. Boyd
Director
Office of Science and Technology
National Institute of Justice
Washington, D.C.

Mr. David G. Boyd: What I'd like to do first is cover a couple of administrative things.
One is that those of you who would like copies of the post-conviction protocols should be so kind as to drop off a business card or a name and address--or you can even write us later if you like--at the table across here, up front, as you leave. If you'll drop it there, they'll make sure we get the information we need to send a copy of the protocols to you.

The second thing, which we should have emphasized better during the last few days--and I'll show you how we're going to try to fix it--is that, when you speak (because we're going to give all of you an opportunity to do some serious responding after I make a hash out of the summary up here), you should step to the microphones to make your comments and state who you are, so we can identify you. Now, how are we going to fix that for those who haven't? Well, when we get them done, a draft transcript will be sent to each of you. We would very much appreciate it if you would go through that transcript, find those comments you made that aren't attributed, and tell us that you're the one who made those comments. That's assuming that you're willing to accept the responsibility for those comments! And then we'll make sure that you're given credit. You will also get a copy of the final draft of the transcript once it's finished, and we're also going to try to put it up on the Internet.

*****

Now, let me try to go through--I don't want to call this a summary or review, but you might. I used to have a boss who made comments about thoughts while shaving. Well, I wasn't shaving while I made these notes, but it's sort of the same kind of thing. What I'd like to do is go through some of these notes and make some rather bald statements about what I think has happened here at this conference, or about the kinds of observations that have been made. Most of them aren't going to be properly qualified, there aren't going to be sufficient caveats, and I know, in the presence of prosecutors and judges, that that may really leave me open, but I'm going to do it anyway. Who knows? In the process, I may actually aggravate some people and get some serious discussion going here.

Let me start by suggesting that one of the things that I think has been a strength throughout the conference is that there are two communities here. There is what we might call the science community--and I'll broaden this a bit later on--and then the legal community. Both of them are interested in the pursuit of truth or of justice. I'm going to use the term "truth" because it applies well in both categories. And there are some tensions within each of those communities. One of the speakers suggested that, among those who are interested in the pursuit of truth and justice on the legal side, one of the issues is that attorneys have as their principal goal not so much the pursuit of justice as winning the case. And in fact, the concept behind our legal system is that, if we have good advocates on both sides fighting hard to win, the truth will emerge. On the other side, we have scientists who aren't persuaded that that's necessarily the case. And so, the scientific witnesses have a very different perspective.
But they also have some internal tensions, among which are that they don't want to be terribly embarrassed in these cases when they're full-blown scientists, or it may be that their careers are attached to how they testify or how satisfied their clients are with the nature of their testimony--and this is not testimony where the client is necessarily interested in what's the right answer so much as in what's the answer that will get the right conclusion.

The second point is that both of these--I'll refer to them as disciplines, but very broadly, the science discipline and the legal discipline--have very different approaches to arriving at what is the truth. The one, the scientific arena, uses the notion of consensus; that is, we'll debate things, we'll argue things back and forth, and we will arrive at some point at a consensus about what the right answer ought to be. The other uses the adversary process that we've talked about so much here. Now, let me suggest, just as my observation, that that may not be a very good characterization; in fact, we have a slightly different kind of phenomenon to weigh. What we have on the science side are scientists, engineers, technicians, and researchers who are busy studying, debating, and arguing over what the right answer ought to be and arriving at a consensus, and on the other side, we have attorneys fighting to frame an argument, to frame an issue, so that somebody who is not directly involved in the debate can reach a conclusion. But that means that there's a very different way in which these things are approached. On the science side, in a very general sort of way, all are equipped with the same tools to understand what the arguments and the debates are. On the other side, there may not be any tools present on the part of the jury; they may not understand any of these issues, and they therefore are depending on these two opponents to paint a picture so that they, as the triers of fact, can figure out what they think the truth is. And that is a difference that's even greater in some respects than the notion of how you go about arriving at the truth. The fact is that, on one side, all of the people arriving at truth are also participants, while on the other side, they're actually external to the debate itself. They're observers.

The next issue is that neither side adequately understands the other: scientists don't fully understand what it is the legal system is about or how the legal process works, and the lawyers and the judges on the other side have some real problems with the mystical environment of the physical sciences. My boss, for example--he's an attorney, and a very bright guy, but he doesn't always understand the science--quite frequently will refer to what we do as a form of magic, because he doesn't fully understand how the thing works. But it's also obvious, as I think this conference has demonstrated to this point, that both sides want to figure out how to talk to the other side, and how to understand what it is that the other half of this important equation does.

Now, as an aside, let me also observe (and I'm not going to dwell on this a lot) that our focus throughout this conference has tended to be on science in litigation, or in that part of the law that involves litigation.
In fact, we ought to be interested in, and we ought to think more broadly about, science and the law, beginning all the way up at the point that the law is actually drafted in the first place, all the way through its use in the courts.

Now, science is very, very powerful--sometimes too powerful, as this conference has observed, or at least people assume it's more powerful than it really is--and part of the difficulty is in communicating which of the two characterizations applies. The attorneys tend to be really frustrated with the scientists, because the scientists won't give good, firm, unequivocal answers; the attorneys don't want to hear all the caveats, the qualifications, and the equivocations that are important to the sciences, because in some cases they undercut the power of the evidence. The scientists, on the other hand, are frustrated that the lawyers want, from their perspective, a black-and-white, unequivocal kind of response to what is, in fact, an equivocal scientific base. Part of that comes about because scientists don't speak English--but neither do lawyers. And in fact, one of the interesting things that happens in court, going back to my earlier point, is that we have the scientists speaking in one language that the lawyers who are using them as witnesses don't fully understand, being asked questions from a frame of reference, and sometimes in legal terminology, that the scientist doesn't understand, in order to clarify issues for a jury that understands neither language.

That means that one of the things we came up with here is a real requirement that we address a very broad range of issues and a very broad set of impacts, ranging from the training and education not just of the attorneys and the judges; we also wind up raising questions about the basic scientific training of the people who may wind up serving on the jury.

But it's also important that the scientists--let me step back a bit here. Lawyers are those people who avoided taking the physical sciences, who became lawyers because they didn't want to get involved in the hard sciences. You need to know that scientists are people who got involved in the sciences because they didn't want to write all those papers. Both of these are inadequate stereotypes, but I think you get my point. So, you have here one set of people who like equations and images and one set who like densely written drafts, and I can sympathize with that dichotomy, because we in my shop frequently spend a lot of time writing memos to explain what we think could be much better explained as a series of slides or drafts, but we are writing for attorneys. (Most of the budget folks in Washington are attorneys.)

Now, what that means is that one of the things that became very clear here is that we have to think about how we're going to go about providing this education and this training, and that raises the question of qualifications--qualifications on all sides, qualifications on the part of those who are doing the testifying and qualifications of those who are asking the questions or eliciting the testimony. Now, as an aside, let me make a point: one of the questions raised here was the notion that forensic scientists aren't scientists--that the people who come out of the crime labs, for the most part, aren't scientists; they're technicians.
Let me suggest that that means we need to think a bit about where we need a scientist and where we need a technician. I would suggest to you that, if you want a good scientific test performed reliably and consistently, you want a technician, not a scientist. You don't want the engineer who designed your car to fix it. You want the mechanic with dirty fingernails. So, we need to think of the scientist as the one who provides us the information and the background to help in determining the admissibility of evidence, and the technician as the one who does the casework to determine whether the evidence is good or not.

Finally, resources for the research foundation were raised as something that we need--more of the research background, more of the research foundation. Let me tell you just a little story about that. There has been a dramatic increase in the amount of funding that's available today to do that kind of research, but let me put that in perspective. We have grown, over the last 5 years, so much that we have more money being applied to forensic science research than was in the entire Science and Technology program 5 years ago. Five years ago, we were extremely tiny. Our growth has been explosive, so that today we are a little less extremely tiny, but we're still very tiny. In fact, if you've ever looked at any of the graphs of R&D applied to the various parts of government, you find that the funds applied to justice in the criminal science arena are so small that, when I drew a pie chart that I was going to show to Congress while I was testifying, I had to darken the line--not the pie slice, but the line--which represented us, just so they would know there was a line there, because there was no way I could draw a line thin enough to represent our share of R&D and still have it visible. There also is an inevitable tension among those resources, because we not only need to find the resources--and this is the bureaucrat whining now--to fund the R&D that's necessary to build the foundations you talked about, we also need the money to provide the seed capital to improve the crime laboratories so they can make use of that science once it's developed.

Now, there wasn't a whole lot of consensus that came out of this conference, but let me suggest some general areas where there was some. The first was that the question of science in the law is unavoidable. I think we've all acknowledged that science is now here to stay, that we're going to have to live with it in the courts, and that judges are now trapped into having to make hard decisions about science. That scientific and technical questions are going to be increasingly present in our courts is inevitable. And, finally, I think we all agreed that change is a pretty wrenching phenomenon, but in this case, it's unavoidable; it's going to happen whatever we do.

So, the questions I'd like to open to the floor now are these. If we were to do this conference again a year from now, what are the kinds of issues that we ought to make the principal focus of that conference? What are the things that we have not explored far enough or the issues that we have not raised that we ought to? Let me suggest a couple of my own issues to kick it off. You can feel free to jump all over these if you like. One is, how do we protect the independence of the scientific witnesses?
And I don't mean that just from the point of view of the defense attorney--we've talked almost exclusively here about criminal cases, which is our interest--but also the independence of the crime laboratory. I would suggest to you that that issue involves more than just whether they get paid for the work. It also involves, for example, whether the laboratory is part of the investigative agency and the degree to which the careers of the folks in the laboratory may be affected by how well they serve the investigative agency or the prosecution. That's an issue that also ought to be raised, and we ought to talk about how best to handle it. In fact, at the State level around the United States, all of the models are present. There are some States where crime labs are independent of investigative agencies. In those cases, they tend to provide that kind of objective science for the agencies, but they sometimes wind up being opposed in the budget process by the law enforcement agencies, who think the labs are taking funding away from them. In other cases, the labs are part of the law enforcement agency, and then some would suggest that they're at the bottom of the funding pile, so that their funding arguments never get there in the first place, much less the casework once it comes in.

The next question that we touched on here is the issue of the disclosure of scientific testimony and of the sciences used, and how we ought to do that. Now, those are two of the issues--only two of the issues--that I think were raised here. What I'd like to do now is see if anybody here has some things they would like to raise that we haven't really addressed, that we ought to make part of the focus of a conference next year.

Mr. David G. Boyd: Please.

Judge Sam C. Pointer, Jr.: Sam Pointer from Birmingham. I apologize, since I've already had some air time, that I'm taking a little bit more, but I have two issues that seem to me should be, or might be, on an agenda for another conference. One that it seems to me we did not touch on is the use of expert testimony not to say this person was involved or this person was not involved, but scientific testimony that will help a jury better understand the evidence that's been provided. We have this provision in the rules allowing testimony by scientists in the form of opinion or otherwise, and we have this notion that we can have some things coming in through such people that are not "he did it or he didn't do it," but that will simply assist the jury in making better decisions. One of the reasons we've sort of shied away from it is the sense that so many people had about eyewitness identification testimony, but no matter how you stand on that issue, it seems to me it's another area in which we have some major opportunities we're not taking advantage of.

The second--I'm not sure if I'm right about this--but we know that within the legal community, we have fairly clear divisions between judges, plaintiffs' lawyers, prosecution lawyers, and defense lawyers. We're able to sort of see potential divisions and perhaps differences in attitude. I think, though I may be wrong, that there are some similar divisions and separations within the scientific community that have not really come to bear here, and I would say at least one of those divisions is forensic versus nonforensic.
I don't know, but my hunch is this. If you start talking with people involved in the AAAS, and with people involved in the National Academy of Sciences, you find those who are involved in litigation--much of whose lives are spent in litigation, in the crucible of being examined by lawyers and having to fend in that arena. Then you turn to those who are involved in pure (if we can call it that) academic science and research, who believe that litigation is rather abhorrent: it involves ad hominem, irrelevant attacks on one's personality, and it involves inappropriate examination by nonexperts, namely lawyers, who have an interest in the outcome. They are somewhat horrified by the notion of being put to that crucible and prefer some form of exploration through their own peers. I think there are some other classifications as well. You might have, for example, in the medical field, the treating physician, who is probably not directly on one side or the other. You probably have some other areas in which you can find divisions within the scientific community.

Margaret Berger spoke briefly yesterday about the difficulties of finding those who are in the pure, the academic, the untainted, the unspoiled and unsoiled scientific community to get involved in litigation. And I think the whole system suffers from the unwillingness of so many of those people to get involved. We don't get the benefit of their testimony, and they do not act as potential brakes and checks on what may be excesses. It seems to me it's an area that we need to look at, including whether there are some rules and practices that really retard and prevent this academic world from being willing to get involved in litigation. Now, maybe we can do some things such as giving greater protection to their mental-process work, so they're not totally exposed, or allowing devices for examination of several experts at the same time, which they'd be much more comfortable with than the pure Q&A of our normal system--plaintiff, defense, and so on. I think there are some areas to explore there, and I think, until we somehow are able to bring these people into the litigation process, we're going to be missing the boat.

Participant: I guess I'd just say something, not really so much in response to what Judge Pointer just said, but maybe to suggest a possible line of inquiry you could pursue next time based on that. I don't know how often scientists fear this about being involved in the law, but if you do give testimony on some controversial issue, that might not be the end of it. People do have their data subpoenaed sometimes, and I think that, for people in a controversial area, being aware of that kind of thing happening might be a real deterrent--that just going in and giving one-shot testimony on something might not be the limit of the scope of your involvement. I'm not suggesting that there should be any absolute way to protect somebody from subpoena. But some assurance that there would be reasonable limits on the degree to which your work would be exposed to intensive intrusion might help the participation of scientists in the legal process in some respects. And I think it's an issue that many of you would be familiar with. To some degree, it's become critical for science with the recent Office of Management and Budget rule about Freedom of Information Act requests.
That's not what we're talking about here, and it's not relevant to criminal procedure, but awareness of that rule has pervaded the scientific community quite rapidly, and I wouldn't be surprised if the concern--you know, "once my work is public and seems relevant to the legal system, there may be no stopping this"--could, if anything, decrease participation even more.

Mr. David G. Boyd: Thank you. The OMB rule he refers to is a rule which makes available, for Freedom of Information Act requests, the data of researchers who have conducted projects with Federal money--who have been given grants or contracts or whatever to conduct that work. And there is considerable consternation in the research world over that issue; it's a big one. I don't know what the final outcome will be, but there are any number of responses being made to OMB on that issue. Anything else? Please.

Mr. Sheldon L. Trubatch: [Inaudible.] I'm Sheldon Trubatch. I'm a lawyer. I used to be a physics professor. With regard to the relationship between scientists and lawyers, I can speak from my personal experience (anecdotal, I'm sorry). It took me a long time to make the mental shift of gears from being a scientist, where you believe that there is an absolute truth out there in the universe and that you are slowly gaining on it or coming closer to it, to becoming a lawyer and realizing that there are only arguments. You can take the same law and the same facts and reach two diametrically opposite conclusions with a straight face, and that is something which is really anathema to scientists, who haven't had to go through that in their professional lives. As far as questions for the next time, there is a vast body of experience at the administrative agency level--many technical administrative agencies, some of which have adjudicatory processes--which hasn't been adverted to at all here. That raises the question, when those agency decisions get reviewed at the court of appeals level, of how the court of appeals prepares itself to deal with those issues. It's true, the courts are deferential, but they still have to make sure that the record is coherent. So, that would be, I think, another area that should be explored.

Mr. David G. Boyd: Please.

Dr. Joseph L. Peterson: I'm Joe Peterson of the University of Illinois at Chicago. I have three comments or suggestions. The deficiencies in the education both of lawyers and of scientists have been brought up time and again. I think that we should try to get other educators involved in forensic science at the conference, as well as students. We have to worry about the next generation of lawyers and scientists and try to begin to prepare them and to better educate them, and perhaps look at alternative models for preparing future forensic scientists and lawyers. Secondly, I would encourage more papers examining the process of justice--the process in which science gets used or misused or not used. A number of you have been involved in that, but I think that we should encourage more research on it. I'm involved with a study that's going on at the University of Maryland, looking at how DNA evidence has been used in different jurisdictions in terms of case processing and case outcome and so forth. So, I would encourage more dialogue and involvement concerning the process in which science gets used and misused. The last point I have is on these different organizational models that we've talked about.
We know that there are problems inherent in laboratories being a part of police organizations. This morning, Paul Giannelli said he would place his money on the crime labs--on improving them--and that that's where we're going to get the greatest benefit. I've often thought that maybe putting more resources into the defense side--into public defenders' offices and so forth, better educating them--to force the prosecution and the police laboratories to come up to an acceptable standard is the better way. So, I think there might be some additional thought, and perhaps papers solicited, on these different organizational models and ways of delivering the science. Thank you.

Mr. David G. Boyd: Please.

Judge Sherrie L. Krauser: I'm Sherrie Krauser. I'm a judge in Maryland. One concern I had was that, at the very beginning of the conference, it looked like we were going to talk about a lot of the developments that are going on in science, and as you pointed out, most lawyers don't know a whole lot about science unless it gets presented in our courtrooms--beyond, at least, what we can remember from those requirements back in college. I know there's way too much going on in science, in the broadest sense, all around the country, but I think it would be helpful in a future conference to at least touch on things that are likely to result in developments that might see their way into a courtroom in the next couple of years. It would give us the tip of the iceberg, just a heads up on what ramifications there may be to some of the applications that we're going to be looking at in terms of the Daubert and Kumho Tire problems. I guess my concern is that, whenever you render a decision that has allowed certain kinds of evidence to come in, you've set a precedent, whether it's just for your trial court or whether that decision is affirmed on appeal. The language used by the courts in accepting or rejecting certain kinds of evidence--which obviously reflects a very different approach than scientists take--often has unforeseen consequences, and it would be helpful, I think, to have some idea of what those consequences might be in terms of what's coming up right behind.

Mr. David G. Boyd: We had someone back here.

Participant: Consistent with some of the suggestions I've made (I hope consistent with some of the suggestions I've made), if the goal, ultimately, is to put the forensic sciences on a more empirical footing and to develop means for validating forensic techniques, then perhaps it would be appropriate to have papers delivered on scientific methodologies for validation, and also papers on the application of those methodologies to particular types of evidence, to see where we stand. I would not suggest that a whole conference be devoted to that--that we turn it completely into a scientific conference--but there ought to be a more scientific element like that, and, consistent with the sentiment expressed by many that the lawyers have to learn this stuff, it wouldn't hurt them to sit through some of those papers.

Mr. David G. Boyd: I have to sit through some of those periodically when we do our updates every year at the American Academy of Forensic Sciences. I have to tell you, some of those sciences are really esoteric. Margaret?

Dr. Margaret Berger: Margaret Berger, Brooklyn Law School.
Actually, my suggestion, I guess, is a subset of Bert Black's. It seems to me that one of the themes that has been running through these 2 days--not explicitly stated, but it comes out of Shari Diamond's work with jurors and out of Bert Black's presentation of the Stifel case--is that we have a great deal of statistical illiteracy on the part of jurors and on the part of lawyers, which I think becomes much more significant in terms of the application of science in the courtroom. That is a very difficult thing to cure in 5 minutes, but I wonder if one could think about mechanisms, including perhaps changes to the legal education curriculum, and other ways in which one could somehow deal with these kinds of issues effectively, for judges as well as for lawyers.

Mr. David G. Boyd: We had one here in the back. There you go.

Dr. Patricia J. McFeeley: I'm Patty McFeeley. I'm from New Mexico. I'm a medical examiner, but I'm also a tenured faculty member in the department of pathology in our medical school, and so I'm used to straddling some of these areas that Judge Pointer was talking about, the forensic versus the nonforensic. I think the consensus has been--and I really agree--that we do need to do the research in the forensic sciences, but we need to do that by putting it into the traditional academic or scientific areas. The way you do that, and avoid some of the reluctance that has been talked about, is to put it into an area where it becomes a positive feature for academic people. Although Dr. Pollard said that academics work pro bono when they're on these research review committees, that's not entirely true, because those committees are very prestigious, and those are the kinds of activities that, in an academic department, give you promotion and give you tenure. Those activities are very valuable, and we need to put forensic science and forensic research--whether that includes testifying, being a friend of the court, or otherwise participating--into an area where it is valued in the scientific community. I think that could be done by putting the research into some of the traditional scientific areas and having them help us validate what we're doing and make it more of the pure science that they were talking about.

Mr. Curt Lee Owen: I'm Curt Owen. I'm a public defender in San Diego. At one point in the conference I took considerable offense at one of the things said by a scientist, but it made me think. In essence, what that particular scientist said reflected a notion that is totally in error: that attorneys go out looking for an expert, hold up a bundle of cash, and say, "All right, this is what I want you to say; who's willing to say it?" That is not, in my experience, what any attorney does. But it did make me realize that there is a tremendous difference between the ethics under which an attorney operates to do an attorney's job right and the ethics under which a scientist operates to do a scientist's job right, and I'd like to see a little more exploration of just how those two different sets of ethics clash, which is exactly what happens in a courtroom.

*****

Mr. David G. Boyd: Okay. It looks like we've exhausted most of the key issues. I want to thank all of you for your participation; it's been very useful. Thank you.