Title: Evaluation of a Test Protocol for Explosives Trace Detectors Using a Representative Commercial Analyzer (NIJ Report 100-99)
Series: Law Enforcement and Corrections Standards and Testing Program
Author(s): Gary A. Eiceman, Cecily M. Boyett, John E. Parmeter
Published: National Institute of Justice, September 1999
Subject: Technology in law enforcement
33 pages
70,000 bytes

-------------------------------

Figures, charts, forms, and tables are not included in this ASCII plain-text file. To view this document in its entirety, download the Adobe Acrobat graphic file available from this Web site or order a print copy from NCJRS at 800-851-3420 (877-712-9279 for TTY users).

-------------------------------

U.S. Department of Justice
Office of Justice Programs
National Institute of Justice

National Institute of Justice
Law Enforcement and Corrections Standards and Testing Program

Evaluation of a Test Protocol for Explosives Trace Detectors Using a Representative Commercial Analyzer

-------------------------------

U.S. Department of Justice
Office of Justice Programs
810 Seventh Street N.W.
Washington, DC 20531

Janet Reno, Attorney General
Raymond C. Fisher, Associate Attorney General
Laurie Robinson, Assistant Attorney General
Noel Brennan, Deputy Assistant Attorney General
Jeremy Travis, Director, National Institute of Justice

Office of Justice Programs World Wide Web Site: http://www.ojp.usdoj.gov
National Institute of Justice World Wide Web Site: http://www.ojp.usdoj.gov/nij

-------------------------------

ABOUT THE LAW ENFORCEMENT AND CORRECTIONS STANDARDS AND TESTING PROGRAM

The Law Enforcement and Corrections Standards and Testing Program is sponsored by the Office of Science and Technology of the National Institute of Justice (NIJ), U.S. Department of Justice. The program responds to the mandate of the Justice System Improvement Act of 1979, which created NIJ and directed it to encourage research and development to improve the criminal justice system and to disseminate the results to Federal, State, and local agencies.

The Law Enforcement and Corrections Standards and Testing Program is an applied research effort that determines the technological needs of justice system agencies, sets minimum performance standards for specific devices, tests commercially available equipment against those standards, and disseminates the standards and the test results to criminal justice agencies nationally and internationally. The program operates through:

The Law Enforcement and Corrections Technology Advisory Council (LECTAC), consisting of nationally recognized criminal justice practitioners from Federal, State, and local agencies, which assesses technological needs and sets priorities for research programs and items to be evaluated and tested.

The Office of Law Enforcement Standards (OLES) at the National Institute of Standards and Technology, which develops voluntary national performance standards for compliance testing to ensure that individual items of equipment are suitable for use by criminal justice agencies. The standards are based upon laboratory testing and evaluation of representative samples of each item of equipment to determine the key attributes, develop test methods, and establish minimum performance requirements for each essential attribute. In addition to the highly technical standards, OLES also produces technical reports and user guidelines that explain in nontechnical terms the capabilities of available equipment.
The National Law Enforcement and Corrections Technology Center (NLECTC), operated by a grantee, which supervises a national compliance testing program conducted by independent laboratories. The standards developed by OLES serve as performance benchmarks against which commercial equipment is measured. The facilities, personnel, and testing capabilities of the independent laboratories are evaluated by OLES prior to testing each item of equipment, and OLES helps the NLECTC staff review and analyze data. Test results are published in Equipment Performance Reports designed to help justice system procurement officials make informed purchasing decisions.

Publications are available at no charge through the National Law Enforcement and Corrections Technology Center. Some documents are also available online through the Internet/World Wide Web. To request a document or additional information, call 800-248-2742 or 301-519-5060, or write:

National Law Enforcement and Corrections Technology Center
P.O. Box 1160
Rockville, MD 20849-1160
E-Mail: asknlectc@nlectc.org
World Wide Web address: http://www.nlectc.org

-------------------------------

The National Institute of Justice is a component of the Office of Justice Programs, which also includes the Bureau of Justice Assistance, the Bureau of Justice Statistics, the Office of Juvenile Justice and Delinquency Prevention, and the Office for Victims of Crime.

-------------------------------

U.S. Department of Justice
Office of Justice Programs
National Institute of Justice

Evaluation of a Test Protocol for Explosives Trace Detectors Using a Representative Commercial Analyzer

NIJ Report 100-99

Gary A. Eiceman
Cecily M. Boyett
Department of Chemistry and Biochemistry
New Mexico State University
Las Cruces, NM 88003

and

John E. Parmeter
Sandia National Laboratories
Albuquerque, NM 87185-0782

Coordination by
Office of Law Enforcement Standards
National Institute of Standards and Technology
Washington, DC 20899-8102

Prepared for
National Institute of Justice
Office of Science and Technology
Washington, DC 20531

September 1999
NCJ 178261

-------------------------------

National Institute of Justice
Jeremy Travis
Director

The technical effort to develop this report was conducted under Interagency Agreement 94-IJ-R-004, Project No. 97-028-CTT. This report was prepared by the Office of Law Enforcement Standards (OLES) of the National Institute of Standards and Technology (NIST) under the direction of Alim A. Fatah, Program Manager for Chemical Systems and Materials, and Kathleen M. Higgins, Director of OLES. The work described in this report was sponsored by the National Institute of Justice (NIJ), David G. Boyd, Director, Office of Science and Technology.

-------------------------------

FOREWORD

The Office of Law Enforcement Standards (OLES) of the National Institute of Standards and Technology (NIST) furnishes technical support to the National Institute of Justice (NIJ) program to strengthen law enforcement and criminal justice in the United States. OLES's function is to conduct research that will assist law enforcement and criminal justice agencies in the selection and procurement of quality equipment. OLES is: (1) subjecting existing equipment to laboratory testing and evaluation, and (2) conducting research leading to the development of several series of documents, including national standards, user guides, and technical reports. This document covers research conducted by OLES under the sponsorship of the National Institute of Justice.
Additional reports as well as other documents are being issued under the OLES program in the areas of protective clothing and equipment, communications systems, emergency equipment, investigative aids, security systems, vehicles, weapons, and analytical techniques and standard reference materials used by the forensic community.

Technical comments and suggestions concerning this report are invited from all interested parties. They may be addressed to the Office of Law Enforcement Standards, National Institute of Standards and Technology, 100 Bureau Drive, Stop 8102, Gaithersburg, MD 20899-8102.

David G. Boyd, Director
Office of Science and Technology
National Institute of Justice

-------------------------------

CONTENTS

FOREWORD
1. OBJECTIVES
2. INTRODUCTION
3. EXPERIMENTAL METHODS
4. GENERAL RESULTS AND DISCUSSION
5. RESULTS FROM THE TEST PROTOCOL
6. CONCLUSIONS
APPENDIX

TABLES
Table 1. Average peak areas, standard deviations, and percent standard deviations
Table 2. Detection limits, probabilities of detection, false-positive rates, and false-negative rates for TNT, RDX, and PETN

FIGURES
Figure 1. Spectrum from the Itemiser using 5 ng of TNT
Figure 2. Spectrum from the Itemiser using 10 ng of TNT
Figure 3. Spectrum from the Itemiser using 20 ng of TNT
Figure 4. Spectrum from the Itemiser using 9000 uL of acetone
Figure 5. Mobility spectra taken in replicate illustrating the variation in baseline
Figure 6. Spectrum from the Itemiser using 500 ng of 3-methyl-4-nitrophenol (false-positive interference test for TNT)
Figure 7. Spectrum from the Itemiser using 500 ng of TNT with 500 ng of 2,4-dinitrophenol (false-negative interference test for TNT)
Figure 8. Spectrum from the Itemiser using 500 ng of 2-methyl-3-nitrophenol (false-positive interference test for RDX)
Figure 9. Spectrum from the Itemiser using 6000 ng of RDX with 6000 ng of 2,4-dinitrophenol (false-negative interference test for RDX)
Figure 10. Spectrum from the Itemiser using 500 ng of musk ambrette (false-positive interference test for PETN)
Figure 11. Spectrum from the Itemiser using 500 ng of PETN with 50 ng of 2,4-dinitrophenol (false-negative interference test for PETN)
Figure 12. Mobility spectrum for a thiol in positive polarity at 200 degrees C

-------------------------------

COMMONLY USED SYMBOLS AND ABBREVIATIONS

A ampere
ac alternating current
AM amplitude modulation
cd candela
cm centimeter
CP chemically pure
c/s cycle per second
d day
dB decibel
dc direct current
°C degree Celsius
°F degree Fahrenheit
dia diameter
emf electromotive force
eq equation
F farad
fc footcandle
fig. figure
ft foot
ft/s foot per second
g acceleration
g gram
gr grain
H henry
h hour
hf high frequency
Hz hertz
i.d. inside diameter
in inch
IR infrared
J joule
L lambert
L liter
lb pound
lbf pound-force
lbf-in pound-force inch
lm lumen
ln logarithm (base e)
log logarithm (base 10)
M molar
m meter
min minute
mm millimeter
mph miles per hour
m/s meter per second
N newton
N-m newton meter
nm nanometer
No. number
o.d. outside diameter
Ω ohm
p. page
Pa pascal
pe probable error
pp. pages
ppm parts per million
qt quart
rad radian
rf radio frequency
rh relative humidity
s second
SD standard deviation
sec. section
SWR standing wave ratio
uhf ultrahigh frequency
UV ultraviolet
V volt
vhf very high frequency
W watt
λ wavelength
wt weight

area = unit[2] (e.g., ft[2], in[2], etc.); volume = unit[3] (e.g., ft[3], m[3], etc.)
PREFIXES

d deci (10[-1])      da deka (10)
c centi (10[-2])     h hecto (10[2])
m milli (10[-3])     k kilo (10[3])
u micro (10[-6])     M mega (10[6])
n nano (10[-9])      G giga (10[9])
p pico (10[-12])     T tera (10[12])

COMMON CONVERSIONS (See ASTM E380)

0.30480 m = 1 ft
2.54 cm = 1 in
0.4535924 kg = 1 lb
0.06479891 g = 1 gr
0.9463529 L = 1 qt
3600000 J = 1 kW-hr
4.448222 N = 1 lbf
1.355818 J = 1 ft-lbf
0.1129848 N-m = 1 lbf-in
14.59390 N/m = 1 lbf/ft
6894.757 Pa = 1 lbf/in[2]
1.609344 km/h = 1 mph

Temperature: T[°C] = (T[°F] - 32) x 5/9
Temperature: T[°F] = (T[°C] x 9/5) + 32

-------------------------------

NIJ Report 100-99

EVALUATION OF A TEST PROTOCOL FOR EXPLOSIVES TRACE DETECTORS USING A REPRESENTATIVE COMMERCIAL ANALYZER

G.A. Eiceman,[1] C. Boyett,[1] J.E. Parmeter[2]

A Test Protocol for evaluating trace detectors, crafted at Sandia National Laboratories, has been reviewed and evaluated using a representative commercial analyzer. The Test Protocol was included as part of NIJ Guide 100-99, "A Guide for Selection of Commercial Explosives Detection Systems for Law Enforcement Applications." The objective of the present report was to evaluate the Test Protocol and provide a model test experience. The Test Protocol in its final form was found to be suitable for use by nonspecialists in trace contraband detection, and it provides a uniform standard test procedure for the evaluation and comparison of trace detectors. In this report, details from the application of this Test Protocol are described and discussed. In addition, background reference material relevant to ion mobility spectrometry (IMS) is provided.

The Test Protocol was applied to a single instrument, the Itemiser [registered trademark][3] from Ion Track Instruments (ITI), Inc., and was found to be directly applicable to this analyzer. There is every reason to expect direct application to analyzers from other manufacturers. However, the analyzer was received following a prolonged period of storage and nonuse. Consequently, the trace analyzer required refurbishment and restoration to manufacturer's specification. The poor performance of a trace detector owing to secondary causes rather than instrument design was recognized quickly, since the work was completed in a laboratory specializing in IMS. In other circumstances, losses in time and productivity might have occurred before the poor performance was attributed to these causes. Thus, all trace detectors should be brought to a standard of performance specified by the manufacturer. While this should not be a problem with newly purchased instruments, analyzers that come to a user from uncertain conditions of use or storage should be critically and rapidly assessed to ensure performance as designed and delivered originally. Specific comments on the structure and format of the Test Protocol are summarized in the conclusion section and are incorporated into the present version of the Test Protocol.

The authors would like to give special thanks to Dr. Alim A. Fatah, Program Manager for Chemical Systems and Materials, for programmatic support and for valuable comments on an earlier draft of this document.
1. OBJECTIVES

The objectives for the efforts described in this report were the following: (a) to employ, in a laboratory setting with a nonspecialist, the Test Protocol contained in NIJ Guide 100-99 for evaluating trace explosives detectors; (b) to clarify where the Test Protocol may have been unrealistic or unclear; (c) to modify the Test Protocol with corrections or deletions as suggested by the laboratory experiences; and (d) to describe or document the actual findings from the laboratory testing with one trace detector. The intention was to discover flaws or pragmatic barriers to application of the Test Protocol, both in the written descriptions of principles and in the procedures provided to evaluate trace detectors. All tests were made by a nonspecialist with an undergraduate education in chemistry. A particular interest was to determine if the proposed methods were realistic, clearly presented, and technically sound.

2. INTRODUCTION

2.1 Background

A Test Protocol for evaluating trace chemical detectors was crafted at Sandia National Laboratories in FY-98 and included in NIJ Guide 100-99. It was aimed at those in law enforcement nationwide who have an interest in drug or explosives detection but lack the training or background of specialists. In the Test Protocol, the important and vital steps of operating a trace chemical analyzer, preparing samples, maintaining records of measurements, and more, are given with special attention to interpretation of the results. As such, an appropriate document would be understandable by nonspecialists and sufficiently complete to withstand the scrutiny of technical criticism by specialists in trace chemical detection. In the Test Protocol, the steps to assess various performance parameters are described in detail and, when completed using authentic samples, should allow two or more analyzers to be compared reliably. Ideally, this could be accomplished at the city, county, or State level of local law enforcement agencies without the assistance of specialists. However, the Test Protocol needs to be free of technical jargon and sufficiently complete to allow a nonspecialist to perform an evaluation and obtain meaningful results. A field trial by a nonspecialist under the supervision of a specialist was undertaken to investigate these issues and to bring modifications as needed to the Test Protocol.

The Test Protocol, though applied to a single detector, was written to be as generic as possible. Thus, a single successful application would suggest a well-written Test Protocol. A measurement of explosives or drugs at trace levels is often ruined by faulty sample preparation, storage, and handling rather than by instrument calibration or instrument failures. Moreover, most trace detectors share common mechanical and electrical designs, particularly within a given class of analyzers such as ion mobility spectrometry (IMS). Thus, a single evaluation was deemed a suitable and essential beginning for assessing strengths and correcting weaknesses of the Test Protocol.

3. EXPERIMENTAL METHODS[4]

3.1 Instrumentation

The IMS analyzer used in this study was the Itemiser (Ion Track Instruments, Wilmington, MA). It was provided and used without modifications. Details of the drift tube design and internal construction are proprietary. The system weighs 22.26 kg (49 lb) and occupies a space of 36.83 cm x 54.61 cm x 46.99 cm (14.5 in x 21.5 in x 18.5 in).
A thermal desorption anvil was attached to the front end of the instrument, and this allowed swipe samples to be volatilized into the system. The air flow provided to the IMS drift tube was scrubbed air, which required clean molecular sieves. A dopant gas was used to control ion chemistry inside the drift tube ion source, and this required recharging on a semiannual basis.

3.2 Procedures

Chemicals and solutions: Most trace detectors will show a positive response to amounts of explosive material on the order of a few nanograms or less (1 ng = 10[-9] g, or one billionth of 1 g). A common method of handling microscopic amounts of explosive material is to dissolve the explosive in a volatile organic solvent. For example, a typical solution might be TNT dissolved in methanol at a concentration of 1 ng of TNT per microliter of methanol. A known amount of explosive can be obtained from such a solution by withdrawing a given volume of solution (typically 1 uL from a syringe) and then depositing this volume onto a sampling pad. The volatile solvent will evaporate very rapidly at room temperature, and the explosive material will remain on the sampling pad. This pad containing a measured amount of explosive can then be used to challenge the detection system.

With most commercial trace explosives detection systems, there are two common means of collecting samples: swipe collection and vapor collection. In swipe collection, a sampling pad (usually supplied by the manufacturer) is wiped across a surface suspected of having residues of explosive material. The sampling pad is then inserted into a sampling port on the instrument for analysis. In contrast, vapor collection involves the use of a small hand-held vacuum cleaner to collect vapors or particles in air or in the space of a package or container. A collection filter or pad is located on the inlet of the vacuum cleaner, and air is drawn through the sampling pad. Explosive material, if present in the air in the form of either vapor or particles, will be trapped on the sampling pad. Typically, one vacuums just above the surface to be investigated. The pad is then removed from the vacuum cleaner and analyzed by the system in a manner similar to the analysis of a swipe sample.

Stock solutions were prepared for use on the Itemiser following the above procedure. The stock solutions were prepared for TNT, RDX, and PETN with concentrations of 100 mg/mL, 50 mg/mL, and 0.100 mg/mL, respectively, using acetone as the solvent. Dilute solutions with concentrations of 1 ng/uL were made daily for each explosive using the stock solutions. The stock solutions remained in a freezer when not in use.
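As an arithmetic illustration of the daily dilutions described above, the short Python sketch below computes how much stock solution is needed to prepare a 1 ng/uL working solution. It is offered only as a worked example: the chosen final volume of 1000 uL and the function name are illustrative assumptions, not part of the Test Protocol.

    # Serial-dilution arithmetic for the daily 1 ng/uL working solutions.
    # Stock concentrations are those quoted in the text; the 1000 uL final
    # volume is an illustrative assumption.

    def stock_volume_ul(stock_mg_per_ml, target_ng_per_ul, final_volume_ul):
        """Volume of stock (uL) to dilute to final_volume_ul of working solution."""
        stock_ng_per_ul = stock_mg_per_ml * 1000.0  # 1 mg/mL = 1000 ng/uL
        return target_ng_per_ul * final_volume_ul / stock_ng_per_ul

    for name, stock in [("TNT", 100.0), ("RDX", 50.0), ("PETN", 0.100)]:
        v = stock_volume_ul(stock, target_ng_per_ul=1.0, final_volume_ul=1000.0)
        print(f"{name}: dilute {v:.3g} uL of stock to 1000 uL with acetone")

Note that for the concentrated TNT and RDX stocks, the computed volumes (0.01 uL and 0.02 uL) are far too small to dispense directly, which is one reason an intermediate dilution step would be used in practice.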
3.3 Comments on Sampling

When presenting any chemical detection system with a sample, it is imperative to guarantee that the system is clean. If the system is contaminated, an apparent detection may result from the contamination that was already present in the system and not from the sample being analyzed. Clearly, this could lead to false-positive results. For this reason, trace detectors must be certified as clean after a true positive response or alarm shows the presence of explosive material. In addition, the system must be verified to be clean before analyzing the first sample, and this is done by challenging the system with clean (blank) samples that are known not to contain explosive material.

Thus, at the start of each period of use of the instrument, and after the processing of any sample that results in an explosives alarm, challenge the system with such a clean sample and see whether or not an alarm is recorded. If no alarm is recorded, one can proceed to process additional samples. If an alarm is recorded, it almost surely results from contamination of the system, and one needs to keep challenging the system with blanks until no alarm is recorded. In extreme cases of exposure to very large explosive masses, it might be necessary to let the system sit for several hours with the detector at a high temperature in order to purge the system of all the explosive contamination.

3.4 Procedures as Abstracted from the Test Protocol

3.4.1 Probability of Detection - [p(d)]

The probability of detection [p(d)] for a given mass of a certain explosive should be determined for, or defined by, each type of sample collection. The procedure outlined below concerns the most common method for explosives screening, where a known amount of explosive from a standard solution is deposited directly onto a sampling pad.

1. See that the system is turned on, calibrated, and that the alarm level is properly set.
2. Using a syringe, place a known amount of explosive in a solution of a volatile solvent onto the center of a sampling pad (e.g., 1 ng of TNT in 1 uL of acetone). Use the type of sampling pad recommended by the manufacturer for that instrument.
3. Wait 1 min for the solvent to evaporate.
4. Present the sampling pad to the instrument as appropriate for that system.
5. Observe the system response and record whether or not an alarm occurs.
6. Present the system with clean pads (the blank) and observe whether or not an alarm is recorded for the explosive in question. If three consecutive clean pads produce no alarm, the system can be assumed to be clean and can again be challenged with a pad containing explosive material.
7. Repeat this overall procedure for a total of 20 measurements. The [p(d)] is then [p(d)] = [number of alarms recorded/20].

3.4.2 Detection Limit (DL)

The detection limit (DL) is defined here as the lowest mass of explosive material with a [p(d)] of 0.9 or higher. Note that this is dependent upon where the alarm level for a particular system is set, and is a pragmatic (not rigorous) definition of the DL. A general procedure is outlined below, again using the example of known explosive masses deposited directly onto a sampling pad of the appropriate type.

1. See that the system is turned on and properly calibrated, and that the alarm level is set at the appropriate level.
2. Challenge the system with a blank sample pad (no explosive) to verify that it is clean. If no alarm is recorded, proceed to the next step. If an alarm is recorded, repeat this step until no alarm is recorded. To save pads, this step may be repeated with the same pad initially if desired, but always challenge the system with at least one new clean pad and obtain a result of "no alarm" before moving on to the next step.
3. Challenge the system with a sample mass that is suspected of being well above (on the order of 10x) the DL. If an alarm is recorded, move on to step (4). If no alarm is recorded, the mass chosen was too small. After running a blank sample to verify system cleanliness, double the mass, challenge the system again, and see if an alarm is recorded. Repeat this procedure until an alarm is recorded.
4. Perform a [p(d)] test for the mass where the alarm is recorded, as described above in the section on Probability of Detection. If [p(d)] for this mass is 90 percent or greater, this mass can serve as the starting point for the test performed in step (6). If [p(d)] is less than 90 percent, double the mass and again perform a [p(d)] test.
5. Repeat this procedure until a mass is found that is above the DL, i.e., that has a [p(d)] of at least 90 percent.
6. Once such a point above the DL has been found, perform a series of tests where the system is challenged with masses ranging from no explosive to the mass determined in steps (4) and (5), in 10 equal increments. For example, if it was determined that a mass of 2 ng was above the DL, the masses tested should be (0, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6, 1.8, and 2.0) ng. Start with the lowest mass (no explosive) and work upwards, performing two tests for each mass and recording whether or not alarms occur. Make sure that a clean blank sample is run, and that no alarm is recorded, following each challenge with explosive material.
7. Find from the test in step (6) the lowest mass for which two alarms were recorded, and run a [p(d)] test with 20 challenges at this mass. This mass should be very close to the DL. If [p(d)] for this mass is found to be less than 90 percent, perform a [p(d)] test at the next higher mass, and keep moving up in mass until a mass is tested where [p(d)] is at least 90 percent. If, on the other hand, [p(d)] is greater than or equal to 90 percent for the mass chosen initially, move down to the next lower mass and perform a [p(d)] test, and continue to move down until a mass is reached with [p(d)] less than 90 percent. Once this procedure is complete, the lowest mass tested with [p(d)] of at least 90 percent can be taken as the DL.
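The bookkeeping in the [p(d)] and DL procedures above can be summarized in a brief sketch. The Python fragment below is a minimal illustration, assuming a hypothetical challenge(mass_ng) function that deposits the stated mass on a pad, presents the pad, clears the system with blanks, and returns True when an alarm is recorded; it is not part of the Test Protocol.

    import random

    def p_of_d(challenge, mass_ng, n=20):
        """Probability of detection from n challenges (sec. 3.4.1, step 7)."""
        return sum(challenge(mass_ng) for _ in range(n)) / n

    def detection_limit(challenge, start_mass_ng):
        # Steps 3-5: double the mass until an alarm occurs and p(d) >= 0.9.
        mass = start_mass_ng
        while not challenge(mass) or p_of_d(challenge, mass) < 0.9:
            mass *= 2
        # Step 6: masses from zero up to the passing mass in 10 equal
        # increments, two challenges each; keep the lowest with two alarms.
        step = mass / 10
        candidates = [i * step for i in range(1, 11)]
        lowest = next(m for m in candidates if challenge(m) and challenge(m))
        # Step 7: bracket the DL with p(d) tests around that mass.
        idx = candidates.index(lowest)
        if p_of_d(challenge, candidates[idx]) >= 0.9:
            while idx > 0 and p_of_d(challenge, candidates[idx - 1]) >= 0.9:
                idx -= 1
        else:
            while idx < len(candidates) - 1 and p_of_d(challenge, candidates[idx]) < 0.9:
                idx += 1
        return candidates[idx]

    # Example with a simulated detector whose alarm probability rises with mass:
    simulated = lambda mass_ng: random.random() < min(1.0, mass_ng / 3.0)
    print(detection_limit(simulated, start_mass_ng=10.0))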
3.4.3 False-Negative Rate

For a given set of experimental conditions, the false-negative rate is simply one minus the [p(d)]. Note that all experimental parameters must be identical for this to hold true. If any parameter is changed, a new [p(d)] must be determined before the false-negative rate can be calculated.

3.4.4 False-Positive Rate

The following test procedure can be followed for a laboratory test of the false-positive rate. The example of clean sampling pads is used.

1. Make sure the system is turned on, calibrated, and that the alarm level is properly set.
2. Challenge the system with a clean sample pad. If no alarm is recorded, the system can be assumed to be clean and the testing can begin. If an alarm is recorded, continue to challenge the system with clean pads until no alarm is recorded three consecutive times.
3. Challenge the system with clean sample pads 20 times, and record each time whether an alarm occurs and for what explosive. If an alarm is recorded, interrupt the test by challenging the system with blank pads that are not counted toward the total of 20. When three consecutive pads produce no alarm, the system can be taken to be clean again. At this point, continue where you left off in the test sequence of 20 pads. The false-positive rate is then [number of alarms in the test sequence of twenty pads/20].

3.4.5 Nuisance-Alarm Rate

Nuisance alarms are alarms that result from the actual detection of an explosive, but where the explosive material present originates from an innocuous source rather than from a threat item. A nuisance-alarm rate can be determined in the same way as a false-positive (i.e., false-alarm) rate, but the tests will need to be performed in a real-world operating environment rather than in a laboratory.

3.4.6 Interference Tests

Interferences (or interferents) are chemicals that may interfere with the detection of explosives using a trace detection system. Since interferences may produce either false-positives or false-negatives, two different tests are outlined to check for each of these effects. The focus is again on the common sampling procedure of placing a known amount of explosive in solution onto a swipe pad.

Interference False-Positives. To check for false-positives, follow this procedure:

1. Make sure that the explosives detection system is on, properly calibrated, and that the alarm level is set as desired.
2. Challenge the system with clean sampling pads until no alarm is recorded on three consecutive challenges.
3. Place a known amount of the potential interferent onto the center of the appropriate sampling pad. Use an amount that is at least 100x the DL for an explosive that has been tested with the system. For most materials, this step will most easily be accomplished by using a solution of the material in some volatile solvent. However, for solid materials (e.g., lipstick), this test could be performed crudely by simply smearing the material across the sampling pad. Note that this test could be made quantitative by weighing the sampling pad before and after application of the test substance, if a sufficiently sensitive microbalance is available.
4. Challenge the system with the sampling pad containing the material being tested for interference. If no alarm is recorded for explosives, the material is not an interference that is capable of producing false-positives.
5. Repeat step 4 with at least two more sample pads containing the material, to make sure the result is correct. Before each test, challenge the system with a clean sampling pad to assure that it is not contaminated.

Interference False-Negatives. To check whether or not a potential interference can create a false-negative (that is, mask the presence of an explosive), use the following procedure:

1. See that the system is on, properly calibrated, and that the alarm level is set as desired.
2. Challenge the system with a clean sampling pad to assure that it is not contaminated.
3. Take a clean sampling pad. Deposit onto the center of this pad a known amount of the explosive of interest that is approximately twice the DL. Also deposit onto the pad a known amount of the potential interferent to be studied (preferably the same mass as for the explosive).
4. Challenge the system with this pad and see if an explosive alarm is recorded. If no alarm is recorded, the substance studied is an effective interference.
5. If no alarm was recorded in step 4, clear out the system, repeat the test with half as much interferent, and continue this process until the explosive can again be detected. This gives an idea of how much interferent is required to mask the presence of the explosive.
6. If an alarm was recorded in step 4, keep repeating the test while doubling the amount of interferent used, until either the explosive can no longer be detected or the experiment becomes impractical. If even then the explosive can be detected, the material present is not an effective interference.
Note that this procedure really investigates only gas phase interference and does not investigate the separate (but important) issue of matrix effects that may occur when an explosive is mixed with another material in solid form. Such effects can significantly reduce vapor pressures from those of the pure compound. For example, the vapor pressure of RDX emitted from C-4 is expected to be less than that emitted by pure RDX, because the plasticizing agents in the solid C-4 lead to less direct sublimation.

3.4.7 Throughput Rate

The throughput rate is the number of distinct samples that can be processed in a given period of time. It is usually expressed in units such as samples per minute, samples per hour, and so forth. It needs to be determined by challenging the system with a large number of samples over a considerable period of time, in order to obtain an accurate average value. In general, the throughput rate can be determined by following this procedure:

1. Collect a large number of the appropriate sampling media (e.g., swipe pads).
2. See that the detection system is turned on, calibrated, and has the alarm level properly set.
3. Challenge the system with a clean sampling pad to make sure that it gives no alarm.
4. Start timing with a stopwatch. Follow the steps listed below for 5 min or for the time it takes to process 10 samples, whichever is greater. Use only clean (blank) samples in this test, to make sure that no time is lost due to detection system clear-down time. The experiment will then give a measure of the optimum throughput rate for the sampling process chosen.
5. Obtain a sample according to the chosen procedure (e.g., swipe a clean tabletop), challenge the detector with it, and record the result (alarm or no alarm).
6. Repeat step 5 until 5 min are up or 10 samples have been processed. Work deliberately and continuously, but do not rush.
7. Stop the watch and record the total elapsed time. The throughput rate is the number of samples processed divided by this total time. For example, if 10 samples are processed in 5 min, the throughput rate is 2 samples per minute. If you did not analyze 10 samples within the 5-min time span, then the throughput is (no. samples)/5 min.

3.4.8 Sampling Time

The sampling time is the time needed to acquire a sample and to present it to the explosives detection system for analysis. Like the throughput rate, it can only be determined accurately by averaging over a large number of measurements. Since, by definition, the sampling time plus the analysis time equals the total processing time, the sampling time is perhaps best determined by measuring the other two quantities (see below) and solving the equation below for the sampling time. Obviously, the sampling time will vary depending on the method of sampling chosen.

Total processing time = Sampling time + Analysis time

3.4.9 Analysis Time

The analysis time is the time required for the system to analyze a sample with which it has been challenged, including the time needed to produce an alarm and the corresponding readout. For most trace detection systems, this time can be determined by continuously challenging the system with the same sample (e.g., swipe pad), either clean or containing explosive material, waiting for the system to complete the analysis between each challenge, and then dividing the total time elapsed by the number of challenges.
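To make the arithmetic of sections 3.4.7 through 3.4.9, and of the total processing time defined in the next section, concrete, the following minimal Python sketch derives the per-sample quantities from measured totals. The function name is an illustrative assumption; the example numbers are those reported for the Itemiser in section 5 of this report.

    def timing_summary(n_samples, total_s, analysis_total_s):
        analysis = analysis_total_s / n_samples    # analysis time (sec. 3.4.9)
        total = total_s / n_samples                # total processing time (sec. 3.4.10)
        sampling = total - analysis                # sampling time (sec. 3.4.8)
        throughput = 60.0 * n_samples / total_s    # samples per minute (sec. 3.4.7)
        return {"sampling_s": sampling, "analysis_s": analysis,
                "total_s": total, "throughput_per_min": throughput}

    print(timing_summary(n_samples=10, total_s=337.0, analysis_total_s=127.0))
    # -> sampling 21.0 s, analysis 12.7 s, total 33.7 s, about 1.78 samples/min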
3.4.10 Total Processing Time

The total processing time is the time needed to collect and analyze one sample with the detector in question. It is equal to the sum of the sampling time and the analysis time. It should also be equal to the reciprocal of the throughput rate. For example, if the throughput rate is five samples per minute, the total processing time should be 1/5 min per sample, or 12 s per sample.

4. GENERAL RESULTS AND DISCUSSION

4.1 Installation and First Use of the ITI Itemiser

The Itemiser arrived onsite after an indeterminate period of use or storage elsewhere. Consequently, preliminary efforts to use the instrument were frustrated by technical difficulties. These difficulties included, but were not limited to, flaws in cabling, restoration of molecular sieve packs and dopants, and a thorough application of routine maintenance. This undertaking required specialist knowledge about the performance of IMS and experience with the Itemiser. Consequently, diagnosis of flawed behavior was understood in the context of proper operating response, and eventually the instrument was returned to the manufacturer for repair. This consumed nearly 2 weeks of effort and certainly would not have been understood by nonspecialists. Thus, instruments should arrive onsite, or should be expected to be received (and accepted) onsite by nonspecialists, only if the instrument is accompanied by a logbook of maintenance and use. Alternatively, instruments that have been used or traded between agencies could be returned to manufacturers for refurbishment. Otherwise, enormous effort and eventual frustration will occur.

A second aspect of this was the tedious and expensive effort expended by both users and manufacturers in field support of an instrument that is dedicated to trace chemical detection. These instruments are not designed to be self-healing, and can fail due to gas impurities or minor electronic glitches. Had this instrument arrived at a site staffed only by technicians, the instrument might have been deemed inoperable and perhaps unworthy of further attention or effort. A set of recommendations (in bold type) regarding instrument history and guaranteed performance would be a useful addition to the Test Protocol. Naturally, when an instrument receives onsite installation by a manufacturer's field engineer, this is unnecessary. However, users may reasonably expect that trace detectors will become commodity items and will be traded or shipped between agencies, and thus an appropriate warning is essential.

The software for the Itemiser was found to be user friendly, intuitive, and comprehensive. The instrument arrived in commercial configuration with an onboard Intel 486 based computer, which permitted processing of results, storage of data on a hard drive or a floppy disk, operations in DOS, and more. On the whole, the manual was clear, thorough, and understandable by a nonspecialist, although a background in computing would be helpful. For example, the storage of spectra in subdirectories was necessary owing to limitations on housekeeping of files on the hard drive. This was not clear in either the manual or the software, and caused consternation for a nonspecialist with a weak computing background. Altogether, the time from receipt of this used instrument to routine operation with confidence required over 2 weeks of dedicated labor. Following this, the user was able to complete the entire test program within another 5 weeks. Two aspects of the Itemiser required patience and special attention.
One was the warmup period of 1 h, which presented only time management concerns. More critical was the need to present a calibration sample to the instrument before the software would allow passage to the next level of operation. Without the presentation of a commercially available standard sample, to which the instrument made a favorable match, the screen was locked in a startup menu and passage was blocked. This is a useful precaution or protection against further use of an instrument in poor condition. A singular difficulty was the lack of provision of calibration papers for PETN.

Some effort was made to characterize the Itemiser beyond what was required by the Test Protocol. For example, raw signal was interrogated and evaluated using spectral processing software. The tests that were outside the protocol included the effect of mass on standard deviation, verification of a calibration curve, and inspection of spectra with examination of absolute voltage levels. These would not be completed by nonspecialists and required both experience and extra tools. Concentration was not found to significantly affect standard deviation, and the response was saturated between 10 ng and 20 ng for TNT. This is shown in table 1. Of course, the instrument will respond to levels of TNT above 20 ng, but the linear range of response was restricted to a span from the detection limit (3 ng) to between 10 ng and 20 ng. This is a very narrow range even for IMS. Table 1 lists the average peak areas obtained from experiments run using 5 ng, 10 ng, and 20 ng of TNT. The table also lists the standard deviations for each experiment, as well as the percent standard deviations. Each of the peaks from the experiments was processed with peak-fitting software, where the baseline was reduced to zero and peak areas were calculated.

The spectra for TNT were normative for an ion mobility spectrometer, as shown in figures 1-3 for mass amounts of 5 ng, 10 ng, and 20 ng. In addition, a blank or control spectrum is shown for the solvent (here acetone, see fig. 4). The spectra show an intense peak at a drift time of about 2.65 ms to 2.75 ms; this is the reagent ion, Cl(-), which arises from the reagent gas (dichloromethane in the Itemiser) and undergoes reactions with TNT per equations (1-3). There are several other peaks associated with the reagent gas and system blank, as shown in the figure for 9000 uL of acetone, though none of these small peaks presents a potential interference for TNT. The spectrum for 5 ng of TNT showed a distinct peak in the range of 5.90 ms to 5.96 ms. Spectra for masses of 10 ng and 20 ng are similar, and the peak for TNT, the product ion peak, is proportionally larger than in the 5 ng spectrum. These masses were used for 40 replicate experiments in order to estimate the precision of the total method, especially that of the Itemiser. The precision of the instrument at these low levels of TNT is shown in table 1 and was 23 percent to 27 percent relative standard deviation (RSD). In these determinations, the peak areas were used in order to provide precise measurements and avoid baseline errors. Further, this was accomplished with spectral deconvolution to properly integrate the peaks. This lack of dependence of the percent RSD on mass suggests that the error of sample handling was still the dominant source of error.
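For readers wishing to reproduce the precision figures of table 1, the percent RSD is computed as in this brief Python sketch; the peak-area values shown are placeholders, not data from this study.

    # Percent relative standard deviation (%RSD) of replicate peak areas,
    # as summarized in table 1. The values below are hypothetical.
    import statistics

    def percent_rsd(peak_areas):
        """%RSD = 100 * sample standard deviation / mean."""
        return 100.0 * statistics.stdev(peak_areas) / statistics.mean(peak_areas)

    peak_areas = [812.0, 1040.0, 655.0, 990.0, 871.0]  # hypothetical replicates
    print(f"{percent_rsd(peak_areas):.1f} %RSD")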
Potential users of the method of depositing sample from solution onto a filter paper substrate should be aware that excessive levels of solvent did degrade IMS performance and render the instrument response unreliable for trace detection methods. Consequently, the solvent was evaporated from the filter paper as much as possible before an analysis.

While the spectra are presented to the viewer as a clean graphic image with the spectrum nearly set to a baseline of zero, the actual raw spectra captured to disk showed a large spread in the baseline levels (see fig. 5). These raw spectra provided a glimpse into the actual stability of the analyzer and analyzer electronics. The results for the region of baseline from 13 ms to 18 ms showed a range from -25 mV to 320 mV, with an average of 195 mV and a standard deviation of 78 mV. Thus, the signal instability was 40 percent RSD, roughly twice the variation obtained for the analytical response. Consequently, the Itemiser must have had onboard signal processing that compensated for this variation.

The essential step of thermal desorption of explosives from the filter paper was found to be influenced dramatically by the type of filter paper, the history of the paper, exposure of the paper to solvents, and the shelf life of the paper. Rigorous testing was made of the effects of passivation of the paper on the detection limit. It was found that the paper would yield improved detection limits after repeated use, with gains of a few nanograms at most. However, the improved performance was offset by the demands of inventory control, oven management, and more. As a consequence, single use of filter papers and strict control over the source and shelf life of the paper were deemed necessary.

In conclusion, studies of the Itemiser using raw spectra revealed that the instrument showed mobility spectra that may be considered normal for TNT and RDX. Efforts to find the unfragmented ion M*Cl(-) or M(-) for PETN were unsuccessful for reasons not yet known. The precision was about 20 percent to 25 percent RSD, which seems high compared to reported values for other IMS analyzers (rated usually at 5 percent to 15 percent RSD), and which could be due to the thermal desorption event and the sensitivity of explosives to such thermal treatment. One surprise was the instability of the baseline, which would not be evident to a line investigator or a general user, but which suggested variability in signal processing or drift tube control that is not common. Otherwise, peak shape was sharp, ion chemistry was effective, and spectra showed good S/N values. In the next section, results from using the Test Protocol are described and discussed.

5. RESULTS FROM THE TEST PROTOCOL

5.1 Probability of Detection - [p(d)]

The first element in the Test Protocol was a section describing how to determine the [p(d)]. While this section was generally clear and straightforward to follow, it was somewhat redundant, because step 4 of the following section, dealing with the Detection Limit (DL), also required a [p(d)] test to be performed. For this reason, it might be more efficient, in terms of both time and understanding, if these two sections were combined into a single section in a future version of the Test Protocol.

5.2 Detection Limit

This section was clear and was completed using the standard software package provided commercially with the Itemiser (the alarm level was used in its default condition but can be modified, with an impact on this test).
An audible and visual alarm was used in these tests to establish the detection limit. The detection limits for TNT, RDX, and PETN were found to be 3 ng, 4,000 ng, and 2 ng, respectively. It should be noted that the PETN was detected using the nitrate fragment ion. The results for [p(d)] in the detection limit study are shown in table 2. The experience of this study showed that the detection limit based upon the [p(d)] approach was rapidly obtained for RDX and TNT. However, PETN required extra effort, since the instrument was set initially to alarm for the parent ion, which was not seen. The tests reported herein used the nitrate peak in the mobility spectrum to complete this section. Thus, work with PETN was guided by a specialist's knowledge, and would fail without extensive background or precautions.

A potentially serious difficulty with the Test Protocol is that a starting mass or concentration must be chosen by the user or tester. For trace detectors, which can be easily overloaded, the choice of this starting mass could be a time-consuming error for nonspecialists. For example, the use of milligram or even microgram amounts of TNT or PETN would render the detector inoperable for hours or perhaps longer.

It should be noted that the detection limit of 4,000 ng determined for RDX in this study may not accurately reflect the ability of the Itemiser to detect that compound. Studies conducted at Sandia National Laboratories several years ago,[5] using a different protocol, obtained similar values for the detection limits for TNT and PETN, but a much lower value (a few ng) for RDX. This raises the possibility that the limit reported here may have been obtained under IMS conditions (e.g., drift tube temperature) that were far from optimal for this compound, or that a systematic error may have occurred in making the solution that was used. Unfortunately, this discrepancy was not noted until the Itemiser had been returned from New Mexico State University to Sandia, so the issue could not be revisited in the same laboratory.

5.3 False-Negatives

False-negative values were calculated as one minus the [p(d)] values derived above. It is inferred that the false-negative value is calculated only at the detection limit and at no other amounts or concentrations. Consequently, only one false-negative value was given, namely at the detection limit where [p(d)] was >=0.9, and the false-negative value was 0 where [p(d)] was 1. Naturally, false-negatives could be obtained at amounts below the DL (i.e., where [p(d)] is <0.9), but this was discouraged in the Test Protocol and was not done in this study.

5.4 False-Positives

In this test, commercially provided filters were subjected to 20 screens, and no false-positives were found. This test demonstrated that the filters were free of interferences and that the detector contained no residual amounts of chemical. The test was described thoroughly and was completed without difficulty, as should be expected for a simple test.

5.5 Nuisance-Alarm Rate

This test cannot be performed in a laboratory setting, so it was not performed in this study.

5.6 Interference Tests

In this section, each explosive was subjected to testing at a single amount with two potential interferences each, one for a false-positive and one for a false-negative. In this exercise, the selection of false-negative and false-positive interferences was not initially provided and required expert knowledge of IMS response to explosives. These are given below for each explosive.
Once these chemicals had been selected, the procedure was straightforward and completed easily. Some difficulties arose in the deposition of interferent on the filter pad. The tester used dilute solutions of the interferents and thus used large amounts of solvent. This should be avoided in the future, and concentrated solutions of interferents should be employed. For TNT, 500 ng of 3-methyl-4-nitrophenol showed no false-positive, and 500 ng of 2,4-dinitrophenol showed no false-negative, as shown in figures 6 and 7, respectively. For RDX, 500 ng of 2-methyl-3-nitrophenol showed no false-positive, and 6,000 ng of 2,4-dinitrophenol did mask the presence of RDX, as shown in figures 8 and 9, respectively. For PETN, 500 ng of musk ambrette gave no effect, and 500 ng of 2,4-dinitrophenol did not mask PETN, as shown in figures 10 and 11, respectively.

5.7 Time-Based Measurements

These were fast, clear, and generally easy to perform.

5.8 Throughput Rate

The description of throughput rate was comprehensible and easily followed by a nonspecialist. For the Itemiser, the throughput rate was determined using a swipe method for sample collection and not a vacuum method. The rate determined for the Itemiser was 10 samples in 5 min and 37 s. This was completed in an ideal situation where distractions and other responsibilities were nil. Thus, the throughput rate reported here should be deemed a maximum value.

5.9 Sampling Time

Sampling time tests were uncomplicated, and in the laboratory environment the sampling time was found to be 21 s per sample. This was governed largely by the user and the instructions, and was not instrument specific.

5.10 Analysis Time

Analysis time was found to be 127 s per 10 samples (12.7 s per sample) and was clearly governed by the Itemiser. This included all aspects of the measurement following introduction of the sample into the Itemiser inlet.

5.11 Total Processing Time

The sum of sampling time and analysis time was calculated as 337 s per 10 samples, or 33.7 s per sample. There were no complications in this calculation, and the Test Protocol should serve nicely as a benchmark measurement of an analyzer.

6. CONCLUSIONS

As might be expected from this type of assessment, several areas were identified where caveats or warnings should be included or highlighted. These spanned subjects from instrumentation to elementary recommendations on sample storage. Allowing nonspecialists to manage the Test Protocol without such warnings or guidance could lead to disappointing results. Such disappointments may arise not from the IMS analyzers, but from lapses in the handling of trace amounts of chemicals or in the care and maintenance of trace analyzers. A singular discovery that came from the application of the Test Protocol for trace detectors was that users of trace detectors might receive analyzers under warranty from manufacturers or might obtain surplus or excess equipment indirectly, without manufacturer's support. Those who receive used equipment are handling technology that requires careful management. These devices are not yet engineered to be household appliances and require stabilized conditions of gas moisture and reagent chemical concentrations. Consequently, users of the Test Protocol should be forewarned that the first step in application of the Test Protocol is to ensure that the equipment has been brought to the original standards of the manufacturer.
Users or prospective users should also be warned that, while trace analyzers have been engineered for use in front-line screening, knowledge of the arts of sampling and sample handling is needed to operate these instruments. The learning time involved is nearly all associated with specific vocabulary and with the methodology of preparing the instrument to receive, process, and store data. Prospective users should be warned that several days may be necessary to become proficient on a given instrument.

The use of filter papers to present a sample to a thermal desorption apparatus is now standard within commercial IMS analyzers, though this work highlighted the variability, with the history and composition of the filter paper, in obtaining a clean and sensitive response. Since this subject was not part of the mandate of the program of study, no extensive investigations were undertaken. However, prospective users might be warned of this region of uncertainty.

Another aspect of the Test Protocol that is somewhat incompatible with trace detectors is the high level of solvent that is inadvertently included in a measurement when a sample is delivered to the filter paper via a syringe with solvent. However, this method is not anticipated to be especially detrimental to comparisons of instruments.

The final problem area is that of storing the samples, which are prepared in highly diluted solutions. Such solutions stored at room temperature may have shelf lives of only hours, or a day at most. Refrigeration of the samples may extend the shelf life to a day or more. Nonetheless, the best practice with dilute solutions is to prepare daily a fresh dilution from a concentrated stock solution. This too should be highlighted for nonspecialists in trace organic detection.

All of these caveats might be summarized into a page of cautions and included in the Test Protocol. Some references might exist in the open literature and should be documented. Mindful of these comments, which are directed toward prospective users who lack experience with trace chemical detection, the Test Protocol was realistic and usable by a nonspecialist in IMS analyzers, albeit one with a B.S. degree in chemistry. The Test Protocol is a reasonable attempt to bring order to the comparison of trace detector evaluations.

APPENDIX

A.1 The Itemiser by Ion Track Instruments, Inc., and Ion Mobility Spectrometry

The Itemiser is one of several commercial detectors based upon ion mobility spectrometry (IMS), a trace detection technique capable of detecting explosives and other chemicals at very low concentrations, with high speed, and in the presence of other chemicals. Other IMS instruments include (but are not limited to) the Ionscan (Barringer Research, Inc.), the Plastec (Graseby Ionics, Ltd.), and the Orion (IDS, Inc.). The principles are largely the same among these analyzers, though the engineering and application of the principles may vary. Thus, the general description of IMS given in this section is applicable to all of these analyzers and to others that are based upon IMS.

Ion mobility spectrometry is based upon two principles: (1) the ionization of sample molecules through gas phase chemical reactions by the transfer of an electron or a proton, and (2) the characterization of these ions based upon gas phase mobility in a weak electric field. The mass, size, and shape of these ions govern the mobility through a voltage gradient, and this can be measured as the time required to traverse a fixed distance.
Thus, IMS detectors yield a drift time or mobility value that is characteristic of certain ions (i.e., chemicals) and provides specificity of response.

The initial step of ion formation is common to all ion mobility spectrometers. In order to achieve this, sample molecules must in some way be transported from a suspected item into the IMS instrument. All mobility spectrometers available commercially are based upon manual or automated collection of the contraband on a cloth wipe, a paper filter, or a glove drawn over the surface of an item to be screened, such as a purse, suitcase, or bag. With some systems, a sample can also be collected without physically contacting the screened item, by drawing material onto a pad in a small vacuuming device. Once the sample collection has been performed, the wipe or filter is transported to the IMS for analysis. In most cases, the wipe is inserted into a sampling port, where it is heated to desorb sample molecules into the instrument's ionization region.

Once sample vapors are introduced into the ion mobility spectrometer, molecules are ionized via gas phase chemical reactions through charge transfers or association reactions with negatively charged ions (the reactant ions). These reactant ions originate from a radioactive beta emitter (usually 10 mCi of (63)Ni). The beta particle is a high-energy electron which initiates a series of gas phase reactions ending with low-energy ions including (in air) O(2)(-), CO(4)(-), and others. Nearly all manufacturers of IMS detectors add a chemical reagent gas into the ionization region, so that the naturally abundant reactant ions are replaced with a single reactant ion, usually Cl(-) or Cl(2)(-). The reactions observed with explosives are shown in equations (1-3), and the mechanisms governing these reactions are still under investigation. Nonetheless, the known reactions between explosive molecules and a chloride ion at ambient pressure and 150 degrees C to 200 degrees C include charge transfer (1), formation of adducts by association reactions (2), and proton abstraction reactions (3). Usually only a few different ions are produced from explosives in an IMS, making ion characterization relatively straightforward in most cases.

Ion characterization in IMS occurs when ions are moved from the reaction region, under the influence of an electric field, into a part of the instrument known as the drift region. The drift region is swept with clean air or an inert gas, and a metal disc or plate is placed at the low potential end of the drift tube. Ions are injected into the drift region using an electronically shuttered gate. Once the ions are in the drift region, a velocity (v(d)) for the ions is established by the strength of the electric field (E) and the structure of the ion, reflected in the mobility constant K, per equation (4). The velocity is equal to the distance of the drift region divided by the time necessary for an ion to move through this region. A large ion will take a longer time than a small ion; thus, the mobility constant will be inversely proportional to ion mass or size. The mobility constant is usually normalized to 0 degrees C and 101.325 kPa (760 Torr) to yield the reduced mobility, K(o), per equation (5). Both the process of forming ions and the procedure of measuring ion speeds occur rapidly and can be totally automated. Moreover, these events of ionization and ion characterization occur in air at ambient pressure, allowing instrumentation to be simple, small, and comparatively low in cost.
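Since the typeset equations are not reproduced in this plain-text file, the following LaTeX block reconstructs equations (1) through (5) in the standard forms used in the IMS literature, with M denoting a neutral explosive molecule; readers should consult the full report for the equations as originally published.

    % Reconstruction of equations (1)-(5) from standard IMS usage.
    \begin{align}
      \mathrm{M} + \mathrm{Cl}^- &\rightarrow \mathrm{M}^- + \mathrm{Cl}
          && \text{charge transfer} \tag{1}\\
      \mathrm{M} + \mathrm{Cl}^- &\rightarrow \mathrm{M{\cdot}Cl}^-
          && \text{adduct (association)} \tag{2}\\
      \mathrm{M} + \mathrm{Cl}^- &\rightarrow (\mathrm{M}-\mathrm{H})^- + \mathrm{HCl}
          && \text{proton abstraction} \tag{3}\\
      v_d &= K E \tag{4}\\
      K_0 &= K \left(\frac{P}{760~\mathrm{Torr}}\right)
               \left(\frac{273.15~\mathrm{K}}{T}\right) \tag{5}
    \end{align}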
A.2 The Mobility Spectrum

All the information available from an IMS analyzer is provided in the mobility spectrum, a plot of ion current at the detector versus ion drift time. The sample mobility spectrum shown in figure 12 contains both qualitative and quantitative information and summarizes all the ion molecule events occurring in the instrument. The analyte in this example is a thiol, and in this case positive ions are detected, rather than negative ions as in the case of explosives. The reagent ions or reactant ions are evident in the early portion of a mobility spectrum since, as is typical, they are the smallest ions present, and they appear continuously unless the ion source is saturated with high concentrations of sample vapors. The reactant ions shown here for positive polarity include NH(4)(+)(H(2)O)(n), NO(+)(H(2)O)(n), and H(+)(H(2)O)(n), where the values of n are governed by temperature and moisture and are normally small, in the range of 2 to 5. When the source is unsaturated, as in figure 12, residual reactant ions are present along with the product ions, which are formed by reactions between the sample and some of the reactant ions.

The product ions are formed in the ion source region, where sample vapors and reagent ions mix under what may be anticipated to be steady state conditions on the millisecond time scale. The ions are then extracted into a region of flowing clean gas, where clusters and unstable ions decompose if their decomposition rates are fast relative to the drift time. When product ions are stable on the time scale of ion drift, they can successfully traverse the drift region (typically 4 cm to 8 cm at 250 V/cm to 300 V/cm) and strike the detector plate, drawing current and forming the mobility spectrum. Thus the mobility spectrum is a composite result of the ionization chemistry and the ion filtering of the drift region, and no direct link can be drawn between the ionization reactions and the spectrum; the link is indirect, through ion stability.

Figure 12. Mobility spectrum for a thiol in positive polarity at 200 degrees C[6].

The mobility spectrum provides quantitative information in the form of peak areas and peak heights. The peak area response for product ions is proportional to the amount or concentration of sample vapors in the ion source, and this response is both linear and reasonably precise (5 percent to 8 percent relative standard deviation) from the limit of detection to the point of saturation. This linear range usually spans only a few orders of magnitude and is unlikely to be improved so long as the detection limit is governed by principles that are not fully understood. In practice, common detection limits are in the picogram (direct mass) or parts per billion (ppb, as sample in airflow) range. Qualitative information is found in the peak drift times, though contemporary IMS offers few tools for modeling or interpreting drift times in the absence of authentic samples characterized under identical conditions.
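As an illustration of how a measured drift time is reduced to the K(o) value used for identification, the following minimal sketch in Python applies equations (4) and (5). The function name and all numerical values are illustrative assumptions with typical IMS magnitudes, not parameters of the Itemiser or of any other commercial analyzer.

# A minimal sketch of the drift-time-to-reduced-mobility conversion of
# equations (4) and (5). All values below are assumed, typical magnitudes
# for IMS, not specifications of any commercial instrument.

def reduced_mobility(drift_time_s, drift_length_m, field_v_per_m,
                     temp_k, pressure_kpa):
    velocity = drift_length_m / drift_time_s     # drift velocity v(d), m/s
    mobility = velocity / field_v_per_m          # eq. (4): K = v(d)/E
    # eq. (5): normalize to 0 degrees C and 101.325 kPa
    k0 = mobility * (273.15 / temp_k) * (pressure_kpa / 101.325)
    return k0 * 1.0e4                            # m2/(V s) -> cm2/(V s)

# Example: an ion crossing a 6 cm drift region at 250 V/cm and 200 degrees C
# in 9 ms has a reduced mobility of about 1.5 cm2/(V s).
print(reduced_mobility(9.0e-3, 0.06, 250.0 * 100.0, 473.15, 101.325))

A.3 Detection of Explosives and IMS Analyzer Response to Explosives

Explosives are chemicals that can decompose or react on the microsecond time scale, with the production of a large volume of gas from a small amount of solid.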
The formation of the gas is so rapid that the expansion can cause major physical damage to nearby objects, possibly resulting in injury or death to persons in the vicinity. Major potential targets for explosives include public buildings and mass transportation systems such as airplanes and trains, and explosives detection can be part of an integrated approach to security in such settings. Apart from the threat of terrorism, parallel interests in explosives detection exist in forensics and in legal uses of explosives, including military applications such as storage and disposal. In short, explosives detection spans a broad range of interests, and several technical approaches or solutions have been proposed or demonstrated. The attractive features of IMS, namely high speed, comparatively low cost, convenience, reliability, and sound performance, have favored its application in many explosives detection scenarios.

The most difficult challenge of explosives detection is that of sample collection and transport to the detector. This is complicated by the very low vapor pressures of common organic-based explosives such as TNT, RDX, and PETN (especially the last two). In addition, explosive vapors are highly adsorptive and will adhere to nearly all surfaces. Consequently, the engineering of detectors is as important as the actual detection technology.

The same sample properties that make explosives difficult to screen with modern chemical instruments also complicate the preparation and storage of authentic solutions of explosives, which are used for standardizing and calibrating detectors. For example, explosives at all concentrations in all solutions will adsorb onto the inner surfaces of their containers. This effect becomes a special problem for dilute aqueous solutions, where the loss of a few hundred nanograms of explosive may constitute loss of 50 percent or more of the sample; for instance, a 10 mL aliquot of a 50 ng/mL standard contains only 500 ng of explosive in total. The solution will be unchanged in appearance, but the amount of explosive in it will be dramatically lower (the missing explosive will be found adsorbed to the inner walls of the container). The problem is aggravated by lower temperatures and by longer times since the solution was made, and it is to some degree unavoidable. Consequently, fresh, highly dilute standard solutions of explosives must be prepared daily for calibrating analyzers. Failures in managing the quality of standard solutions of explosives can be the most important source of error in trace explosives detection. The importance of sample handling and control of standards is noted in the Test Protocol.

Endnotes

1. Department of Chemistry and Biochemistry, New Mexico State University, Las Cruces, NM 88003.

2. Sandia National Laboratories, P.O. Box 5800, Albuquerque, NM 87185.

3. The instrument used in this study, the ITI Itemiser, was chosen for evaluation only because it is representative of commercially available trace explosives detection systems and because one was readily available to us. Use of this system does not constitute an endorsement, and this report should not be interpreted as a recommendation to purchase or not to purchase this instrument. No bias toward or against Ion Track Instruments vis-a-vis other manufacturers is intended.

4. Certain commercial equipment, instruments, or materials are identified in this paper to adequately specify the experimental procedure.
Such identification does not imply recommendation or endorsement by the National Institute of Standards and Technology (NIST), nor does it imply that the equipment, instruments, or materials are necessarily the best available for the purpose.

5. D.W. Hannum, "Performance Evaluations of Commercial Explosives Vapor/Particulate Detectors," Sandia National Laboratories, 1996. For information, contact the author at dwhannu@sandia.gov.

6. [Refers to Figure 12] The reactant ion peaks at channel numbers 40 to 100 correspond to drift times of 3 ms to 6 ms, and the product ion peaks at channel numbers 101 to 180 correspond to drift times of 6 ms to 10 ms. The y-axis is in volts or pA (the original current signal before 10(10) V/A to 10(11) V/A amplification in the current-to-voltage amplifier).

-------------------------------

About the National Institute of Justice

The National Institute of Justice (NIJ), a component of the Office of Justice Programs, is the research agency of the U.S. Department of Justice. Created by the Omnibus Crime Control and Safe Streets Act of 1968, as amended, NIJ is authorized to support research, evaluation, and demonstration programs, development of technology, and both national and international information dissemination. Specific mandates of the Act direct NIJ to:

--Sponsor special projects, and research and development programs, that will improve and strengthen the criminal justice system and reduce or prevent crime.

--Conduct national demonstration projects that employ innovative or promising approaches for improving criminal justice.

--Develop new technologies to fight crime and improve criminal justice.

--Evaluate the effectiveness of criminal justice programs and identify programs that promise to be successful if continued or repeated.

--Recommend actions that can be taken by Federal, State, and local governments as well as by private organizations to improve criminal justice.

--Carry out research on criminal behavior.

--Develop new methods of crime prevention and reduction of crime and delinquency.

In recent years, NIJ has greatly expanded its initiatives, the result of the Violent Crime Control and Law Enforcement Act of 1994 (the Crime Act), partnerships with other Federal agencies and private foundations, advances in technology, and a new international focus. Some examples of these new initiatives:

--New research and evaluation are exploring key issues in community policing, violence against women, sentencing reforms, and specialized courts such as drug courts.

--Dual-use technologies are being developed to support national defense and local law enforcement needs.

--The causes, treatment, and prevention of violence against women and violence within the family are being investigated in cooperation with several agencies of the U.S. Department of Health and Human Services.

--NIJ's links with the international community are being strengthened through membership in the United Nations network of criminological institutes; participation in developing the U.N. Criminal Justice Information Network; initiation of UNOJUST (U.N. Online Justice Clearinghouse), which electronically links the institutes to the U.N. network; and establishment of an NIJ International Center.

--The NIJ-administered criminal justice information clearinghouse, the world's largest, has improved its online capability.

--The Institute's Drug Use Forecasting (DUF) program has been expanded and enhanced.
Renamed ADAM (Arrestee Drug Abuse Monitoring), the program will increase the number of drug-testing sites, and its role as a "platform" for studying drug-related crime will grow.

--NIJ's new Crime Mapping Research Center will provide training in computer mapping technology, collect and archive geocoded crime data, and develop analytic software.

--The Institute's program of intramural research has been expanded and enhanced.

The Institute Director, who is appointed by the President and confirmed by the Senate, establishes the Institute's objectives, guided by the priorities of the Office of Justice Programs, the Department of Justice, and the needs of the criminal justice field. The Institute actively solicits the views of criminal justice professionals and researchers in the continuing search for answers that inform public policymaking in crime and justice.

-------------------------------

NCJ 178261