Conference Agenda

Overview and details of the sessions of this conference.

 
Session Overview
Session
FS1: Forensic Statistics
Time:
Wednesday, 04/Sep/2019:
8:30am - 10:35am

Session Chair: Christopher Paul SAUNDERS
Location: HS 402 (Blue Lecture Hall)

Presentations
8:30am - 8:55am

Case Study Validations of Automatic Bullet Matching

Heike HOFMANN, Susan VANDERPLAS

CSAFE, Ames IA, United States of America

Recent advances in microscopy have made it possible to collect 3D topographic data, enabling virtual comparisons based on the collected 3D data alongside traditional comparison microscopy. Automatic matching algorithms have been introduced for various scenarios, such as matching cartridge cases (Tai and Eddy 2018) or matching bullet striae (Hare et al. 2017, Chu et al. 2013, De Kinder and Bonfanti 1999). One key aspect of validating automatic matching algorithms is to evaluate the performance of the algorithm on external tests. Here, we discuss the performance of the matching algorithm of Hare et al. (2017) in three studies. We consider matching performance based on the random forest score, cross correlation, and consecutive matching striae (CMS) at the land-to-land level and, using Sequential Average Maxima scores, also at the bullet-to-bullet level. At the bullet-to-bullet level, cross correlation and random forest scores both result in perfect discrimination of same-source and different-source bullets. At the land-to-land level, discrimination (based on the area under the curve, AUC) is excellent (≥ 0.90).
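A minimal sketch of the bullet-level aggregation described above, assuming one plausible reading of the Sequential Average Maxima idea: for each cyclic alignment of the two bullets' lands, average the aligned land-to-land scores, and report the maximum average. The 6-land geometry and the function name are illustrative, not the authors' implementation:

```python
import numpy as np

def bullet_score_sam(land_scores: np.ndarray) -> float:
    """Aggregate an n x n matrix of land-to-land scores to one
    bullet-to-bullet score: lands keep their firing order, so we
    average the aligned land scores for each cyclic shift and take
    the maximum average over all shifts (a SAM-style aggregation)."""
    n = land_scores.shape[0]
    alignment_means = [
        np.mean([land_scores[i, (i + shift) % n] for i in range(n)])
        for shift in range(n)
    ]
    return max(alignment_means)

# Toy example: a same-source pair shows one strong diagonal once the
# lands are rotated into alignment (here, at shift 2).
rng = np.random.default_rng(0)
scores = rng.uniform(0.0, 0.3, size=(6, 6))         # different-source noise
for i in range(6):
    scores[i, (i + 2) % 6] = rng.uniform(0.8, 1.0)  # matching lands
print(f"bullet-level SAM score: {bullet_score_sam(scores):.3f}")
```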



8:55am - 9:20am

Bayesian Characterizations Of U-processes Used In Pattern Recognition With Application To Forensic Source Identification

Cami Marie FUGLSBY1, Christopher Paul SAUNDERS1, Danica M. OMMEN2, JoAnn BUSCAGLIA3

1South Dakota State University, United States of America; 2Iowa State University, United States of America; 3Federal Bureau of Investigation, Laboratory Division

In forensic science, a typical interpretation task is a common-but-unknown-source identification, where an examiner must summarize and present the evidential value associated with two sets of objects relative to two propositions. The first proposition is that the two sets of objects are two simple random samples from the same, unknown source in a given population of sources; the second proposition is that the two sets of objects are two simple random samples each drawn from two different but unknown sources in a given population of sources. Typically, the examiner has to develop criteria or a rule to compare the two sets of objects; this rule leads to a natural U-process of degree two for assessing the evidence. In this work, we will characterize the U-process and demonstrate how to write a class of approximately admissible decision rules in terms of the U-process. Combining the asymptotic representation of this U-process with an approximate Bayesian computation (ABC) algorithm, we can then provide summary statistics with Bayes factor-like properties for the selection between the two propositions. We will illustrate this method with an application based on recovered aluminum powders associated with improvised explosive devices (IEDs).
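The comparison rule described above induces a two-sample U-statistic of degree two: a comparison kernel averaged over all cross pairs from the two sets of objects. A minimal sketch, with a toy kernel standing in for the examiner's score function:

```python
import numpy as np

def two_sample_u_statistic(x, y, h):
    """Two-sample U-statistic of degree two: the comparison kernel h
    averaged over all cross pairs (x_i, y_j) from the two sets."""
    return float(np.mean([h(xi, yj) for xi in x for yj in y]))

# Toy kernel standing in for the examiner's comparison rule:
# absolute difference of two one-dimensional measurements.
h = lambda a, b: abs(a - b)

rng = np.random.default_rng(1)
u_same = two_sample_u_statistic(rng.normal(0, 1, 20), rng.normal(0, 1, 15), h)
u_diff = two_sample_u_statistic(rng.normal(0, 1, 20), rng.normal(3, 1, 15), h)
print(f"U (same source): {u_same:.2f}   U (different sources): {u_diff:.2f}")
```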

For complex evidence forms, we usually have to learn the metric for comparing two samples. Typically, there is no natural feature space in which modern statistical techniques for model selection can be applied to the non-nested models. In this presentation, we develop a score function that maps the trace samples from their measured feature space to the real number line. The resulting score for two trace samples can be used as a measure of the atypicality of matching samples; we apply it in a receiver operating characteristic (ROC) curve analysis and in a score-based likelihood ratio function.
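One common construction of such a score-based likelihood ratio is a density ratio of the score under the two propositions. A minimal sketch using kernel density estimates; the score distributions below are synthetic stand-ins, not data from the study:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Synthetic scores standing in for the learned score function evaluated
# on known same-source and known different-source pairs of trace samples.
rng = np.random.default_rng(2)
same_scores = rng.normal(2.0, 1.0, 500)   # pairs from the same source
diff_scores = rng.normal(0.0, 1.0, 500)   # pairs from different sources

f_same = gaussian_kde(same_scores)  # score density under "same source"
f_diff = gaussian_kde(diff_scores)  # score density under "different sources"

def score_based_lr(s):
    """Score-based likelihood ratio: ratio of the score densities at s."""
    return (f_same(s) / f_diff(s)).item()

print(f"SLR at s = 2.5:  {score_based_lr(2.5):.2f}")   # > 1 favors same source
print(f"SLR at s = -1.0: {score_based_lr(-1.0):.2f}")  # < 1 favors different sources
```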



9:20am - 9:45am

Which Forensic Likelihood Ratio Approach Is Better? An Information-Theoretic Comparison

Danica OMMEN1, Peter VERGEER2

1Iowa State University, United States of America; 2Netherlands Forensic Institute, The Netherlands

There are several methods for constructing likelihood ratios (LR) for forensic evidence interpretation. Feature-based LR approaches directly model the measured features of the evidential objects, while score-based LR approaches model the similarity (or sometimes the dissimilarity) between two objects instead. The score-based approaches often rely on machine learning methods to produce the similarity scores. In addition to how the evidence is treated, the LR approaches also differ in the type of propositions (or hypotheses) they address. In this presentation, we will only consider source-level propositions that address the origin of a particular set of evidence, regardless of the actions or motivations involved. In particular, we consider the common-source and the specific-source propositions. It has been shown that the different propositions and treatments of the evidence lead to differing values of the computed LR. So, which method is preferred for the interpretation of forensic evidence? We will use methods from information theory to compare the various LR approaches.
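The abstract does not name the specific information-theoretic criterion; one standard choice in forensic LR validation is the empirical cross-entropy (ECE), sketched below on synthetic validation LRs (lower ECE indicates less information loss at the chosen prior):

```python
import numpy as np

def empirical_cross_entropy(lr_same, lr_diff, prior_same=0.5):
    """Empirical cross-entropy (ECE) of a validation set of LRs.

    lr_same: LR values for pairs known to share a source (H1 true).
    lr_diff: LR values for pairs known to have different sources (H2 true).
    Comparing ECE across LR methods is one information-theoretic way to
    rank them: lower ECE means the method loses less information."""
    p1, p2 = prior_same, 1.0 - prior_same
    odds = p1 / p2  # prior odds in favor of the same-source proposition
    loss_same = np.mean(np.log2(1.0 + 1.0 / (np.asarray(lr_same) * odds)))
    loss_diff = np.mean(np.log2(1.0 + np.asarray(lr_diff) * odds))
    return p1 * loss_same + p2 * loss_diff

# Two hypothetical LR methods on synthetic validation LRs: method A
# separates the two propositions more strongly than method B.
rng = np.random.default_rng(3)
ece_a = empirical_cross_entropy(rng.lognormal(2, 1, 200),
                                rng.lognormal(-2, 1, 200))
ece_b = empirical_cross_entropy(rng.lognormal(1, 1, 200),
                                rng.lognormal(-1, 1, 200))
print(f"ECE method A: {ece_a:.3f}  ECE method B: {ece_b:.3f}  (lower is better)")
```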



9:45am - 10:10am

ROC Curves And Frequentist/Machine-Learning Based Likelihood Ratios For Source Identification

Larry TANG1, Danica OMMEN2, Elham TABASSI3, Xiaochen ZHU1

1George Mason University, United States of America; 2Iowa State University, United States of America; 3NIST

The likelihood ratio based on similarity scores has recently attracted attention among forensic scientists, especially for scores produced by automated facial recognition systems. The National Institute of Standards and Technology (NIST) publishes comprehensive reports on the performance of commercial matching algorithms. As the algorithms for matching facial images are largely proprietary, it is easier to obtain the similarity scores than the original configurations used in the algorithms. The purpose of this talk is to introduce the score-based likelihood ratio based on receiver operating characteristic (ROC) curve analysis. The ROC curve is widely used to evaluate detection performance in radiology, psychophysical and medical imaging research, military monitoring, and industrial quality control. We will introduce methods for estimating the likelihood ratio from an ROC curve fitted with machine learning techniques for source identification, and derive confidence intervals for the likelihood ratio.
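A minimal sketch of the link the talk builds on: the likelihood ratio at a score threshold equals the local slope of the ROC curve at the corresponding operating point. The binormal score model and its parameters below are invented for illustration:

```python
import numpy as np
from scipy.stats import norm

# Binormal sketch: similarity scores modeled as Gaussian under both
# the mated (same-source) and non-mated (different-source) populations.
mu1, sd1 = 2.0, 1.0   # mated scores
mu0, sd0 = 0.0, 1.0   # non-mated scores

def roc_point(t):
    """Operating point (FPR, TPR) at score threshold t."""
    return norm.sf(t, mu0, sd0), norm.sf(t, mu1, sd1)

def lr_from_roc_slope(t, eps=1e-4):
    """LR at threshold t as the local slope dTPR/dFPR of the ROC curve.

    This equals the density ratio f1(t)/f0(t); the finite-difference
    slope below makes that identity visible numerically."""
    fpr_hi, tpr_hi = roc_point(t - eps)
    fpr_lo, tpr_lo = roc_point(t + eps)
    return (tpr_hi - tpr_lo) / (fpr_hi - fpr_lo)

t = 1.5
print(f"slope-based LR: {lr_from_roc_slope(t):.3f}")
print(f"density ratio : {norm.pdf(t, mu1, sd1) / norm.pdf(t, mu0, sd0):.3f}")
```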



10:10am - 10:35am

Discussion Of Presentations In The Forensic Science Session

Sonja MENGES1, Alicia CARRIQUIRY2

1Bundeskriminalamt, Germany; 2Iowa State University, United States of America

Sonja Menges and Alicia Carriquiry are serving as discussants at the forensic science session.



 