A Real-Time Bayesian Decision-Support System for Predicting Suspect Vehicle’s Intended Target Using a Sparse Camera Network

Authors: Payam Mousavi, Andrew L. Stewart, Huiwen You, Aryeh F. G. Fayerman

Abstract:

We present a decision-support tool to assist an operator in detecting and tracking a suspect vehicle traveling to an unknown target destination. Multiple data sources, such as traffic cameras, traffic information, and weather, are integrated and processed in real time to infer a suspect’s intended destination, chosen from a list of pre-determined high-value targets. Previously, we presented our work on the detection and tracking of vehicles using traffic and airborne cameras. Here, we focus on fusing and processing that information to predict a suspect’s behavior. The network of cameras is represented by a directed graph, where the edges correspond to direct road connections between the nodes and the edge weights are proportional to the average time it takes to travel from one node to another. For our experiments, we construct our graph from the greater Los Angeles subset of Caltrans’ “Performance Measurement System” (PeMS) dataset. We propose a Bayesian approach in which a posterior probability for each target is continuously updated based on detections of the suspect in the live video feeds. Additionally, we introduce the concept of ‘soft interventions’, inspired by the field of causal inference. Soft interventions are herein defined as interventions that do not immediately interfere with the suspect’s movements; rather, a soft intervention may prompt the suspect to make a new decision, ultimately making their intent more transparent. For example, a soft intervention could be temporarily closing a road a few blocks from the suspect’s current location, which may force the suspect to change course. The objective of these interventions is to gain the maximum amount of information about the suspect’s intent in the shortest possible time. Our system currently operates in a human-on-the-loop mode where, at each step, a set of recommendations is presented to the operator to aid in decision-making.
In principle, the system could operate autonomously, prompting the operator only for critical decisions, which would allow it to scale to larger areas and multiple suspects. Once the intended target is identified with sufficient confidence, the vehicle is reported to the authorities for further action. Other recommendations include a selection of road closures (i.e., soft interventions) or continued monitoring. We evaluate the performance of the proposed system using simulated scenarios in which the suspect, starting at a random location, takes a noisy shortest path to their intended target. In all scenarios, the suspect’s intended target is unknown to our system. The decision thresholds are selected to maximize the chances of determining the suspect’s intended target in the minimum amount of time and with the smallest number of interventions. We conclude by discussing the limitations of our current approach in order to motivate a reinforcement-learning-based approach that could relax some of the current limiting assumptions.
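The camera-network model described above, a directed graph whose edge weights are average travel times, supports standard shortest-path reasoning. A minimal sketch using Dijkstra’s algorithm follows; the camera names and travel times are invented for illustration and are not taken from the PeMS data.

```python
import heapq

def dijkstra(graph, source):
    """Shortest travel times from `source` over a directed, weighted graph.
    `graph` maps each node to a list of (neighbor, travel_time) edges."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return dist

# Hypothetical camera graph: edges are direct road links,
# weights are mean travel times in minutes.
graph = {
    "cam1": [("cam2", 4.0), ("cam3", 9.0)],
    "cam2": [("cam3", 3.0), ("cam4", 7.0)],
    "cam3": [("cam4", 2.0)],
    "cam4": [],
}
times = dijkstra(graph, "cam1")
# times["cam4"] == 9.0, via cam1 -> cam2 -> cam3 -> cam4
```

A noisy shortest path of the kind used in the simulated scenarios could be obtained by perturbing these edge weights before each run.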
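The per-detection Bayesian update over candidate targets can be sketched as a standard multiply-and-renormalize step. The target names and likelihood values below are hypothetical, and the likelihood model itself (how strongly a detection supports each target hypothesis, e.g., based on travel-time progress toward it) is an assumption, since the abstract does not specify it.

```python
def update_posterior(posterior, likelihoods):
    """One Bayesian update: multiply the prior by per-target likelihoods
    of the latest detection, then renormalize to a probability distribution."""
    unnorm = {t: posterior[t] * likelihoods[t] for t in posterior}
    z = sum(unnorm.values())
    return {t: p / z for t, p in unnorm.items()}

# Hypothetical: three high-value targets with a uniform prior.
posterior = {"A": 1 / 3, "B": 1 / 3, "C": 1 / 3}

# Likelihood of the latest camera detection under each target hypothesis,
# e.g., higher when the observed move reduces travel time to that target.
likelihoods = {"A": 0.7, "B": 0.2, "C": 0.1}

posterior = update_posterior(posterior, likelihoods)
# Posterior mass shifts toward target A after this detection.
```

Repeating this step on each new detection yields the continuously updated posterior described in the abstract; a decision threshold on the maximum posterior probability would trigger the report to the authorities.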
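The criterion of gaining maximum information in minimum time can be sketched as choosing the soft intervention with the largest expected reduction in posterior entropy. The outcome probabilities and resulting posteriors below are hypothetical, since the abstract does not specify the suspect-response model.

```python
import math

def entropy(dist):
    """Shannon entropy (bits) of a discrete posterior over targets."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def expected_info_gain(prior, outcomes):
    """Expected entropy reduction from one intervention.
    `outcomes` maps each possible suspect response to
    (probability of that response, resulting posterior)."""
    expected_h = sum(p * entropy(post) for p, post in outcomes.values())
    return entropy(prior) - expected_h

# Hypothetical road closure with two targets equally likely a priori:
# if the suspect reroutes, target A becomes near-certain, and vice versa.
prior = {"A": 0.5, "B": 0.5}
outcomes = {
    "reroutes":  (0.5, {"A": 0.9, "B": 0.1}),
    "continues": (0.5, {"A": 0.1, "B": 0.9}),
}
gain = expected_info_gain(prior, outcomes)
# gain is about 0.53 bits out of the 1 bit of initial uncertainty
```

Ranking all candidate road closures by this quantity would yield the set of recommendations presented to the operator at each step.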

Keywords: Autonomous surveillance, Bayesian reasoning, decision-support, interventions, patterns-of-life, predictive analytics, predictive insights.

