One-Scan Rule Extraction to Explain Significant Vehicle Interactions with Guaranteed Error Value
Originally published by the Association for Computing Machinery (ACM) in Applied Computing Reviews.
Full article available below:
http://dl.acm.org/citation.cfm?id=2340419
Abstract
Counting frequent itemsets allows us to compute the importance of items over a stream of data. Translating this concept to video streams requires representing a video stream as a sequence of activities. In this paper, we present a model to find approximate co-occurring associations between activities in video stream data, based on an unsupervised clustering of activities. We show that a hierarchical topic model of two stochastic processes is needed to jointly learn both an unknown number of activities in the video and the visual features that positively correlate with each activity. Unlike most previous work, we decouple the analysis of associations between multiple moving objects from the discovery of activities. While the discovery of activities is an off-line process in which event distributions are grouped, the discovery of rules is an on-line process that approximates the importance of each rule with a guaranteed error value. Our method reduces space complexity by adapting the algorithm to the available memory before any incremental update of itemset frequency values is performed. The most visible aspect of this effort is the incremental generation of rules that reveal the interactions of frequent activities in current scenes. Our experimental results show that our approach efficiently and automatically discovers sets of activities in surveillance video streams containing complex traffic scenes governed by multiple traffic lights, while evaluating their frequent occurrence and co-occurring relationships.
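
To give a sense of the kind of on-line, error-bounded counting the abstract describes, the sketch below approximates the frequency of co-occurring activity pairs over a stream of scenes using a lossy-counting-style bucket scheme with error at most epsilon * N. The class name, the epsilon parameter, and the restriction to pairs are illustrative assumptions for this sketch, not the paper's actual algorithm.

# A minimal sketch, assuming a lossy-counting-style scheme (not the paper's
# exact method), of approximate frequency counting with a guaranteed error
# bound over a stream of activity itemsets.

from itertools import combinations
from math import ceil


class ApproxItemsetCounter:
    """Counts co-occurring activity pairs with error at most epsilon * N."""

    def __init__(self, epsilon=0.01):
        self.epsilon = epsilon                # guaranteed error bound
        self.bucket_width = ceil(1 / epsilon)
        self.n = 0                            # scenes processed so far
        self.counts = {}                      # pair -> (count, max_error)

    def add_scene(self, activities):
        """Process one scene: the set of activity labels observed together."""
        self.n += 1
        bucket = ceil(self.n / self.bucket_width)
        for pair in combinations(sorted(activities), 2):
            count, err = self.counts.get(pair, (0, bucket - 1))
            self.counts[pair] = (count + 1, err)
        # At each bucket boundary, prune entries whose upper bound falls
        # below the current bucket id; this keeps memory usage bounded.
        if self.n % self.bucket_width == 0:
            self.counts = {k: (c, e) for k, (c, e) in self.counts.items()
                           if c + e > bucket}

    def frequent(self, support):
        """Return pairs whose true frequency may exceed `support` (0..1)."""
        threshold = (support - self.epsilon) * self.n
        return {k: c for k, (c, e) in self.counts.items() if c >= threshold}


# Hypothetical usage: each scene is the set of activities seen in one clip.
counter = ApproxItemsetCounter(epsilon=0.005)
counter.add_scene({"car_turning_left", "pedestrian_crossing"})
counter.add_scene({"car_turning_left", "car_waiting"})
print(counter.frequent(support=0.2))

In this style of scheme, a smaller epsilon tightens the error bound at the cost of keeping more candidate itemsets in memory, which is one way to adapt the counting process to the memory available, as the abstract suggests.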