AugurSense: Next Generation Multi-Camera Human Movement Analytics — Part 1

A real-time method to analyze time-synchronized video feeds from multiple fixed cameras in a monitored environment and generate human movement analytics with respect to the ground plane.

A map view generated by AugurSense by processing two time-synchronized videos from the PETS2009 tracking dataset. Paths taken by individuals are shown on the map. Each circle represents a person's current location, and the small line drawn within the circle represents the head direction.

Why Human Analytics on Video Feeds?

Possible applications of the target research

Analytics and Features

A heat map generated by an experiment run on the PETS2009 dataset.
  • Coordinate video feeds from a system of cameras to detect, identify, and track the positions, postures, and orientations of people observed by the camera network.
    - Human detection and noise filtering.
    - Human pose estimation including head orientation estimation and sitting/standing estimation.
    - Human position mapping to a top-down view.
    - Human movement tracking with smoothing of results.
  • Aggregate analytical data from multiple cameras into a unified global view (a top-down view on the floor plan) and provide real-time and time-bound statistical maps.
    - Instantaneous view maps with:
      - Human position markers.
      - Movement trail indication.
      - Head direction annotation.
      - Standing/sitting posture annotation.
      - Full body image annotation.
  • Zone-based analytics, including:
    - Average number of people in a zone, classified as standing/sitting.
    - Average time spent by a person in a zone.
    - Breakdown of outbound traffic to other zones.
    - Breakdown of inbound traffic from other zones.
  • Statistical heatmaps (a minimal density-heatmap sketch follows this list):
    - Human density heatmaps.
    - Stop point / velocity-bound heatmaps.
    - Traffic flow direction maps.
  • Uniquely identify humans to generate accurate analytics (short-term re-identification).
    - Identification of probable previous detections of a person.
    - Long-distance routes per person in the monitored environment.
  • Generate quantitative statistics based on the analytics.
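As a rough illustration of how a statistical heatmap can be derived from the aggregated ground-plane positions, here is a minimal Python sketch. The `positions` list, the floor dimensions, and the 0.5 m grid cell size are assumptions made for the example, not values used by AugurSense.

```python
# Minimal sketch: build a human-density heatmap from accumulated
# ground-plane positions. `positions`, the floor size and the cell
# size are illustrative assumptions.
import numpy as np

def density_heatmap(positions, floor_w_m, floor_h_m, cell_m=0.5):
    """Bin (x, y) ground-plane positions (metres) into a 2D occupancy grid."""
    xs = [p[0] for p in positions]
    ys = [p[1] for p in positions]
    bins = [int(floor_w_m / cell_m), int(floor_h_m / cell_m)]
    heat, _, _ = np.histogram2d(xs, ys, bins=bins,
                                range=[[0, floor_w_m], [0, floor_h_m]])
    # Normalise to [0, 1] so the dashboard can colour-map it consistently.
    return heat / heat.max() if heat.max() > 0 else heat

# Positions (in floor-plan metres) reported by the camera processing units.
positions = [(1.2, 3.4), (1.3, 3.5), (7.8, 2.1), (7.9, 2.0)]
print(density_heatmap(positions, floor_w_m=10.0, floor_h_m=6.0).shape)  # (20, 12)
```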

For years, companies have known how their web visitors behave thanks to Google Analytics. Our solution provides a platform that delivers similar analytics on physical visitors to shopping complexes and company offices.

Publications

Architecture

Component organization in AugurSense

Analytics Engine (CRAMP Accumulator)

Technologies used:

  • Java
  • Spring Framework
  • Tensorflow
  • Tensorflow Java
  • REST
  • MySQL

Since we are tracking people across cameras, we need a human re-identification technique. The following two implementations can be used out of the box (a minimal matching sketch follows this list):

  • Open Re-ID — an implementation of our paper “Open Set Person Re-Identification Framework on Closed Set Re-Id Systems”
  • TriNet Re-ID
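As a rough sketch of how re-identification ties detections together across cameras, the snippet below (shown in Python for brevity) matches a new detection's appearance embedding against previously seen people by cosine similarity. It assumes the chosen re-id backend produces a fixed-length embedding per detection; the gallery layout, the 128-dimensional size, and the 0.7 threshold are illustrative, not AugurSense's actual values.

```python
# Minimal sketch: short-term re-identification by comparing appearance
# embeddings. Assumes the re-id backend (e.g. Open Re-ID or TriNet)
# yields a fixed-length feature vector per detection; gallery layout,
# embedding size and threshold are illustrative assumptions.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_identity(query_embedding, gallery, threshold=0.7):
    """Return the id of the most similar previously seen person, or None."""
    best_id, best_score = None, threshold
    for person_id, embedding in gallery.items():
        score = cosine_similarity(query_embedding, embedding)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id

# gallery maps person ids to the embedding of their latest detection.
gallery = {1: np.random.rand(128), 2: np.random.rand(128)}
print(match_identity(np.random.rand(128), gallery))  # an existing id, or None
```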

Camera Processing Unit (CRAMP Sense)

Snapshots of a person tracked on the global map (generated after collecting person locations from multiple processed feeds). The person's track is shown in yellow on the map.

Technologies used:

  • Python 3
  • OpenCV
  • OpenPose, the Tensorflow Object Detection API, YOLO V3, etc. for human detection (one of these methods must be chosen before running; the default is OpenPose). Using a pose estimation library is recommended to get analytics on posture (standing/sitting), head direction, and accurate location tracking. Custom implementations can be written using other human detection/pose estimation frameworks (a minimal detector sketch follows this list).
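To illustrate what a custom detector needs to provide, here is a minimal Python sketch that uses OpenCV's built-in HOG people detector in place of OpenPose/YOLO and derives a ground anchor point from each detection. The frame path and the foot-point heuristic (bottom-centre of the bounding box) are assumptions made for the example.

```python
# Minimal sketch: a drop-in detector using OpenCV's built-in HOG people
# detector instead of OpenPose/YOLO. The frame path and the foot-point
# heuristic are assumptions.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_people(frame):
    """Return bounding boxes plus the image point assumed to touch the floor."""
    boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    detections = []
    for (x, y, w, h) in boxes:
        foot_point = (x + w // 2, y + h)  # fed to the camera-to-floor mapping
        detections.append({"box": (x, y, w, h), "foot_point": foot_point})
    return detections

frame = cv2.imread("frame.jpg")  # one frame from a camera feed (example path)
if frame is not None:
    for det in detect_people(frame):
        print(det["foot_point"])
```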

Dashboard

Technologies used:

The UI for configuring the mapping between camera views and the floor plan.
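The mapping the dashboard configures can be thought of as a planar homography between each camera view and the floor plan. Below is a minimal sketch, assuming four point correspondences marked in the UI (the coordinates here are made-up values), that estimates the homography with OpenCV and projects a camera pixel onto the floor plan.

```python
# Minimal sketch: estimate the camera-to-floor-plan homography from point
# correspondences marked in the dashboard UI. All coordinates below are
# made-up example values.
import cv2
import numpy as np

# Points clicked in the camera image (pixels) ...
camera_pts = np.array([[100, 400], [520, 390], [610, 120], [60, 130]],
                      dtype=np.float32)
# ... and the matching points on the floor plan (plan coordinates, metres).
plan_pts = np.array([[0, 0], [8, 0], [8, 5], [0, 5]], dtype=np.float32)

H, _ = cv2.findHomography(camera_pts, plan_pts)

def to_floor_plan(point_px):
    """Project one camera pixel (e.g. a detected foot point) onto the plan."""
    src = np.array([[point_px]], dtype=np.float32)  # shape (1, 1, 2)
    return tuple(cv2.perspectiveTransform(src, H)[0, 0])

print(to_floor_plan((320, 300)))  # floor-plan coordinates of that pixel
```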

Thank you! To be continued…
