ANUBIS Dataset
Large-Scale Skeleton-Based Action Recognition
A comprehensive multi-view skeleton action dataset for challenging real-world scenarios
Paper • Project Website • Download Dataset • Benchmark Code
Overview
ANUBIS is a large-scale skeleton-based action recognition dataset designed to address critical gaps in existing benchmarks. The dataset features 102 action categories collected from 80 participants across 80 viewpoints, generating 66,232 skeleton clips with comprehensive multi-modal data (RGB, Depth, 3D Skeleton).
Key Innovations
- Multi-View Coverage: 80 viewpoints, including challenging back-view perspectives often missing from existing datasets
- Multi-Person Interactions: Complex social interactions, including collaborative and aggressive behaviors
- Fine-Grained Actions: Detailed hand/object manipulations and contemporary social behaviors
- Security-Critical Actions: Violent and aggressive actions essential for surveillance applications
- Modern Social Behaviors: Pandemic-era gestures and social distancing protocols
Dataset Statistics
| Attribute | Value |
|---|---|
| Action Categories | 102 |
| Participants | 80 |
| Total Clips | 66,232 |
| Viewpoints | 80 |
| Modalities | RGB, Depth, 3D Skeleton |
| Skeleton Joints | 32 per person |
| Frame Length | 300 frames |
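The table above specifies 32 joints per person and a 300-frame clip length. As a minimal sketch of how a clip could be arranged in memory, the snippet below assumes an NTU-style (persons, frames, joints, coordinates) tensor layout with zero-padding for shorter clips; the layout, the `pad_clip` helper, and the two-person cap are illustrative assumptions, not part of the official release format.

```python
import numpy as np

MAX_PERSONS = 2    # assumption: interactive actions involve at most two subjects
MAX_FRAMES = 300   # from the table: clips are stored at 300 frames
NUM_JOINTS = 32    # from the table: 32 skeleton joints per person
NUM_COORDS = 3     # x, y, z coordinates of each joint

def pad_clip(raw_clip: np.ndarray) -> np.ndarray:
    """Zero-pad a variable-length clip to a fixed (persons, frames, joints, coords) shape.

    `raw_clip` is assumed to have shape (persons, frames, 32, 3) with frames <= 300.
    """
    padded = np.zeros((MAX_PERSONS, MAX_FRAMES, NUM_JOINTS, NUM_COORDS), dtype=np.float32)
    persons, frames = raw_clip.shape[:2]
    padded[:persons, :frames] = raw_clip
    return padded

# Example: a hypothetical single-person clip of 180 frames
clip = np.random.randn(1, 180, NUM_JOINTS, NUM_COORDS).astype(np.float32)
print(pad_clip(clip).shape)  # (2, 300, 32, 3)
```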
Action Category Distribution
- Independent Actions: 45 classes (44.1%) - Single-person behaviors
- Aggressive Actions: 40 classes (39.2%) - Security-critical interactions
- Social Interactive Actions: 15 classes (14.7%) - Collaborative behaviors
- Other Actions: 2 classes (2.0%) - Spatial positioning changes
To download this dataset to your local machine, we recommend downloading the anubis.zip file, as shown in the sketch below.
Key Challenges
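One way to fetch the archive programmatically is with `hf_hub_download` from the `huggingface_hub` library; this is only a sketch, and the repository id below is a placeholder that you should replace with this dataset's actual namespace/name on the Hub.

```python
import zipfile
from huggingface_hub import hf_hub_download

# Placeholder repo id -- replace with this dataset's actual namespace/name on the Hub.
archive_path = hf_hub_download(
    repo_id="<namespace>/ANUBIS",
    filename="anubis.zip",
    repo_type="dataset",
)

# Extract the archive into a local directory.
with zipfile.ZipFile(archive_path) as zf:
    zf.extractall("anubis_data")
```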
- Viewpoint Variation: Back-view perspectives with joint occlusions
- Fine-Grained Recognition: Subtle hand gestures and object manipulations
- Multi-Person Dynamics: Complex interpersonal interactions
- Action Similarity: Semantically similar actions with different contexts
- Motion Complexity: Varying temporal scales and movement patterns
Citation
If you use the ANUBIS dataset in your research, please cite:
@misc{liu2025representationcentricsurveyskeletalaction,
title={Representation-Centric Survey of Skeletal Action Recognition and the ANUBIS Benchmark},
author={Yang Liu and Jiyao Yang and Madhawa Perera and Pan Ji and Dongwoo Kim and Min Xu and Tianyang Wang and Saeed Anwar and Tom Gedeon and Lei Wang and Zhenyue Qin},
year={2025},
eprint={2205.02071},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2205.02071},
}
License
This dataset is released under the CC BY-ND 4.0 license. Please ensure compliance with ethical guidelines when using data containing human subjects.
Acknowledgments
We thank all participants who contributed to data collection and the research community for their valuable feedback in developing this benchmark dataset.