MIT Pedestrian Database MITP,
Online2000
HTML Version.
Dataset, Surveillance.
BibRef
0001
UCF Action Recognition Dataset 101,
Online2012
WWW Link.
1211
BibRef
Earlier:
UCF Action Recognition Dataset 50,
Online2010
WWW Link.
Dataset, Surveillance.
1010
101 action categories, consisting of realistic videos taken from YouTube.
UCF 101 is an extension of UCF 50.
Categories include:
Baseball Pitch, Basketball Shooting, Bench Press, Biking,
Billiards Shot, Breaststroke, Clean and Jerk, Diving, Drumming,
Fencing, Golf Swing, Playing Guitar, High Jump, Horse Race, Horse
Riding, Hula Hoop, Javelin Throw, Juggling Balls, Jump Rope, Jumping
Jack, Kayaking, Lunges, Military Parade, Mixing Batter, Nun chucks,
Playing Piano, Pizza Tossing, Pole Vault, Pommel Horse, Pull Ups,
Punch, Push Ups, Rock Climbing Indoor, Rope Climbing, Rowing, Salsa
Spins, Skate Boarding, Skiing, Skijet, Soccer Juggling, Swing,
Playing Tabla, TaiChi, Tennis Swing, Trampoline Jumping, Playing
Violin, Volleyball Spiking, Walking with a dog, and Yo Yo.
The printed reference:
See also UCF101: A Dataset of 101 Human Action Classes from Videos in The Wild.
BibRef
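For quick experimentation with the clips, a minimal loading sketch is given below. It uses torchvision's UCF101 dataset class and assumes the videos and the official train/test split files have already been downloaded from the UCF101 page; the local paths are placeholders.
```python
# Minimal sketch of reading UCF-101 clips with torchvision's UCF101 class.
# The paths below are placeholders; the videos and the official
# train/test split files must be downloaded separately.
from torchvision.datasets import UCF101

ucf_root = "/data/UCF-101"                  # extracted .avi files, one folder per class
annotation_path = "/data/ucfTrainTestlist"  # official split files

dataset = UCF101(
    root=ucf_root,
    annotation_path=annotation_path,
    frames_per_clip=16,      # frames per returned clip
    step_between_clips=16,   # non-overlapping clips
    fold=1,                  # official split 1 of 3
    train=True,
)

video, audio, label = dataset[0]   # video: (T, H, W, C) uint8 tensor
print(video.shape, label)
```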
UCF-iPhone,
Online2012
WWW Link.
Dataset, Surveillance.
1203
Aerobic actions using the Inertial Measurement Unit (IMU) on an Apple iPhone.
Biking, Climbing Stairs, Descending Stairs, Gym Biking, Jump Roping,
Running, Standing, Treadmill Walking and Walking.
The printed reference:
See also Macro-Class Selection for Hierarchical K-NN Classification of Inertial Sensor Data.
BibRef
Hollywood2 Human Actions and Scenes Dataset,
Online2016
WWW Link.
Dataset, Surveillance.
1608
Part originally from:
See also Actions in context.
BibRef
HMDB: a large human motion database,
Online2016
WWW Link.
Dataset, Surveillance.
Award, ICCV, Helmholtz.
1608
51 actions.
See also HMDB: A large video database for human motion recognition.
BibRef
TRECVID Workshop Data,
Online2017
HTML Version.
Dataset, Surveillance.
1806
Surveillance datasets from 2001 to 2017.
BibRef
Privacy-Preserving Visual Recognition PA-HMDB51,
Online2019.
WWW Link.
Dataset, Actions.
Dataset, Privacy.
The dataset contains 592 videos selected from the HMDB51 dataset
(
See also HMDB: A large video database for human motion recognition. ).
For each video, we provide frame-level annotation of five
privacy attributes: skin color, gender, face, nudity, and
relationship.
See also Towards Privacy-Preserving Visual Recognition via Adversarial Training: A Pilot Study.
BibRef
1900
HVU Dataset,
Online2021
WWW Link.
Dataset, Action. For the Holistic Video Understanding workshop.
BibRef
2100
EPIC-KITCHENS,
Online2018
WWW Link.
Dataset, Action.
Dataset, Daily Activities.
First-person (egocentric) vision; multi-faceted, non-scripted
recordings in native environments, i.e., the wearers' homes, capturing
all daily activities in the kitchen over multiple days.
See also EPIC-KITCHENS Dataset: Collection, Challenges and Baselines, The.
BibRef
1800
Egocentric Live 4D Perception (Ego4D) Dataset:
A large-scale first-person video dataset, supporting research
in multi-modal machine perception for daily life activity,
Online2021
WWW Link.
Dataset, Action.
Dataset, Egocentric. The Ego4D Consortium.
BibRef
2100
Kay, W.[Will],
Carreira, J.[Joao],
Simonyan, K.[Karen],
Zhang, B.[Brian],
Hillier, C.[Chloe],
Vijayanarasimhan, S.[Sudheendra],
Viola, F.[Fabio],
Green, T.[Tim],
Back, T.[Trevor],
Natsev, P.[Paul],
Suleyman, M.[Mustafa],
Zisserman, A.[Andrew],
The Kinetics Human Action Video Dataset,
Online2019.
WWW Link.
WWW Link.
Dataset, Actions.
Dataset, Human Action.
BibRef
1900
Tenorth, M.[Moritz],
Bandouch, J.[Jan],
Beetz, M.[Michael],
The TUM Kitchen Data Set of everyday manipulation activities for motion
tracking and action recognition,
THEMIS09(1089-1096).
IEEE DOI
0910
Dataset, Activity Recognition.
BibRef
Johansson, G.,
Visual Motion Perception,
SciAmer(232), No. 6, June 1975, pp. 76-88.
BibRef
7606
And:
Visual Perception of Biological Motion and a Model for
Its Analysis,
PandP(14), No. 2, 1973, pp. 201-211.
The classic psychological references cited by human motion papers.
BibRef
Badler, N.I., and
Smoliar, S.W.,
Digital Representations of Human Movement,
Surveys(11), No. 1, March 1979, pp. 19-38.
Survey, Motion, Human.
BibRef
7903
Calvert, T.W.,
Chapman, A.E.,
Analysis and Synthesis of Human Movement,
HPRIP-CV94(431-474).
BibRef
9400
Aggarwal, J.K.,
Cai, Q.,
Human Motion Analysis: A Review,
CVIU(73), No. 3, March 1999, pp. 428-440.
DOI Link
BibRef
9903
Gavrila, D.M.[Dariu M.],
The Visual Analysis of Human Movement: A Survey,
CVIU(73), No. 1, January 1999, pp. 82-98.
DOI Link
PDF File.
Survey, Human Motion.
BibRef
9901
Ivancevic, V.G.,
Snoswell, M.,
Fuzzy-stochastic functor machine for general humanoid-robot dynamics,
SMC-B(31), No. 3, June 2001, pp. 319-330.
IEEE Top Reference.
0108
BibRef
Wang, L.[Liang],
Hu, W.M.[Wei-Ming],
Tan, T.N.[Tie-Niu],
Recent developments in human motion analysis,
PR(36), No. 3, March 2003, pp. 585-601.
Elsevier DOI
0301
BibRef
Kojima, A.[Atsuhiro],
Tamura, T.[Takeshi],
Fukunaga, K.[Kunio],
Natural Language Description of Human Activities from Video Images
Based on Concept Hierarchy of Actions,
IJCV(50), No. 2, November 2002, pp. 171-184.
DOI Link
0210
BibRef
And:
Textual description of human activities by tracking head and hand
motions,
ICPR02(II: 1073-1077).
IEEE DOI
0211
BibRef
Kojima, A.,
Izumi, M.,
Tamura, T.,
Fukunaga, K.,
Generating Natural Language Description of Human Behavior from Video
Images,
ICPR00(Vol IV: 728-731).
IEEE DOI
0009
BibRef
Syeda-Mahmood, T.F.[Tanveer F.],
Haritaoglu, I.[Ismail],
Huang, T.S.[Thomas S.],
CVIU special issue on event detection in video,
CVIU(96), No. 2, November 2004, pp. 97-99.
Elsevier DOI
0410
BibRef
Francois, A.R.J.[Alexandre R.J.],
Nevatia, R.[Ram],
Hobbs, J.[Jerry],
Bolles, R.C.[Robert C.],
VERL:
An Ontology Framework for Representing and Annotating Video Events,
MultMedMag(12), No. 4, October-December 2005, pp. 76-86.
IEEE DOI
First-order-logic-like syntax for describing composite events, plus a
set of predicates describing temporal constraints.
An event is typically triggered by a change in state.
VERL is a companion to VEML. Events are modeled as composable; complex
events are reduced to simpler ones using sequencing, iteration, and
alternation.
BibRef
0510
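To make the composable-event idea concrete, here is a small illustrative sketch; the class names, the state-change trigger, and the detect() interface are hypothetical stand-ins, not actual VERL or VEML syntax.
```python
# Toy illustration of composing events in the spirit of VERL: primitive events
# are triggered by a change in state, and composite events are built from
# simpler ones by sequencing and alternation. All names here are hypothetical.
from dataclasses import dataclass
from typing import List


@dataclass
class Interval:
    start: int   # frame index where the (sub)event begins
    end: int     # frame index where it ends


class Primitive:
    """Primitive event: fires wherever an observed state changes."""
    def __init__(self, states: List[str]):
        self.states = states  # per-frame labels, e.g. "standing"/"sitting"

    def detect(self) -> List[Interval]:
        return [Interval(t, t + 1)
                for t in range(len(self.states) - 1)
                if self.states[t] != self.states[t + 1]]


class Sequence:
    """Composite event: A followed by B (temporal ordering constraint)."""
    def __init__(self, a, b):
        self.a, self.b = a, b

    def detect(self) -> List[Interval]:
        return [Interval(ia.start, ib.end)
                for ia in self.a.detect()
                for ib in self.b.detect()
                if ia.end <= ib.start]


class Alternation:
    """Composite event: either A or B occurred."""
    def __init__(self, a, b):
        self.a, self.b = a, b

    def detect(self) -> List[Interval]:
        return sorted(self.a.detect() + self.b.detect(),
                      key=lambda iv: iv.start)


# Example: "sit down, then stand up" over a toy per-frame state sequence.
posture = ["standing"] * 3 + ["sitting"] * 4 + ["standing"] * 3
sit_then_stand = Sequence(Primitive(posture), Primitive(posture))
print(sit_then_stand.detect())   # one interval spanning both state changes
```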
Nevatia, R.[Ram],
Hobbs, J.[Jerry],
Bolles, R.C.[Robert C.],
An Ontology for Video Event Representation,
EventVideo04(119).
IEEE DOI
0502
BibRef
Davies, E.R.,
Velastin, S.A.[Sergio A.],
Special Issue on Vision for Crime Detection and Prevention,
PRL(27), No. 15, November 2006, pp. 1755-1757.
Elsevier DOI
0609
BibRef
Guerra-Filho, G.,
Aloimonos, Y.,
A Language for Human Action,
Computer(40), No. 5, May 2007, pp. 42-51.
IEEE DOI
0705
BibRef
Chellappa, R.[Rama],
Roy-Chowdhury, A.K.[Amit K.],
Zhou, S.H.K.[Shao-Hua Kevin],
Recognition of Humans and Their Activities Using Video,
Morgan Claypool2005.
Synthesis Lectures on Image, Video, and Multimedia Processing
Survey, Activity Recognition.
DOI Link
BibRef
0500
Chang, S.F.,
Luo, J.,
Maybank, S.J.,
Schonfeld, D.,
Xu, D.,
An Introduction to the Special Issue on Event Analysis in Videos,
CirSysVideo(18), No. 11, November 2008, pp. 1469-1472.
IEEE DOI
0811
BibRef
Turaga, P.K.,
Chellappa, R.,
Subrahmanian, V.S.,
Udrea, O.,
Machine Recognition of Human Activities: A Survey,
CirSysVideo(18), No. 11, November 2008, pp. 1473-1488.
IEEE DOI
0811
Survey, Activity Recognition.
BibRef
Hamid, R.[Raffay],
Maddi, S.[Siddhartha],
Johnson, A.[Amos],
Bobick, A.F.[Aaron F.],
Essa, I.A.[Irfan A.],
Isbell, C.[Charles],
A novel sequence representation for unsupervised analysis of human
activities,
AI(173), No. 14, September 2009, pp. 1221-1244.
Elsevier DOI
0910
Temporal reasoning; Scene analysis; Computer vision
BibRef
Hamid, R.[Raffay],
Johnson, A.[Amos],
Batta, S.[Samir],
Bobick, A.F.[Aaron F.],
Isbell, C.[Charles],
Coleman, G.[Graham],
Detection and Explanation of Anomalous Activities:
Representing Activities as Bags of Event n-Grams,
CVPR05(I: 1031-1038).
IEEE DOI
0507
BibRef
Aslam, S.[Salman],
Barnes, C.[Christopher],
Bobick, A.F.[Aaron F.],
Target Tracking Using Residual Vector Quantization,
DICTA12(1-8).
IEEE DOI
1303
BibRef
Earlier:
Robust Surveillance on Compressed Video:
Uniform Performance from High to Low Bitrates,
AVSBS09(256-261).
IEEE DOI
0909
BibRef
Chen, L.M.[Li-Ming],
Nugent, C.D.[Chris D.],
Biswas, J.[Jit],
Hoey, J.[Jesse],
Activity Recognition in Pervasive Intelligent Environment,
World ScientificSeptember 2010.
ISBN: 978-90-78677-35-2
Buy this book: Activity Recognition in Pervasive Intelligent Environment (Atlantis Ambient and Pervasive Intelligence)
1011
BibRef
Zhang, J.,
Shao, L.,
Zhang, L.,
Jones, G.A., (Eds.)
Intelligent Video Event Analysis and Understanding,
Springer2011, ISBN: 978-3-642-17553-4.
WWW Link.
Buy this book: Intelligent Video Event Analysis and Understanding (Studies in Computational Intelligence)
1102
BibRef
Weinland, D.[Daniel],
Ronfard, R.[Remi],
Boyer, E.[Edmond],
A Survey of Vision-Based Methods for Action Representation,
Segmentation and Recognition,
CVIU(115), No. 2, February 2011, pp. 224-241.
Elsevier DOI
1102
Survey, Activity Recognition.
Award, CVIU, Most Cited. (2010-2012)
Action/activity recognition; Survey; Computer vision
BibRef
González, J.[Jordi],
Moeslund, T.B.[Thomas B.],
Wang, L.[Liang],
Semantic Understanding of Human Behaviors in Image Sequences:
From video-surveillance to video-hermeneutics,
CVIU(116), No. 3, March 2012, pp. 305-306.
Elsevier DOI
1201
Introduction
BibRef
Guerra-Filho, G.[Gutemberg],
Biswas, A.[Arnab],
The human motion database:
A cognitive and parametric sampling of human motion,
IVC(30), No. 3, March 2012, pp. 251-261.
Elsevier DOI
1204
BibRef
Earlier:
FG11(103-110).
IEEE DOI
1103
Dataset, Activity Recognition. Human motion database; Quantitative evaluation; Parametric and
cognitive sampling; Motion synthesis and analysis
BibRef
Reddy, K.K.[Kishore K.],
Shah, M.[Mubarak],
Recognizing 50 human action categories of web videos,
MVA(24), No. 5, July 2013, pp. 971-981.
WWW Link.
PDF File.
1306
BibRef
Reddy, K.K.[Kishore K.],
Cuntoor, N.[Naresh],
Perera, A.[Amitha],
Hoogs, A.J.[Anthony J.],
Human Action Recognition in Large-Scale Datasets Using Histogram of
Spatiotemporal Gradients,
AVSS12(106-111).
IEEE DOI
1211
BibRef
Chaquet, J.M.[Jose M.],
Carmona, E.J.[Enrique J.],
Fernandez-Caballero, A.[Antonio],
A survey of video datasets for human action and activity recognition,
CVIU(117), No. 6, June 2013, pp. 633-659.
Elsevier DOI
1304
Survey, Activity Recognition.
Dataset, Activity Recognition. Human action recognition; Human activity recognition; Database;
Dataset; Review; Survey
BibRef
Chen, L.[Lulu],
Wei, H.[Hong],
Ferryman, J.M.[James M.],
A survey of human motion analysis using depth imagery,
PRL(34), No. 15, 2013, pp. 1995-2006.
Elsevier DOI
1309
Range data
BibRef
Geiger, A.,
Lenz, P.,
Stiller, C.,
Urtasun, R.,
Vision meets robotics: The KITTI dataset,
IJRR(32), September 2013, pp. 1231-1237.
WWW Link.
PDF File.
See also KITTI Vision Benchmark Suite, The.
BibRef
1309
Chavarriaga, R.[Ricardo],
Sagha, H.[Hesam],
Calatroni, A.[Alberto],
Digumarti, S.T.[Sundara Tejaswi],
Tröster, G.[Gerhard],
del R. Millán, J.[José],
Roggen, D.[Daniel],
The Opportunity challenge: A benchmark database for on-body
sensor-based activity recognition,
PRL(34), No. 15, 2013, pp. 2033-2042.
Elsevier DOI
1309
Dataset, Activity Recognition. Activity recognition
BibRef
Kanade, T.[Takeo],
Keynote lecture 1: 'Video analysis of human body',
AVSS14(XIV-XIV)
IEEE DOI
1411
Keynote, overview of issues.
BibRef
Wu, J.Z.[Jian-Zhai],
Hu, D.[Dewen],
Chen, F.L.[Fang-Lin],
Action recognition by hidden temporal models,
VC(30), No. 12, December 2014, pp. 1395-1404.
Springer DOI
1411
BibRef
Wang, L.[Liang],
Patras, I.[Ioannis],
Zhang, J.[Jian],
Mori, G.[Greg],
Davis, L.S.[Larry S.],
Special Issue on Individual and Group Activities in Video Event
Analysis,
CVIU(144), No. 1, 2016, pp. 1-2.
Elsevier DOI
1604
BibRef
Yuan, J.S.[Jun-Song],
Li, W.Q.[Wan-Qing],
Zhang, Z.Y.[Zheng-You],
Fleet, D.[David],
Shotton, J.[Jamie],
Guest Editorial: Human Activity Understanding from 2D and 3D Data,
IJCV(118), No. 2, June 2016, pp. 113-114.
Springer DOI
1606
BibRef
Barrett, D.P.[Daniel Paul],
Xu, R.[Ran],
Yu, H.N.[Hao-Nan],
Siskind, J.M.[Jeffrey Mark],
Collecting and annotating the large continuous action dataset,
MVA(27), No. 7, October 2016, pp. 983-995.
Springer DOI
1610
Dataset, Actions. LCA Dataset.
BibRef
Hadfield, S.[Simon],
Lebeda, K.[Karel],
Bowden, R.[Richard],
Hollywood 3D: What are the Best 3D Features for Action Recognition?,
IJCV(121), No. 1, January 2017, pp. 95-110.
Springer DOI
1702
BibRef
Earlier: A1, A3, Only:
Hollywood 3D: Recognizing Actions in 3D Natural Scenes,
CVPR13(3398-3405)
IEEE DOI
1309
Dataset, Action Recognition. Hollywood3D dataset.
3.5d
BibRef
Idrees, H.[Haroon],
Zamir, A.R.[Amir R.],
Jiang, Y.G.[Yu-Gang],
Gorban, A.[Alex],
Laptev, I.[Ivan],
Sukthankar, R.[Rahul],
Shah, M.[Mubarak],
The THUMOS challenge on action recognition for videos 'in the wild',
CVIU(155), No. 1, 2017, pp. 1-23.
Elsevier DOI
1702
Action recognition
BibRef
Monfort, M.[Mathew],
Andonian, A.[Alex],
Zhou, B.L.[Bo-Lei],
Ramakrishnan, K.[Kandan],
Bargal, S.A.[Sarah Adel],
Yan, T.[Tom],
Brown, L.[Lisa],
Fan, Q.F.[Quan-Fu],
Gutfreund, D.[Dan],
Vondrick, C.[Carl],
Oliva, A.[Aude],
Moments in Time Dataset: One Million Videos for Event Understanding,
PAMI(42), No. 2, February 2020, pp. 502-508.
IEEE DOI
2001
WWW Link.
Dataset, Action. Videos, Visualization, Feature extraction, Vocabulary, Animals,
Convolution, Video dataset, event recognition
BibRef
Pal, R.[Ratnabali],
Sekh, A.A.[Arif Ahmed],
Dogra, D.P.[Debi Prosad],
Kar, S.[Samarjit],
Roy, P.P.[Partha Pratim],
Prasad, D.K.[Dilip K.],
Topic-Based Video Analysis: A Survey,
Surveys(54), No. 6, July 2021, pp. xx-yy.
DOI Link
2108
unsupervised learning, topic model, Video analysis
BibRef
Damen, D.[Dima],
Doughty, H.[Hazel],
Farinella, G.M.[Giovanni Maria],
Fidler, S.[Sanja],
Furnari, A.[Antonino],
Kazakos, E.[Evangelos],
Moltisanti, D.[Davide],
Munro, J.[Jonathan],
Perrett, T.[Toby],
Price, W.[Will],
Wray, M.[Michael],
The EPIC-KITCHENS Dataset: Collection, Challenges and Baselines,
PAMI(43), No. 11, November 2021, pp. 4125-4141.
IEEE DOI
2110
Annotations, Cameras, Benchmark testing, Task analysis,
Streaming media, YouTube, Indexes, Egocentric vision,
action recognition and anticipation
See also EPIC-KITCHENS.
BibRef
Fernandes, J.M.[Jose Marcelo],
Silva, J.S.[Jorge Sa],
Rodrigues, A.[Andre],
Boavida, F.[Fernando],
A Survey of Approaches to Unobtrusive Sensing of Humans,
Surveys(55), No. 2, February 2023, pp. xx-yy.
DOI Link
2212
Unobtrusive sensing, signal processing, IoT, data processing, HiTL
BibRef
Yildiz, S.[Serdar],
Kasim, A.N.[Ahmet Nezih],
ENTIRe-ID: An Extensive and Diverse Dataset for Person
Re-Identification,
FG24(1-5)
IEEE DOI Code:
WWW Link.
2408
Training, Ethics, Surveillance, Face recognition,
Computational modeling, Lighting
BibRef
Ong, K.E.[Kian Eng],
Ng, X.L.[Xun Long],
Li, Y.C.[Yan-Chao],
Ai, W.J.[Wen-Jie],
Zhao, K.[Kuangyi],
Yeo, S.Y.[Si Yong],
Liu, J.[Jun],
Chaotic World: A Large and Challenging Benchmark for Human Behavior
Understanding in Chaotic Events,
ICCV23(20156-20166)
IEEE DOI Code:
WWW Link.
2401
BibRef
Video Action Detection: Analysing Limitations and Challenges,
VDU22(4907-4916)
IEEE DOI
2210
No authors listed.
Pattern recognition
BibRef
Xefteris, V.R.[Vasileios-Rafail],
Tsanousa, A.[Athina],
Mavropoulos, T.[Thanassis],
Meditskos, G.[Georgios],
Vrochidis, S.[Stefanos],
Kompatsiaris, I.[Ioannis],
Human Activity Recognition with IMU and Vital Signs Feature Fusion,
MMMod22(I:287-298).
Springer DOI
2203
BibRef
Patino, L.,
Nawaz, T.,
Cane, T.,
Ferryman, J.,
PETS 2017: Dataset and Challenge,
PETS17(2126-2132)
IEEE DOI
1709
Boats, Cameras, Measurement, Mobile communication,
Surveillance, Visualization
BibRef
Patino, L.,
Cane, T.,
Vallee, A.,
Ferryman, J.,
PETS 2016: Dataset and Challenge,
PETS16(1240-1247)
IEEE DOI
1612
BibRef
Patino, L.[Luis],
Ferryman, J.M.[James M.],
PETS 2014: Dataset and challenge,
AVSS14(355-360)
IEEE DOI
1411
Dataset, Surveillance. Cameras
BibRef
Blasch, E.P.,
Wang, Z.H.[Zhong-Hai],
Ling, H.B.[Hai-Bin],
Palaniappan, K.,
Chen, G.[Genshe],
Shen, D.[Dan],
Aved, A.,
Seetharaman, G.,
Video-based activity analysis using the L1 tracker on VIRAT data,
AIPR13(1-8)
IEEE DOI
1408
object detection
BibRef
Hassner, T.[Tal],
A Critical Review of Action Recognition Benchmarks,
ActionSim13(245-250)
IEEE DOI
1309
Survey, Action Recognition.
BibRef
Soomro, K.[Khurram],
Zamir, A.R.[Amir Roshan],
Shah, M.[Mubarak],
UCF101: A Dataset of 101 Human Action Classes from
Videos in The Wild,
CRCV-TR-12-01, November, 2012. UCF.
PDF File.
The dataset:
See also UCF Action Recognition Dataset 101.
BibRef
1211
Nebel, J.C.[Jean-Christophe],
Lewandowski, M.[Michal],
Thévenon, J.[Jérôme],
Martínez, F.[Francisco],
Velastin, S.A.[Sergio A.],
Are Current Monocular Computer Vision Systems for Human Action
Recognition Suitable for Visual Surveillance Applications?,
ISVC11(II: 290-299).
Springer DOI
1109
BibRef
Velastin, S.A.[Sergio A.],
CCTV Video Analytics: Recent Advances and Limitations,
IVIC09(22-34).
Springer DOI
0911
BibRef
Cowie, R.[Roddy],
Building the databases needed to understand rich, spontaneous human
behaviour,
FG08(1-6).
IEEE DOI
0809
BibRef
Raptis, M.[Michalis],
Wnuk, K.[Kamil],
Soatto, S.[Stefano],
Spike train driven dynamical models for human actions,
CVPR10(2077-2084).
IEEE DOI
1006
BibRef
Earlier:
Flexible dictionaries for action classification,
MLMotion08(xx-yy).
0810
BibRef
Liu, C.[Ce],
Freeman, W.T.[William T.],
Adelson, E.H.[Edward H.],
Weiss, Y.[Yair],
Human-assisted motion annotation,
CVPR08(1-8).
IEEE DOI
0806
Dataset, Motion.
WWW Link. Motion annotation then applied to datasets to provide ground truth.
BibRef
Heckenberg, D.[Daniel],
Performance Evaluation of Vision-Based High DOF Human Movement Tracking:
A Survey And Human Computer Interaction Perspective,
V4HCI06(156).
IEEE DOI
0609
BibRef
Hamid, R.[Raffay],
Maddi, S.[Siddhartha],
Bobick, A.F.[Aaron F.],
Essa, I.A.[Irfan A.],
Structure from Statistics:
Unsupervised Activity Analysis using Suffix Trees,
ICCV07(1-8).
IEEE DOI
0710
BibRef
Earlier:
Unsupervised analysis of activity sequences using event-motifs,
VSSN06(71-78).
WWW Link.
0701
BibRef
Barros, L.,
Evers, T.,
Musse, S.,
A Framework to Investigate Behavioural Models,
WSCG02(40).
HTML Version.
0209
BibRef
Gross, R.,
Shi, J.,
The CMU Motion of Body (MoBo) Database,
CMU-RI-TR-01-18, June, 2001.
PDF File.
0205
BibRef
Chapter on Motion -- Human Motion, Surveillance, Tracking, Activities continues in
Event Descriptions, Understanding Motion and Events.