Gupta, A.[Abhinav],
Kembhavi, A.[Aniruddha],
Davis, L.S.[Larry S.],
Observing Human-Object Interactions:
Using Spatial and Functional Compatibility for Recognition,
PAMI(31), No. 10, October 2009, pp. 1775-1789.
IEEE DOI
0909
Understands events, recognizes motions, and recognizes objects;
applies constraints to reduce computation.
BibRef
Richtsfeld, A.[Andreas],
Mörwald, T.[Thomas],
Prankl, J.[Johann],
Zillich, M.[Michael],
Vincze, M.[Markus],
Learning of perceptual grouping for object segmentation on RGB-D data,
JVCIR(25), No. 1, 2014, pp. 64-73.
Elsevier DOI
1502
BibRef
Fäulhammer, T.,
Zillich, M.[Michael],
Prankl, J.[Johann],
Vincze, M.[Markus],
A multi-modal RGB-D object recognizer,
ICPR16(733-738)
IEEE DOI
1705
Cameras, Computational modeling, Feature extraction, Pipelines,
Shape, Training
BibRef
Damen, D.[Dima],
Leelasawassuk, T.[Teesid],
Mayol-Cuevas, W.W.[Walterio W.],
You-Do, I-Learn: Egocentric unsupervised discovery of objects and
their modes of interaction towards video-based guidance,
CVIU(149), No. 1, 2016, pp. 98-112.
Elsevier DOI
1606
Video guidance
BibRef
Lagunes-Fortiz, M.[Miguel],
Damen, D.[Dima],
Mayol-Cuevas, W.W.[Walterio W.],
Instance-level Object Recognition Using Deep Temporal Coherence,
ISVC18(274-285).
Springer DOI
1811
BibRef
Damen, D.[Dima],
Haines, O.[Osian],
Leelasawassuk, T.[Teesid],
Calway, A.D.[Andrew D.],
Mayol-Cuevas, W.W.[Walterio W.],
Multi-User Egocentric Online System for Unsupervised Assistance on
Object Usage,
ACVR14(481-492).
Springer DOI
1504
BibRef
And: A1, A3, A2, A4, A5:
You-Do, I-Learn: Discovering Task Relevant Objects and their Modes of
Interaction from Multi-User Egocentric Video,
BMVC14(xx-yy).
HTML Version.
1410
BibRef
Malmir, M.[Mohsen],
Sikka, K.[Karan],
Forster, D.[Deborah],
Fasel, I.[Ian],
Movellan, J.R.[Javier R.],
Cottrell, G.W.[Garrison W.],
Deep active object recognition by joint label and action prediction,
CVIU(156), No. 1, 2017, pp. 128-137.
Elsevier DOI
1702
BibRef
Earlier: A1, A2, A3, A5, A6, Only:
Deep Q-learning for Active Recognition of GERMS:
Baseline performance on a standardized dataset for active learning,
BMVC15(xx-yy).
DOI Link
1601
Active object recognition in the context of human-robot interaction.
BibRef
Chen, C.P.[Cheng-Peng],
Min, W.Q.[Wei-Qing],
Li, X.[Xue],
Jiang, S.Q.[Shu-Qiang],
Hybrid incremental learning of new data and new classes for hand-held
object recognition,
JVCIR(58), 2019, pp. 138-148.
Elsevier DOI
1901
Incremental learning, Object recognition, SVM, Human-machine interaction
BibRef
Cong, Y.[Yang],
Chen, R.H.[Rong-Han],
Ma, B.T.[Bing-Tao],
Liu, H.S.[Hong-Sen],
Hou, D.D.[Dong-Dong],
Yang, C.G.[Chen-Guang],
A Comprehensive Study of 3-D Vision-Based Robot Manipulation,
Cyber(53), No. 3, March 2023, pp. 1682-1698.
IEEE DOI
2302
Robots, Service robots, Grasping, Data acquisition, Pose estimation, Force,
Cameras, 3-D object recognition, grasping estimation, robot manipulation
BibRef
Yoshida, T.[Tomoya],
Kurita, S.[Shuhei],
Nishimura, T.[Taichi],
Mori, S.[Shinsuke],
Generating 6DoF Object Manipulation Trajectories from Action
Description in Egocentric Vision,
CVPR25(17370-17382)
IEEE DOI Code:
WWW Link.
2508
Training, Visualization, Adaptation models, Codes,
Object segmentation, Closed captioning, Trajectory, Videos
BibRef
Pan, M.J.[Ming-Jie],
Zhang, J.[Jiyao],
Wu, T.S.[Tian-Shu],
Zhao, Y.H.[Ying-Hao],
Gao, W.L.[Wen-Long],
Dong, H.[Hao],
OmniManip: Towards General Robotic Manipulation via Object-Centric
Interaction Primitives as Spatial Constraints,
CVPR25(17359-17369)
IEEE DOI
2508
Tracking loops, Solid modeling, Semantics, Data collection,
Rendering (computer graphics), Real-time systems, Planning, Robots,
zero-shot manipulation
BibRef
Jiang, S.J.[Shi-Jian],
Ye, Q.[Qi],
Xie, R.[Rengan],
Huo, Y.[Yuchi],
Chen, J.M.[Ji-Ming],
Hand-held Object Reconstruction from RGB Video with Dynamic
Interaction,
CVPR25(12220-12230)
IEEE DOI Code:
WWW Link.
2508
Hands, Geometry, Solid modeling, Shape, Semantics, Pose estimation,
Image reconstruction, Optimization, Videos, multi-view reconstruction
BibRef
Liu, Y.[Yun],
Zhang, C.W.[Cheng-Wen],
Xing, R.F.[Ruo-Fan],
Tang, B.D.[Bing-Da],
Yang, B.[Bowen],
Yi, L.[Li],
CORE4D: A 4D Human-Object-Human Interaction Dataset for Collaborative
Object REarrangement,
CVPR25(1769-1782)
IEEE DOI
2508
Geometry, Shape, Collaboration, Human-robot interaction, Focusing,
Iterative methods, Forecasting, dataset, motion capture,
human-object interaction generation
BibRef
Nam, H.[Hyeongjin],
Jung, D.S.[Daniel Sungho],
Moon, G.[Gyeongsik],
Lee, K.M.[Kyoung Mu],
Joint Reconstruction of 3D Human and Object via Contact-Based
Refinement Transformer,
CVPR24(10218-10227)
IEEE DOI Code:
WWW Link.
2410
Accuracy, Correlation, Pipelines, Estimation, Reconstruction algorithms,
Human-object interaction, 3D human and object reconstruction
BibRef
Gu, J.Y.[Jian-Yang],
Wang, K.[Kai],
Luo, H.[Hao],
Chen, C.[Chen],
Jiang, W.[Wei],
Fang, Y.Q.[Yu-Qiang],
Zhang, S.H.[Shang-Hang],
You, Y.[Yang],
Zhao, J.[Jian],
MSINet: Twins Contrastive Search of Multi-Scale Interaction for
Object ReID,
CVPR23(19243-19253)
IEEE DOI
2309
BibRef
Xie, X.H.[Xiang-Hui],
Bhatnagar, B.L.[Bharat Lal],
Pons-Moll, G.[Gerard],
Visibility Aware Human-Object Interaction Tracking from Single RGB
Camera,
CVPR23(4757-4768)
IEEE DOI
2309
BibRef
Earlier:
CHORE: Contact, Human and Object Reconstruction from a Single RGB Image,
ECCV22(II:125-145).
Springer DOI
2211
BibRef
Jian, J.T.[Jun-Tao],
Liu, X.P.[Xiu-Ping],
Li, M.[Manyi],
Hu, R.Z.[Rui-Zhen],
Liu, J.[Jian],
AffordPose: A Large-scale Dataset of Hand-Object Interactions with
Affordance-driven Hand Pose,
ICCV23(14667-14678)
IEEE DOI Code:
WWW Link.
2401
BibRef
Zhang, C.Y.G.[Chen-Yang-Guang],
Jiao, G.L.[Guan-Long],
Di, Y.[Yan],
Wang, G.[Gu],
Huang, Z.Q.[Zi-Qin],
Zhang, R.[Ruida],
Manhardt, F.[Fabian],
Fu, B.[Bowen],
Tombari, F.[Federico],
Ji, X.Y.[Xiang-Yang],
MOHO: Learning Single-View Hand-Held Object Reconstruction with
Multi-View Occlusion-Aware Supervision,
CVPR24(9992-10002)
IEEE DOI
2410
Solid modeling, Shape, Training data, Human-robot interaction,
Resists
BibRef
Pang, Y.L.[Yik Lung],
Oh, C.[Changjae],
Cavallaro, A.[Andrea],
Sparse multi-view hand-object reconstruction for unseen environments,
L3D24(803-810)
IEEE DOI
2410
Shape, Computational modeling, Predictive models, Data collection,
multi-view, 3D reconstruction, hand-object reconstruction
BibRef
Xu, W.Q.[Wen-Qiang],
Yu, Z.J.[Zhen-Jun],
Xue, H.[Han],
Ye, R.L.[Ruo-Lin],
Yao, S.[Siqiong],
Lu, C.W.[Ce-Wu],
Visual-Tactile Sensing for In-Hand Object Reconstruction,
CVPR23(8803-8812)
IEEE DOI
2309
BibRef
Tse, T.H.E.[Tze Ho Elden],
Kim, K.I.[Kwang In],
Leonardis, A.[Aleš],
Chang, H.J.[Hyung Jin],
Collaborative Learning for Hand and Object Reconstruction with
Attention-guided Graph Convolution,
CVPR22(1654-1664)
IEEE DOI
2210
Training, Representation learning, Solid modeling, Shape,
Convolution, Pose estimation, 3D from single images
BibRef
Dabral, R.[Rishabh],
Shimada, S.[Soshi],
Jain, A.[Arjun],
Theobalt, C.[Christian],
Golyanik, V.[Vladislav],
Gravity-Aware Monocular 3D Human-Object Reconstruction,
ICCV21(12345-12354)
IEEE DOI
2203
Meters, Measurement, Estimation, Kinematics, Bones, Linear programming,
3D from a single image and shape-from-x,
Motion and tracking
BibRef
Ragusa, F.[Francesco],
Furnari, A.[Antonino],
Livatino, S.[Salvatore],
Farinella, G.M.[Giovanni Maria],
The MECCANO Dataset: Understanding Human-Object Interactions from
Egocentric Videos in an Industrial-like Domain,
WACV21(1568-1577)
IEEE DOI
WWW Link.
2106
Dataset, Interactions, Taxonomy, Motorcycles,
Object detection, Tools, Object recognition
BibRef
Basit, A.,
Munir, M.A.,
Ali, M.,
Werghi, N.,
Mahmood, A.,
Localizing Firearm Carriers By Identifying Human-Object Pairs,
ICIP20(2031-2035)
IEEE DOI
2011
Adaptation models, Proposals, Task analysis, Detectors,
Pose estimation, Classification algorithms, Object detection,
Gun violence
BibRef
Shen, L.Y.[Li-Yue],
Yeung, S.[Serena],
Hoffman, J.[Judy],
Mori, G.[Greg],
Fei-Fei, L.[Li],
Scaling Human-Object Interaction Recognition Through Zero-Shot
Learning,
WACV18(1568-1576)
IEEE DOI
1806
learning (artificial intelligence), object recognition,
HICODET dataset, HOI recognition, fully-supervised HOI detection,
Visualization
BibRef
Moltisanti, D.[Davide],
Wray, M.[Michael],
Mayol-Cuevas, W.W.[Walterio W.],
Damen, D.[Dima],
Trespassing the Boundaries: Labeling Temporal Bounds for Object
Interactions in Egocentric Video,
ICCV17(2905-2913)
IEEE DOI
1802
convolution, image annotation, neural nets, object detection,
object recognition, video signal processing, Rubicon Boundaries,
BibRef
Kluth, T.[Tobias],
Nakath, D.[David],
Reineking, T.[Thomas],
Zetzsche, C.[Christoph],
Schill, K.[Kerstin],
Affordance-Based Object Recognition Using Interactions Obtained from a
Utility Maximization Principle,
Affordance14(406-412).
Springer DOI
1504
BibRef
Zhang, B.[Bang],
Ye, G.[Getian],
Wang, Y.[Yang],
Wang, W.[Wei],
Xu, J.[Jie],
Herman, G.[Gunawan],
Yang, Y.[Yun],
Multi-Class Graph Boosting with Subgraph Sharing for Object Recognition,
ICPR10(1541-1544).
IEEE DOI
1008
BibRef
Kojima, A.[Atsuhiro],
Miki, H.[Hiroshi],
Kise, K.[Koichi],
Object Recognition Based on n-gram Expression of Human Actions,
ICPR10(372-375).
IEEE DOI
1008
BibRef
Chapter on Motion -- Human Motion, Surveillance, Tracking, Activities continues in
Actions, Grasping, Robot Grasping, Shape for Grasp .