Sudhir, G.,
Lee, J.C.M.,
Video Annotation by Motion Interpretation Using Optical-Flow Streams,
JVCIR(7), No. 4, December 1996, pp. 354-368.
9704
BibRef
Carrer, M.,
Ligresti, L.,
Ahanger, G.,
Little, T.D.C.,
An Annotation Engine for Supporting Video Database Population,
MultToolApp(5), No. 3, November 1997, pp. 233-258.
9712
BibRef
Tan, Y.P.,
Saur, D.D.,
Kulkarni, S.R.,
Ramadge, P.J.,
Rapid Estimation of Camera Motion from Compressed Video with
Application to Video Annotation,
CirSysVideo(10), No. 1, February 2000, pp. 133.
IEEE Top Reference.
0003
BibRef
Purnaveja, A.[Audi],
Chaddha, N.[Navin],
Vellanki, S.P.[Srinivas Prasad],
del Val, D.[David],
Gupta, A.[Anoop],
Wang, E.Y.B.[Edward Yan-Bing],
Production of a video stream with synchronized annotations
over a computer network,
US_Patent6,230,172, May 8, 2001
WWW Link.
BibRef
0105
Chang, E.,
Goh, K.[Kingshy],
Sychay, G.,
Wu, G.[Gang],
CBSA: content-based soft annotation for multimodal image retrieval
using Bayes point machines,
CirSysVideo(13), No. 1, January 2003, pp. 26-38.
IEEE Top Reference.
0301
BibRef
Dorado, A.,
Calic, J.[Janko],
Izquierdo, E.[Ebroul],
A rule-based video annotation system,
CirSysVideo(14), No. 5, May 2004, pp. 622-633.
IEEE Abstract.
0407
BibRef
Carneiro, G.[Gustavo],
Chan, A.B.,
Moreno, P.J.,
Vasconcelos, N.M.[Nuno M.],
Supervised Learning of Semantic Classes for Image Annotation and
Retrieval,
PAMI(29), No. 3, March 2007, pp. 394-410.
IEEE DOI
0702
BibRef
Carneiro, G.[Gustavo],
Vasconcelos, N.M.[Nuno M.],
Formulating Semantic Image Annotation as a Supervised Learning Problem,
CVPR05(II: 163-168).
IEEE DOI
0507
BibRef
Anjulan, A.[Arasanathan],
Canagarajah, C.N.[C. Nishan],
Object based video retrieval with local region tracking,
SP:IC(22), No. 7-8, August-September 2007, pp. 607-621.
Elsevier DOI
0710
BibRef
And:
A Novel Video Mining System,
ICIP07(I: 185-188).
IEEE DOI
0709
BibRef
Earlier:
Video Object Mining with Local Region Tracking,
MCAM07(172-183).
Springer DOI
0706
BibRef
Earlier:
Video Scene Retrieval Based on Local Region Features,
ICIP06(3177-3180).
IEEE DOI
0610
BibRef
And:
A Novel Framework for Robust Annotation and Retrieval in Video
Sequences,
CIVR06(183-192).
Springer DOI
0607
Object retrieval; Scene matching; Shot segmentation;
Feature extraction; Feature clustering
BibRef
Ionescu, B.[Bogdan],
Coquin, D.[Didier],
Lambert, P.[Patrick],
Buzuloiu, V.[Vasile],
A Fuzzy Color-Based Approach for Understanding Animated Movies Content
in the Indexing Task,
JIVP(2008), No. 2008, pp. xx-yy.
DOI Link
0804
BibRef
Ionescu, B.[Bogdan],
Seyerlehner, K.[Klaus],
Rasche, C.[Christoph],
Vertan, C.[Constantin],
Lambert, P.[Patrick],
Content-Based Video Description for Automatic Video Genre
Categorization,
MMMod12(51-62).
Springer DOI
1201
BibRef
Anjulan, A.[Arasanathan],
Canagarajah, C.N.[C. Nishan],
A Unified Framework for Object Retrieval and Mining,
CirSysVideo(19), No. 1, January 2009, pp. 63-76.
IEEE DOI
0902
BibRef
Campanella, M.[Marco],
Leonardi, R.[Riccardo],
Migliorati, P.[Pierangelo],
Interactive visualization of video content and associated description
for semantic annotation,
SIViP(3), No. 2, June 2009, pp. xx-yy.
Springer DOI
0903
BibRef
Earlier:
The Future-Viewer Visual Environment for Semantic Characterization of
Video Sequences,
ICIP05(I: 1209-1212).
IEEE DOI
0512
Semi-automatic annotation system.
BibRef
Wang, M.,
Hua, X.S.[Xian-Sheng],
Hong, R.,
Tang, J.,
Qi, G.J.[Guo-Jun],
Song, Y.[Yan],
Unified Video Annotation via Multigraph Learning,
CirSysVideo(19), No. 5, May 2009, pp. 733-746.
IEEE DOI
0906
BibRef
Qi, G.J.[Guo-Jun],
Song, Y.[Yan],
Hua, X.S.[Xian-Sheng],
Zhang, H.J.[Hong-Jiang],
Dai, L.R.[Li-Rong],
Video Annotation by Active Learning and Cluster Tuning,
SLAM06(114).
IEEE DOI
0609
BibRef
You, J.Y.[Jun-Yong],
Liu, G.Z.[Gui-Zhong],
Perkis, A.[Andrew],
A semantic framework for video genre classification and event analysis,
SP:IC(25), No. 4, April 2010, pp. 287-302.
Elsevier DOI
1006
Semantic video analysis; Probabilistic model; Video genre
classification; Event analysis
BibRef
Moxley, E.,
Mei, T.,
Manjunath, B.S.,
Video Annotation Through Search and Graph Reinforcement Mining,
MultMed(12), No. 3, March 2010, pp. 184-193.
IEEE DOI
1003
BibRef
Torralba, A.B.,
Russell, B.C.,
Yuen, J.,
LabelMe: Online Image Annotation and Applications,
PIEEE(98), No. 8, August 2010, pp. 1467-1484.
IEEE DOI
1008
Web-based annotation tool to allow users to provide info for computer
vision system evaluations.
BibRef
Yuen, J.[Jenny],
Russell, B.C.[Bryan C.],
Liu, C.[Ce],
Torralba, A.B.[Antonio B.],
LabelMe video: Building a video database with human annotations,
ICCV09(1451-1458).
IEEE DOI
0909
Human annotations.
BibRef
Li, Y.,
Tian, Y.,
Duan, L.Y.,
Yang, Y.,
Huang, T.,
Gao, W.,
Sequence Multi-Labeling: A Unified Video Annotation Scheme With Spatial
and Temporal Context,
MultMed(12), No. 8, 2010, pp. 814-828.
IEEE DOI
1011
BibRef
Lee, S.Y.[Sih-Young],
de Neve, W.[Wesley],
Ro, Y.M.[Yong Man],
Tag refinement in an image folksonomy using visual similarity and tag
co-occurrence statistics,
SP:IC(25), No. 10, November 2010, pp. 761-773.
Elsevier DOI
1101
Co-occurrence; Folksonomy; Recommendation; Refinement; Visual similarity
User-generated tags, for later retrieval; noisy tags.
BibRef
Min, H.S.[Hyun-Seok],
Choi, J.Y.[Jae-Young],
de Neve, W.[Wesley],
Ro, Y.M.[Yong Man],
Plataniotis, K.N.[Konstantinos N.],
Semantic annotation of personal video content using an image folksonomy,
ICIP09(257-260).
IEEE DOI
0911
See also Automatic Face Annotation in Personal Photo Collections Using Context-Based Unsupervised Clustering and Face Information Fusion.
BibRef
Diou, C.[Christos],
Stephanopoulos, G.,
Panagiotopoulos, P.,
Papachristou, C.,
Dimitriou, N.,
Delopoulos, A.,
Large-Scale Concept Detection in Multimedia Data Using Small Training
Sets and Cross-Domain Concept Fusion,
CirSysVideo(20), No. 12, December 2010, pp. 1808-1821.
IEEE DOI
1102
BibRef
Tang, J.H.[Jin-Hui],
Hua, X.S.[Xian-Sheng],
Mei, T.[Tao],
Qi, G.J.[Guo-Jun],
Li, S.P.[Shi-Peng],
Wu, X.Q.[Xiu-Qing],
Temporally Consistent Gaussian Random Field for Video Semantic Analysis,
ICIP07(IV: 525-528).
IEEE DOI
0709
BibRef
Yuan, X.[Xun],
Lai, W.[Wei],
Mei, T.[Tao],
Hua, X.S.[Xian-Sheng],
Wu, X.Q.[Xiu-Qing],
Li, S.P.[Shi-Peng],
Automatic Video Genre Categorization using Hierarchical SVM,
ICIP06(2905-2908).
IEEE DOI
0610
BibRef
Paniagua-Martin, F.[Fernando],
Garcia-Crespo, A.[Angel],
Colomo-Palacios, R.[Ricardo],
Ruiz-Mezcua, B.[Belen],
Semantic Annotation Architecture for Accessible Multimedia Resources,
MultMedMag(18), No. 2, April-June 2011, pp. 16-25.
IEEE DOI
1105
BibRef
Cao, J.,
Ngo, C.W.,
Zhang, Y.D.,
Li, J.T.,
Tracking Web Video Topics: Discovery, Visualization, and Monitoring,
CirSysVideo(21), No. 12, December 2011, pp. 1835-1846.
IEEE DOI
1112
BibRef
Lin, L.[Lin],
Chen, C.[Chao],
Shyu, M.L.[Mei-Ling],
Chen, S.C.[Shu-Ching],
Weighted Subspace Filtering and Ranking Algorithms for Video Concept
Retrieval,
MultMedMag(18), No. 3, 2011, pp. 32-43.
IEEE DOI
1108
BibRef
Shao, J.[Jian],
Ma, S.[Shuai],
Lu, W.M.[Wei-Ming],
Zhuang, Y.T.[Yue-Ting],
A unified framework for web video topic discovery and visualization,
PRL(33), No. 4, March 2012, pp. 410-419.
Elsevier DOI
1201
Web video; Topic discovery; Topic visualization; Star-structured
K-partite Graph; Linked cluster network; Co-clustering
BibRef
Wollmer, M.[Martin],
Weninger, F.[Felix],
Knaup, T.[Tobias],
Schuller, B.[Bjorn],
Sun, C.[Congkai],
Sagae, K.[Kenji],
Morency, L.P.[Louis-Philippe],
YouTube Movie Reviews: Sentiment Analysis in an Audio-Visual Context,
IEEE_Int_Sys(28), No. 3, 2013, pp. 46-53.
IEEE DOI
1309
Context awareness
BibRef
Moran, S.[Sean],
Lavrenko, V.[Victor],
A sparse kernel relevance model for automatic image annotation,
MultInfoRetr(3), No. 4, November 2014, pp. 209-229.
Springer DOI
1411
BibRef
Earlier:
Optimal Tag Sets for Automatic Image Annotation,
BMVC11(xx-yy).
HTML Version.
1110
BibRef
Feng, S.L.,
Manmatha, R.,
Lavrenko, V.,
Multiple Bernoulli relevance models for image and video annotation,
CVPR04(II: 1002-1009).
IEEE DOI
0408
BibRef
Tarvainen, J.,
Sjoberg, M.,
Westman, S.,
Laaksonen, J.,
Oittinen, P.,
Content-Based Prediction of Movie Style, Aesthetics, and Affect:
Data Set and Baseline Experiments,
MultMed(16), No. 8, December 2014, pp. 2085-2098.
IEEE DOI
1502
content-based retrieval
BibRef
Constantin, M.G.[Mihai Gabriel],
Stefan, L.D.[Liviu-Daniel],
Ionescu, B.[Bogdan],
Duong, N.Q.K.[Ngoc Q. K.],
Demarty, C.H.[Claire-Hélène],
Sjöberg, M.[Mats],
Visual Interestingness Prediction: A Benchmark Framework and Literature
Review,
IJCV(129), No. 5, May 2021, pp. 1526-1550.
Springer DOI
2105
BibRef
Lew, M.S.[Michael S.],
Special issue on video retrieval,
MultInfoRetr(4), No. 1, March 2015, pp. 1-2.
WWW Link.
1503
BibRef
Lew, M.S.[Michael S.],
Special issue on visual information retrieval,
MultInfoRetr(5), No. 1, March 2016, pp. 1-2.
Springer DOI
1602
BibRef
Lew, M.S.[Michael S.],
Editorial for the ICMR 2017 special issue,
MultInfoRetr(8), No. 1, March 2018, pp. 1-2.
Springer DOI
1802
BibRef
Lin, T.,
Yang, M.,
Tsai, C.,
Wang, Y.F.,
Query-Adaptive Multiple Instance Learning for Video Instance
Retrieval,
IP(24), No. 4, April 2015, pp. 1330-1340.
IEEE DOI
1503
Detectors
BibRef
Duan, L.J.[Li-Juan],
Xi, T.[Tao],
Cui, S.[Song],
Qi, H.G.[Hong-Gang],
Bovik, A.C.[Alan C.],
A spatiotemporal weighted dissimilarity-based method for video
saliency detection,
SP:IC(38), No. 1, 2015, pp. 45-56.
Elsevier DOI
1512
Saliency detection
BibRef
Tu, Q.[Qin],
Men, A.[Aidong],
Jiang, Z.Q.[Zhu-Qing],
Ye, F.[Feng],
Xu, J.[Jun],
Video saliency detection incorporating temporal information in
compressed domain,
SP:IC(38), No. 1, 2015, pp. 32-44.
Elsevier DOI
1512
Compressed domain
BibRef
Li, C.,
Tu, Q.,
Xu, J.,
Gao, R.,
Wang, Q.,
Chang, Y.,
Ant colony optimization inspired saliency detection using compressed
video information,
VCIP15(1-4)
IEEE DOI
1605
Ant colony optimization
BibRef
Gao, R.,
Tu, Q.,
Xu, J.,
Lu, Y.,
Xie, W.,
Men, A.,
Visual saliency detection based on mutual information in compressed
domain,
VCIP15(1-4)
IEEE DOI
1605
Entropy
BibRef
Qian, X.M.[Xue-Ming],
Liu, X.X.[Xiao-Xiao],
Ma, X.[Xiang],
Lu, D.[Dan],
Xu, C.Y.[Chen-Yang],
What Is Happening in the Video? Annotate Video by Sentence,
CirSysVideo(26), No. 9, September 2016, pp. 1746-1757.
IEEE DOI
1609
Artificial intelligence
BibRef
Liao, H.S.[Hong-Sen],
Chen, L.[Li],
Song, Y.[Yibo],
Ming, H.[Hao],
Visualization-Based Active Learning for Video Annotation,
MultMed(18), No. 11, November 2016, pp. 2196-2205.
IEEE DOI
1609
data visualisation
BibRef
Chou, C.L.,
Chen, H.T.,
Lee, S.Y.,
Multimodal Video-to-Near-Scene Annotation,
MultMed(19), No. 2, February 2017, pp. 354-366.
IEEE DOI
1702
entropy
BibRef
Wang, H.[Han],
Wu, X.X.[Xin-Xiao],
Jia, Y.D.[Yun-De],
Heterogeneous domain adaptation method for video annotation,
IET-CV(11), No. 2, March 2017, pp. 181-187.
DOI Link
1703
BibRef
Li, W.[Wei],
Guo, D.[Dashan],
Fang, X.Z.[Xiang-Zhong],
Multimodal architecture for video captioning with memory networks and
an attention mechanism,
PRL(105), 2018, pp. 23-29.
Elsevier DOI
1804
Video captioning, Memory network, Attention mechanism
BibRef
Protasov, S.[Stanislav],
Khan, A.M.[Adil Mehmood],
Sozykin, K.[Konstantin],
Ahmad, M.[Muhammad],
Using deep features for video scene detection and annotation,
SIViP(12), No. 5, July 2018, pp. 991-999.
Springer DOI
1806
BibRef
Shetty, R.,
Tavakoli, H.R.,
Laaksonen, J.,
Image and Video Captioning with Augmented Neural Architectures,
MultMedMag(25), No. 2, April 2018, pp. 34-46.
IEEE DOI
1808
Feature extraction, Neural networks, Computational modeling,
Multimedia communication, Object recognition, Detectors,
neural networks
BibRef
Yang, Y.,
Zhou, J.,
Ai, J.,
Bin, Y.,
Hanjalic, A.,
Shen, H.T.,
Ji, Y.,
Video Captioning by Adversarial LSTM,
IP(27), No. 11, November 2018, pp. 5600-5611.
IEEE DOI
1809
feature extraction, learning (artificial intelligence),
object detection, recurrent neural nets, video signal processing,
LSTM
BibRef
Gao, L.L.[Lian-Li],
Li, X.P.[Xiang-Peng],
Song, J.K.[Jing-Kuan],
Shen, H.T.[Heng Tao],
Hierarchical LSTMs with Adaptive Attention for Visual Captioning,
PAMI(42), No. 5, May 2020, pp. 1112-1131.
IEEE DOI
2004
Visualization, Feature extraction, Task analysis, Decoding,
Adaptation models, Natural language processing, Video captioning,
hierarchical structure
BibRef
Zhang, X.X.[Xing-Xing],
Zhu, Z.F.[Zhen-Feng],
Zhao, Y.[Yao],
Chang, D.X.[Dong-Xia],
Learning a General Assignment Model for Video Analytics,
CirSysVideo(28), No. 10, October 2018, pp. 3066-3076.
IEEE DOI
1811
Also known as video content analysis.
Hidden Markov models, Analytical models, Mathematical model,
Motion segmentation, Numerical models,
video classification
BibRef
Daskalakis, E.[Eleftherios],
Tzelepi, M.[Maria],
Tefas, A.[Anastasios],
Learning deep spatiotemporal features for video captioning,
PRL(116), 2018, pp. 143-149.
Elsevier DOI
1812
BibRef
Xu, N.,
Liu, A.,
Wong, Y.,
Zhang, Y.,
Nie, W.,
Su, Y.,
Kankanhalli, M.,
Dual-Stream Recurrent Neural Network for Video Captioning,
CirSysVideo(29), No. 8, August 2019, pp. 2482-2493.
IEEE DOI
1908
Semantics, Visualization, Decoding, Streaming media,
Recurrent neural networks, Encoding, Task analysis,
attention module
BibRef
Ren, J.H.[Jun-Hong],
Zhang, W.S.[Wen-Sheng],
CLOSE: Coupled content-semantic embedding,
SIViP(13), No. 6, September 2019, pp. 1087-1095.
Springer DOI
1908
application to video captioning
BibRef
Lee, J.[Jaeyoung],
Kim, J.[Junmo],
Exploring the effects of non-local blocks on video captioning networks,
IJCVR(9), No. 5, 2019, pp. 502-514.
DOI Link
1909
BibRef
Mun, J.[Jonghwan],
Yang, L.J.[Lin-Jie],
Ren, Z.[Zhou],
Xu, N.[Ning],
Han, B.H.[Bo-Hyung],
Streamlined Dense Video Captioning,
CVPR19(6581-6590).
IEEE DOI
2002
BibRef
Wang, H.Y.[Hui-Yun],
Gao, C.Y.[Chong-Yang],
Han, Y.H.[Ya-Hong],
Sequence in sequence for video captioning,
PRL(130), 2020, pp. 327-334.
Elsevier DOI
2002
Video captioning, Encoding, Decoding, Spatio-temporal representation
BibRef
Harwath, D.[David],
Recasens, A.[Adrià],
Surís, D.[Dídac],
Chuang, G.[Galen],
Torralba, A.B.[Antonio B.],
Glass, J.[James],
Jointly Discovering Visual Objects and Spoken Words from Raw Sensory
Input,
IJCV(128), No. 3, March 2020, pp. 620-641.
Springer DOI
2003
BibRef
Earlier:
ECCV18(VI: 659-677).
Springer DOI
1810
Associate spoken captions with relevant portion of the image.
BibRef
Wei, R.[Ran],
Mi, L.[Li],
Hu, Y.S.[Yao-Si],
Chen, Z.Z.[Zhen-Zhong],
Exploiting the local temporal information for video captioning,
JVCIR(67), 2020, pp. 102751.
Elsevier DOI
2004
Local temporal information, Video captioning, Sliding windows,
Reinforcement learning
BibRef
Zhang, J.C.[Jun-Chao],
Peng, Y.X.[Yu-Xin],
Video Captioning With Object-Aware Spatio-Temporal Correlation and
Aggregation,
IP(29), 2020, pp. 6209-6222.
IEEE DOI
2005
BibRef
Earlier:
Object-Aware Aggregation With Bidirectional Temporal Graph for Video
Captioning,
CVPR19(8319-8328).
IEEE DOI
2002
Video captioning, spatio-temporal graph,
bidirectional temporal graph, spatial relation graph,
object-aware feature aggregation
BibRef
Xiao, H.H.[Huan-Hou],
Shi, J.L.[Jing-Lun],
Video captioning with text-based dynamic attention and step-by-step
learning,
PRL(133), 2020, pp. 305-312.
Elsevier DOI
2005
Dynamic attention, Context semantic information, Video captioning
BibRef
Ning, K.[Ke],
Cai, M.[Ming],
Xie, D.[Di],
Wu, F.[Fei],
An Attentive Sequence to Sequence Translator for Localizing Video
Clips by Natural Language,
MultMed(22), No. 9, September 2020, pp. 2434-2443.
IEEE DOI
2008
Natural languages, Visualization, Task analysis, Context modeling,
Semantics, Recurrent neural networks, Distance measurement,
natural language guided detection
BibRef
Wu, A.,
Han, Y.,
Yang, Y.,
Hu, Q.,
Wu, F.,
Convolutional Reconstruction-to-Sequence for Video Captioning,
CirSysVideo(30), No. 11, November 2020, pp. 4299-4308.
IEEE DOI
2011
Decoding, Fuses, Visualization, Convolution, Encoding, History,
Image reconstruction, Video captioning,
hierarchical decoder
BibRef
Tu, Y.B.[Yun-Bin],
Zhou, C.[Chang],
Guo, J.J.[Jun-Jun],
Gao, S.X.[Sheng-Xiang],
Yu, Z.T.[Zheng-Tao],
Enhancing the alignment between target words and corresponding frames
for video captioning,
PR(111), 2021, pp. 107702.
Elsevier DOI
2012
Video captioning, Alignment, Visual tags, Textual-temporal attention
BibRef
Boran, E.[Emre],
Erdem, A.[Aykut],
Ikizler-Cinbis, N.[Nazli],
Erdem, E.[Erkut],
Madhyastha, P.[Pranava],
Specia, L.[Lucia],
Leveraging auxiliary image descriptions for dense video captioning,
PRL(146), 2021, pp. 70-76.
Elsevier DOI
2105
Video captioning, Adversarial training, Attention
BibRef
Wang, T.[Teng],
Zheng, H.[Huicheng],
Yu, M.J.[Ming-Jing],
Tian, Q.[Qian],
Hu, H.F.[Hai-Feng],
Event-Centric Hierarchical Representation for Dense Video Captioning,
CirSysVideo(31), No. 5, 2021, pp. 1890-1900.
IEEE DOI
2105
BibRef
Xu, W.[Wanru],
Yu, J.[Jian],
Miao, Z.J.[Zhen-Jiang],
Wan, L.[Lili],
Tian, Y.[Yi],
Ji, Q.[Qiang],
Deep Reinforcement Polishing Network for Video Captioning,
MultMed(23), 2021, pp. 1772-1784.
IEEE DOI
2106
Grammar, Visualization, Steel, Clamps, Task analysis, Decoding,
Semantics, Video captioning, deep reinforcement learning,
grammar polishing
BibRef
Zhang, Z.W.[Zhi-Wang],
Xu, D.[Dong],
Ouyang, W.L.[Wan-Li],
Zhou, L.P.[Lu-Ping],
Dense Video Captioning Using Graph-Based Sentence Summarization,
MultMed(23), 2021, pp. 1799-1810.
IEEE DOI
2106
Proposals, Visualization, Semantics, Decoding,
Microprocessors, Feature extraction, Dense video captioning,
graph convolutional network
BibRef
Liu, S.[Sheng],
Ren, Z.[Zhou],
Yuan, J.S.[Jun-Song],
SibNet: Sibling Convolutional Encoder for Video Captioning,
PAMI(43), No. 9, September 2021, pp. 3259-3272.
IEEE DOI
2108
Visualization, Decoding, Semantics, Task analysis,
Feature extraction, Pipelines, Natural languages, SibNet,
convolutional encoder
BibRef
Yan, Y.C.[Yi-Chao],
Zhuang, N.[Ning],
Ni, B.B.[Bing-Bing],
Zhang, J.[Jian],
Xu, M.H.[Ming-Hao],
Zhang, Q.[Qiang],
Zhang, Z.[Zheng],
Cheng, S.[Shuo],
Tian, Q.[Qi],
Xu, Y.[Yi],
Yang, X.K.[Xiao-Kang],
Zhang, W.J.[Wen-Jun],
Fine-Grained Video Captioning via Graph-based Multi-Granularity
Interaction Learning,
PAMI(44), No. 2, February 2022, pp. 666-683.
IEEE DOI
2201
Sports, Task analysis, Feature extraction, Linguistics, Games,
Measurement, Video caption,
multiple granularity
BibRef
Deng, J.[Jincan],
Li, L.[Liang],
Zhang, B.[Beichen],
Wang, S.H.[Shu-Hui],
Zha, Z.J.[Zheng-Jun],
Huang, Q.M.[Qing-Ming],
Syntax-Guided Hierarchical Attention Network for Video Captioning,
CirSysVideo(32), No. 2, February 2022, pp. 880-892.
IEEE DOI
2202
Syntactics, Feature extraction, Visualization, Generators, Semantics,
Video captioning, syntax attention, content attention, global sentence-context
BibRef
Hua, X.[Xia],
Wang, X.Q.[Xin-Qing],
Rui, T.[Ting],
Shao, F.[Faming],
Wang, D.[Dong],
Adversarial Reinforcement Learning With Object-Scene Relational Graph
for Video Captioning,
IP(31), 2022, pp. 2004-2016.
IEEE DOI
2203
Semantics, Feature extraction, Visualization,
Reinforcement learning, Convolution, Training, Trajectory,
video understanding
BibRef
Wang, L.X.[Lan-Xiao],
Li, H.L.[Hong-Liang],
Qiu, H.Q.[He-Qian],
Wu, Q.B.[Qing-Bo],
Meng, F.M.[Fan-Man],
Ngan, K.N.[King Ngi],
POS-Trends Dynamic-Aware Model for Video Caption,
CirSysVideo(32), No. 7, July 2022, pp. 4751-4764.
IEEE DOI
2207
Visualization, Feature extraction, Task analysis, Decoding,
Handheld computers, Encoding, Syntactics, Video caption, sentence components
BibRef
Xue, P.[Ping],
Zhou, B.[Bing],
Exploring the Spatio-Temporal Aware Graph for video captioning,
IET-CV(16), No. 5, 2022, pp. 456-467.
DOI Link
2207
BibRef
Niu, T.Z.[Tian-Zi],
Dong, S.S.[Shan-Shan],
Chen, Z.D.[Zhen-Duo],
Luo, X.[Xin],
Huang, Z.[Zi],
Guo, S.[Shanqing],
Xu, X.S.[Xin-Shun],
A multi-layer memory sharing network for video captioning,
PR(136), 2023, pp. 109202.
Elsevier DOI
2301
Video captioning, Multi-layer network, Memory sharing,
Enhanced gated recurrent unit
BibRef
Tu, Y.[Yunbin],
Zhou, C.[Chang],
Guo, J.J.[Jun-Jun],
Li, H.F.[Hua-Feng],
Gao, S.X.[Sheng-Xiang],
Yu, Z.T.[Zheng-Tao],
Relation-aware attention for video captioning via graph learning,
PR(136), 2023, pp. 109204.
Elsevier DOI
2301
Video captioning, Relation-aware attention, Graph learning
BibRef
Guo, Z.X.[Zi-Xin],
Wang, T.J.J.[Tzu-Jui Julius],
Laaksonen, J.[Jorma],
Post-Attention Modulator for Dense Video Captioning,
ICPR22(1536-1542)
IEEE DOI
2212
Measurement, Correlation, Modulation, Benchmark testing, Transformers
BibRef
Yamazaki, K.[Kashu],
Truong, S.[Sang],
Vo, K.[Khoa],
Kidd, M.[Michael],
Rainwater, C.[Chase],
Luu, K.[Khoa],
Le, N.[Ngan],
VLCAP: Vision-Language with Contrastive Learning for Coherent Video
Paragraph Captioning,
ICIP22(3656-3661)
IEEE DOI
2211
Measurement, Learning systems, Visualization, Codes, Animals,
Coherence, Linguistics, Contrastive Learning, Video Captioning,
Language
BibRef
Lebron, L.[Luis],
Graham, Y.[Yvette],
O'Connor, N.E.[Noel E.],
McGuinness, K.[Kevin],
Evaluation of Automatically Generated Video Captions Using Vision and
Language Models,
ICIP22(2416-2420)
IEEE DOI
2211
Measurement, Adaptation models, Correlation, Filtering, Annotations,
Computational modeling, Video Captioning Evaluation,
Video Captioning
BibRef
Chatzikonstantinou, C.[Christos],
Valasidis, G.G.[Georgios Grigorios],
Stavridis, K.[Konstantinos],
Malogiannis, G.[Georgios],
Axenopoulos, A.[Apostolos],
Daras, P.[Petros],
UCF-CAP, Video Captioning in the Wild,
ICIP22(1386-1390)
IEEE DOI
2211
Deep learning, Computer science, Grounding, Annotations,
Transformers, captioning dataset, crime, transformer
BibRef
Zhang, Q.[Qi],
Song, Y.Q.[Yu-Qing],
Jin, Q.[Qin],
Unifying Event Detection and Captioning as Sequence Generation via
Pre-training,
ECCV22(XXXVI:363-379).
Springer DOI
2211
BibRef
Bi, T.Y.[Tian-Yu],
Jarnikov, D.[Dimitri],
Lukkien, J.[Johan],
Shot-Based Hybrid Fusion for Movie Genre Classification,
CIAP22(I:257-269).
Springer DOI
2205
BibRef
Fish, E.[Edward],
Weinbren, J.[Jon],
Gilbert, A.[Andrew],
Rethinking Genre Classification With Fine Grained Semantic Clustering,
ICIP21(1274-1278)
IEEE DOI
2201
Training, Deep learning, Visualization, Image processing, Semantics,
Logic gates
BibRef
Zhu, M.J.[Ming-Jian],
Video Captioning in Compressed Video,
ICIVC21(336-341)
IEEE DOI
2112
Visualization, Resists, Logic gates, Feature extraction,
Noise measurement, video captioning, compressed video,
video analysis and processing
BibRef
Lin, X.D.[Xu-Dong],
Bertasius, G.[Gedas],
Wang, J.[Jue],
Chang, S.F.[Shih-Fu],
Parikh, D.[Devi],
Torresani, L.[Lorenzo],
VX2TEXT: End-to-End Learning of Video-Based Text Generation From
Multimodal Inputs,
CVPR21(7001-7011)
IEEE DOI
2111
Training, Computational modeling, Semantics,
Fasteners, Transformers, Knowledge discovery
BibRef
Liao, Y.H.[Yuan-Hong],
Kar, A.[Amlan],
Fidler, S.[Sanja],
Towards Good Practices for Efficiently Annotating Large-Scale Image
Classification Datasets,
CVPR21(4348-4357)
IEEE DOI
2111
Analytical models, Annotations, Manuals,
Semisupervised learning, Probabilistic logic, Pattern recognition
BibRef
Song, Y.Q.[Yu-Qing],
Chen, S.Z.[Shi-Zhe],
Jin, Q.[Qin],
Towards Diverse Paragraph Captioning for Untrimmed Videos,
CVPR21(11240-11249)
IEEE DOI
2111
Training, Measurement, Visualization,
Event detection, Computational modeling, Memory management
BibRef
Chen, S.[Shaoxiang],
Jiang, Y.G.[Yu-Gang],
Towards Bridging Event Captioner and Sentence Localizer for Weakly
Supervised Dense Event Captioning,
CVPR21(8421-8431)
IEEE DOI
2111
Location awareness, Bridges, Training data,
Pattern recognition, Feeds, Task analysis
BibRef
Deng, C.R.[Chao-Rui],
Chen, S.Z.[Shi-Zhe],
Chen, D.[Da],
He, Y.[Yuan],
Wu, Q.[Qi],
Sketch, Ground, and Refine: Top-Down Dense Video Captioning,
CVPR21(234-243)
IEEE DOI
2111
Training, Codes, Grounding, Computational modeling,
Semantics, Benchmark testing
BibRef
Zhang, Z.[Ziqi],
Qi, Z.A.[Zhong-Ang],
Yuan, C.F.[Chun-Feng],
Shan, Y.[Ying],
Li, B.[Bing],
Deng, Y.[Ying],
Hu, W.M.[Wei-Ming],
Open-book Video Captioning with Retrieve-Copy-Generate Network,
CVPR21(9832-9841)
IEEE DOI
2111
Training, Computational modeling,
Natural languages, Benchmark testing, Generators, Pattern recognition
BibRef
Perez-Martin, J.[Jesus],
Bustos, B.[Benjamin],
Pérez, J.[Jorge],
Improving Video Captioning with Temporal Composition of a
Visual-Syntactic Embedding,
WACV21(3038-3048)
IEEE DOI
2106
Visualization, Video description, Semantics, Syntactics, Tagging
BibRef
Müller-Budack, E.[Eric],
Springstein, M.[Matthias],
Hakimov, S.[Sherzod],
Mrutzek, K.[Kevin],
Ewerth, R.[Ralph],
Ontology-driven Event Type Classification in Images,
WACV21(2927-2937)
IEEE DOI
2106
Event types such as natural disasters, sports events, or elections.
Training, Knowledge engineering, Semantic search,
Voting, Neural networks
BibRef
Hosseinzadeh, M.[Mehrdad],
Wang, Y.[Yang],
Video Captioning of Future Frames,
WACV21(979-988)
IEEE DOI
2106
Fuses, Semantics, Task analysis
BibRef
Knights, J.[Joshua],
Harwood, B.[Ben],
Ward, D.[Daniel],
Vanderkop, A.[Anthony],
Mackenzie-Ross, O.[Olivia],
Moghadam, P.[Peyman],
Temporally Coherent Embeddings for Self-Supervised Video
Representation Learning,
ICPR21(8914-8921)
IEEE DOI
2105
Training, Visualization, Benchmark testing, Network architecture,
Hardware, Data models, Spatiotemporal phenomena
BibRef
Rimle, P.[Philipp],
Dogan-Schönberger, P.[Pelin],
Gross, M.[Markus],
Enriching Video Captions With Contextual Text,
ICPR21(5474-5481)
IEEE DOI
2105
Knowledge engineering, Visualization, Vocabulary, Tools, Generators,
Data models, Data mining
BibRef
Bi, T.Y.[Tian-Yu],
Jarnikov, D.[Dmitri],
Lukkien, J.[Johan],
Video Representation Fusion Network For Multi-Label Movie Genre
Classification,
ICPR21(9386-9391)
IEEE DOI
2105
Training, Fuses, Motion pictures, Spatiotemporal phenomena,
Pattern recognition, Task analysis
BibRef
Poorgholi, S.[Soroosh],
Kayhan, O.S.[Osman Semih],
van Gemert, J.C.[Jan C.],
t-eva: Time-efficient t-sne Video Annotation,
HCAU20(153-169).
Springer DOI
2103
BibRef
Ai, J.B.[Jiang-Bo],
Yang, Y.[Yang],
Xu, X.[Xing],
Zhou, J.[Jie],
Shen, H.T.[Heng Tao],
CC-LSTM: Cross and Conditional Long-short Time Memory for Video
Captioning,
MMDLCA20(353-365).
Springer DOI
2103
BibRef
Zheng, Q.,
Wang, C.,
Tao, D.,
Syntax-Aware Action Targeting for Video Captioning,
CVPR20(13093-13102)
IEEE DOI
2008
Feature extraction, Syntactics, Visualization, Automobiles, Decoding,
Object recognition, Semantics
BibRef
Zhang, Z.,
Shi, Y.,
Yuan, C.,
Li, B.,
Wang, P.,
Hu, W.,
Zha, Z.,
Object Relational Graph With Teacher-Recommended Learning for Video
Captioning,
CVPR20(13275-13285)
IEEE DOI
2008
Task analysis, Visualization, Feature extraction, Cognition,
Linguistics, Training, Proposals
BibRef
Iashin, V.,
Rahtu, E.,
Multi-modal Dense Video Captioning,
MULWS20(4117-4126)
IEEE DOI
2008
Feature extraction, Proposals, Visualization, Task analysis,
Decoding, Natural languages, Generators
BibRef
Pan, B.,
Cai, H.,
Huang, D.,
Lee, K.,
Gaidon, A.,
Adeli, E.,
Niebles, J.C.,
Spatio-Temporal Graph for Video Captioning With Knowledge
Distillation,
CVPR20(10867-10876)
IEEE DOI
2008
Task analysis, Feature extraction, Cats, Correlation,
Noise measurement, Training, Visualization
BibRef
Liu, J.Z.[Jing-Zhou],
Chen, W.[Wenhu],
Cheng, Y.[Yu],
Gan, Z.[Zhe],
Yu, L.C.[Li-Cheng],
Yang, Y.M.[Yi-Ming],
Liu, J.J.[Jing-Jing],
Violin: A Large-Scale Dataset for Video-and-Language Inference,
CVPR20(10897-10907)
IEEE DOI
2008
Dataset, Video. Task analysis, Visualization, Cognition, Natural languages, TV,
Motion pictures, Benchmark testing
BibRef
da Silva, J.L.[Joed Lopes],
Tabata, A.N.[Alan Naoto],
Broto, L.C.[Lucas Cardoso],
Cocron, M.P.[Marta Pereira],
Zimmer, A.[Alessandro],
Brandmeier, T.[Thomas],
Open Source Multipurpose Multimedia Annotation Tool,
ICIAR20(I:356-367).
Springer DOI
2007
BibRef
Cherian, A.,
Wang, J.,
Hori, C.,
Marks, T.K.,
Spatio-Temporal Ranked-Attention Networks for Video Captioning,
WACV20(1606-1615)
IEEE DOI
2006
Visualization, Computational modeling, Feature extraction,
Training, Measurement, Task analysis
BibRef
Hemalatha, M.,
Sekhar, C.C.,
Domain-Specific Semantics Guided Approach to Video Captioning,
WACV20(1576-1585)
IEEE DOI
2006
Semantics, Feature extraction, Decoding, Visualization, Training,
Vegetable oils, Streaming media
BibRef
Wang, B.,
Ma, L.,
Zhang, W.,
Jiang, W.,
Wang, J.,
Liu, W.,
Controllable Video Captioning With POS Sequence Guidance Based on
Gated Fusion Network,
ICCV19(2641-2650)
IEEE DOI
2004
Code, Captioning.
WWW Link. image fusion, image representation, image sequences,
learning (artificial intelligence), Encoding
BibRef
Hou, J.,
Wu, X.,
Zhao, W.,
Luo, J.,
Jia, Y.,
Joint Syntax Representation Learning and Visual Cue Translation for
Video Captioning,
ICCV19(8917-8926)
IEEE DOI
2004
computational linguistics, feature extraction,
learning (artificial intelligence), Mixture models
BibRef
Pei, W.J.[Wen-Jie],
Zhang, J.Y.[Ji-Yuan],
Wang, X.R.[Xiang-Rong],
Ke, L.[Lei],
Shen, X.Y.[Xiao-Yong],
Tai, Y.W.[Yu-Wing],
Memory-Attended Recurrent Network for Video Captioning,
CVPR19(8339-8348).
IEEE DOI
2002
BibRef
Aafaq, N.[Nayyer],
Akhtar, N.[Naveed],
Liu, W.[Wei],
Gilani, S.Z.[Syed Zulqarnain],
Mian, A.[Ajmal],
Spatio-Temporal Dynamics and Semantic Attribute Enriched Visual
Encoding for Video Captioning,
CVPR19(12479-12488).
IEEE DOI
2002
BibRef
Fuhl, W.[Wolfgang],
Castner, N.[Nora],
Zhuang, L.[Lin],
Holzer, M.[Markus],
Rosenstiel, W.[Wolfgang],
Kasneci, E.[Enkelejda],
MAM: Transfer Learning for Fully Automatic Video Annotation and
Specialized Detector Creation,
Egocentric18(V:375-388).
Springer DOI
1905
BibRef
Sun, X.Y.[Xin-Yu],
Chen, P.H.[Pei-Hao],
Chen, L.W.[Liang-Wei],
Li, C.H.[Chang-Hao],
Li, T.H.[Thomas H.],
Tan, M.K.[Ming-Kui],
Gan, C.[Chuang],
Masked Motion Encoding for Self-Supervised Video Representation
Learning,
CVPR23(2235-2245)
IEEE DOI
2309
BibRef
Huang, D.[Deng],
Wu, W.H.[Wen-Hao],
Hu, W.W.[Wei-Wen],
Liu, X.[Xu],
He, D.L.[Dong-Liang],
Wu, Z.H.[Zhi-Hua],
Wu, X.M.[Xiang-Miao],
Tan, M.K.[Ming-Kui],
Ding, E.[Errui],
ASCNet: Self-Supervised Video Representation Learning with
Appearance-Speed Consistency,
ICCV21(8076-8085)
IEEE DOI
2203
Representation learning, Visualization, Image recognition, Codes,
Noise measurement, Data mining, Video analysis and understanding,
Transfer/Low-shot/Semi/Unsupervised Learning
BibRef
Gan, C.[Chuang],
Gong, B.Q.[Bo-Qing],
Liu, K.[Kun],
Su, H.[Hao],
Guibas, L.J.[Leonidas J.],
Geometry Guided Convolutional Neural Networks for Self-Supervised
Video Representation Learning,
CVPR18(5589-5597)
IEEE DOI
1812
Geometry, Motion pictures,
Task analysis, Semantics, Training, Feature extraction
BibRef
Liu, D.,
Zhou, Y.,
Sun, X.,
Zha, Z.,
Zeng, W.,
Adaptive Pooling in Multi-instance Learning for Web Video Annotation,
WSM17(318-327)
IEEE DOI
1802
Correlation, Feature extraction, Fuses, Tagging, Training, Visualization
BibRef
Marwah, T.,
Mittal, G.,
Balasubramanian, V.N.,
Attentive Semantic Video Generation Using Captions,
ICCV17(1435-1443)
IEEE DOI
1802
image representation, video signal processing,
attentive semantic video generation, latent representation,
Semantics
BibRef
Krishna, R.,
Hata, K.,
Ren, F.,
Fei-Fei, L.,
Niebles, J.C.,
Dense-Captioning Events in Videos,
ICCV17(706-715)
IEEE DOI
1802
image motion analysis, video signal processing,
ActivityNet Captions, captioning module, dense-captioning events,
Windows
BibRef
Kalboussi, R.[Rahma],
Abdellaoui, M.[Mehrez],
Douik, A.[Ali],
Video Saliency Detection Based on Boolean Map Theory,
CIAP17(I:119-128).
Springer DOI
1711
BibRef
Pobar, M.[Miran],
Ivasic-Kos, M.[Marina],
Multi-label Poster Classification into Genres Using Different Problem
Transformation Methods,
CAIP17(II: 367-378).
Springer DOI
1708
Classify movies by the poster.
BibRef
Sageder, G.[Gerhard],
Zaharieva, M.[Maia],
Breiteneder, C.[Christian],
Group Feature Selection for Audio-Based Video Genre Classification,
MMMod16(I: 29-41).
Springer DOI
1601
BibRef
Mori, M.[Minoru],
Kimiyama, H.[Hiroyuki],
Ogawara, M.[Masanori],
Search-Based Content Analysis System on Online Collaborative Platform
for Film Production,
ICPR14(1091-1096)
IEEE DOI
1412
Accuracy
BibRef
Jang, W.D.[Won-Dong],
Lee, C.[Chulwoo],
Sim, J.Y.[Jae-Young],
Kim, C.S.[Chang-Su],
Automatic Video Genre Classification Using Multiple SVM Votes,
ICPR14(2655-2660)
IEEE DOI
1412
Accuracy
BibRef
Almeida, J.[Jurandy],
Guimarães Pedronette, D.C.[Daniel Carlos],
Penatti, O.A.B.[Otávio A.B.],
Unsupervised Manifold Learning for Video Genre Retrieval,
CIARP14(604-612).
Springer DOI
1411
BibRef
Ding, X.M.[Xin-Miao],
Li, B.[Bing],
Hu, W.M.[Wei-Ming],
Xiong, W.H.[Wei-Hua],
Wang, Z.C.[Zhen-Chong],
Context-aware horror video scene recognition via cost-sensitive sparse
coding,
ICPR12(1904-1907).
WWW Link.
1302
BibRef
Ionescu, B.[Bogdan],
Vertan, C.[Constantin],
Lambert, P.[Patrick],
Benoit, A.[Alexandre],
A color-action perceptual approach to the classification of animated
movies,
ICMR11(10).
DOI Link
1301
Two categories of content descriptors: temporal and color-based.
BibRef
Strat, S.T.[Sabin Tiberius],
Benoit, A.[Alexandre],
Bredin, H.[Hervé],
Quénot, G.[Georges],
Lambert, P.[Patrick],
Hierarchical Late Fusion for Concept Detection in Videos,
Concept12(III: 335-344).
Springer DOI
1210
BibRef
Nagaraja, N.S.[Naveen Shankar],
Ochs, P.[Peter],
Liu, K.[Kun],
Brox, T.[Thomas],
Hierarchy of Localized Random Forests for Video Annotation,
DAGM12(21-30).
Springer DOI
1209
BibRef
Tsapanos, N.[Nikolaos],
Nikolaidis, N.[Nikolaos],
Pitas, I.[Ioannis],
Towards automated post-production and semantic annotation of films,
ICIIP11(1-4).
IEEE DOI
1112
BibRef
Li, B.[Bing],
Hu, W.M.[Wei-Ming],
Xiong, W.H.[Wei-Hua],
Wu, O.[Ou],
Li, W.[Wei],
Horror Image Recognition Based on Emotional Attention,
ACCV10(II: 594-605).
Springer DOI
1011
For filtering videos on the web.
BibRef
Wang, J.C.[Jian-Chao],
Li, B.[Bing],
Hu, W.M.[Wei-Ming],
Wu, O.[Ou],
Horror movie scene recognition based on emotional perception,
ICIP10(1489-1492).
IEEE DOI
1009
BibRef
Chen, J.F.[Jian-Feng],
Lu, H.[Hong],
Wei, R.Z.[Ren-Zhong],
Jin, C.[Cheng],
Xue, X.Y.[Xiang-Yang],
An effective method for video genre classification,
CIVR10(97-104).
DOI Link
1007
BibRef
Kowdle, A.[Adarsh],
Chang, K.W.[Kuo-Wei],
Chen, T.H.[Tsu-Han],
Video categorization using object of interest detection,
ICIP10(4569-4572).
IEEE DOI
1009
BibRef
Petersohn, C.[Christian],
Temporal video structuring for preservation and annotation of video
content,
ICIP09(93-96).
IEEE DOI
0911
BibRef
Wang, F.S.[Fang-Shi],
Lu, W.[Wei],
Liu, J.G.[Jin-Gen],
Shah, M.[Mubarak],
Xu, D.[De],
Automatic video annotation with adaptive number of key words,
ICPR08(1-4).
IEEE DOI
0812
BibRef
Kutics, A.[Andrea],
Nakagawa, A.[Akihiko],
Shindoh, K.[Kazuhiro],
Use of Adaptive Still Image Descriptors for Annotation of Video Frames,
ICIAR07(686-697).
Springer DOI
0708
BibRef
Wang, F.S.[Fang-Shi],
Xu, D.[De],
Lu, W.[Wei],
Xu, H.L.[Hong-Li],
Automatic Annotation and Retrieval for Videos,
PSIVT06(1030-1040).
Springer DOI
0612
BibRef
Rosten, E.[Edward],
Reitmayr, G.[Gerhard],
Drummond, T.W.[Tom W.],
Real-Time Video Annotations for Augmented Reality,
ISVC05(294-302).
Springer DOI
0512
BibRef
Caspi, Y.,
Bargeron, D.,
Sharing video annotations,
ICIP04(IV: 2227-2230).
IEEE DOI
0505
BibRef
Wang, M.[Mei],
Zhou, X.D.[Xiang-Dong],
Chua, T.S.[Tat-Seng],
Automatic image annotation via local multi-label classification,
CIVR08(17-26).
0807
BibRef
Shi, R.[Rui],
Feng, H.M.[Hua-Min],
Chua, T.S.[Tat-Seng],
Lee, C.H.[Chin-Hui],
An Adaptive Image Content Representation and Segmentation Approach to
Automatic Image Annotation,
CIVR04(545-554).
Springer DOI
0505
BibRef
Jeon, J.[Jiwoon],
Manmatha, R.,
Using Maximum Entropy for Automatic Image Annotation,
CIVR04(24-32).
Springer DOI
0505
BibRef
Feng, S.L.[Shao-Lei],
Manmatha, R.,
A discrete direct retrieval model for image and video retrieval,
CIVR08(427-436).
0807
BibRef
Petkovic, M.,
Mihajlovic, V.,
Jonker, W.,
Techniques for automatic video content derivation,
ICIP03(II: 611-614).
IEEE DOI
0312
BibRef
Zhang, C.[Cha],
Chen, T.H.[Tsu-Han],
Annotating retrieval database with active learning,
ICIP03(II: 595-598).
IEEE DOI
0312
BibRef
Naphade, M.R.,
Smith, J.R.,
Learning visual models of semantic concepts,
ICIP03(II: 531-534).
IEEE DOI
0312
BibRef
And:
Learning regional semantic concepts from incomplete annotation,
ICIP03(II: 603-606).
IEEE DOI
0312
BibRef
Hoogs, A.,
Rittscher, J.,
Stein, G.,
Schmiederer, J.,
Video content annotation using visual analysis and a large semantic
knowledgebase,
CVPR03(II: 327-334).
IEEE DOI
0307
BibRef
del Bimbo, A.,
Expressive Semantics for automatic annotation and retrieval of video
streams,
ICME00(TAS3).
0007
BibRef
Lienhart, R.,
A system for effortless content annotation to unfold the semantics in
videos,
CBAIVL00(45-49).
0008
BibRef
Madrane, N.,
Goldberg, M.,
Towards Automatic Annotation of Video Documents,
ICPR94(A:773-776).
IEEE DOI
BibRef
9400
Chapter on Implementations and Applications, Databases, QBIC, Video Analysis, Hardware and Software, Inspection continues in
Video Analysis, Key Frames, Keyframe.