13.6.10.1 LSTM: Long Short-Term Memory for Captioning, Image Captioning

Long Short-Term Memory (LSTM). General method:
See also LSTM: Long Short-Term Memory.
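For orientation, the recurrence underlying the LSTM-based captioning decoders referenced below can be sketched as a single cell step. This is a minimal NumPy sketch with illustrative names (`lstm_step`, stacked gate matrices `W`, `U`, `b`), not the exact formulation of any paper in this section:

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step (minimal sketch).

    x: input vector (d,); h_prev, c_prev: previous hidden/cell state (n,).
    W: (4n, d) input weights; U: (4n, n) recurrent weights; b: (4n,) bias.
    Gate order in the stacked matrices: input, forget, output, candidate.
    """
    n = h_prev.size
    z = W @ x + U @ h_prev + b            # pre-activations for all four gates
    i = 1.0 / (1.0 + np.exp(-z[:n]))      # input gate
    f = 1.0 / (1.0 + np.exp(-z[n:2*n]))   # forget gate
    o = 1.0 / (1.0 + np.exp(-z[2*n:3*n])) # output gate
    g = np.tanh(z[3*n:])                  # candidate cell update
    c = f * c_prev + i * g                # new cell state (the "memory")
    h = o * np.tanh(c)                    # new hidden state, fed to the next step
    return h, c
```

In a captioning decoder, `x` is typically the embedding of the previous word (often concatenated with an attended image or video feature), and `h` is projected to a vocabulary distribution at each step.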

Gao, L.L.[Lian-Li], Guo, Z.[Zhao], Zhang, H.W.[Han-Wang], Xu, X.[Xing], Shen, H.T.[Heng Tao],
Video Captioning With Attention-Based LSTM and Semantic Consistency,
MultMed(19), No. 9, September 2017, pp. 2045-2055.
IEEE DOI 1708
Computational modeling, Correlation, Feature extraction, Neural networks, Semantics, Visualization, Attention mechanism, embedding, long short-term memory (LSTM), video captioning BibRef

Bin, Y., Yang, Y., Shen, F., Xie, N., Shen, H.T., Li, X.,
Describing Video With Attention-Based Bidirectional LSTM,
Cyber(49), No. 7, July 2019, pp. 2631-2641.
IEEE DOI 1905
Visualization, Semantics, Decoding, Feature extraction, Natural languages, Recurrent neural networks, Grammar, video captioning BibRef

Fu, K.[Kun], Jin, J.Q.[Jun-Qi], Cui, R.P.[Run-Peng], Sha, F.[Fei], Zhang, C.S.[Chang-Shui],
Aligning Where to See and What to Tell: Image Captioning with Region-Based Attention and Scene-Specific Contexts,
PAMI(39), No. 12, December 2017, pp. 2321-2334.
IEEE DOI 1711
Adaptation models, Computational modeling, Context modeling, Data mining, Feature extraction, Image classification, Visualization, Image captioning, LSTM, visual attention BibRef

Xiao, C.M.[Chang-Ming], Yang, Q.[Qi], Xu, X.Q.[Xiao-Qiang], Zhang, J.W.[Jian-Wei], Zhou, F.[Feng], Zhang, C.S.[Chang-Shui],
Where you edit is what you get: Text-guided image editing with region-based attention,
PR(139), 2023, pp. 109458.
Elsevier DOI 2304
Generative adversarial networks, Text-guided image editing, Spatial disentanglement BibRef

Nian, F.D.[Fu-Dong], Li, T.[Teng], Wang, Y.[Yan], Wu, X.Y.[Xin-Yu], Ni, B.B.[Bing-Bing], Xu, C.S.[Chang-Sheng],
Learning explicit video attributes from mid-level representation for video captioning,
CVIU(163), No. 1, 2017, pp. 126-138.
Elsevier DOI 1712
Mid-level video representation BibRef

Ye, S., Han, J., Liu, N.,
Attentive Linear Transformation for Image Captioning,
IP(27), No. 11, November 2018, pp. 5514-5524.
IEEE DOI 1809
feature extraction, image classification, learning (artificial intelligence), matrix algebra, probability, LSTM BibRef

Xian, Y., Tian, Y.,
Self-Guiding Multimodal LSTM: When We Do Not Have a Perfect Training Dataset for Image Captioning,
IP(28), No. 11, November 2019, pp. 5241-5252.
IEEE DOI 1909
Task analysis, Visualization, Training, Semantics, Flickr, Urban areas, Training data, Image captioning, self-guiding, real-world dataset, recurrent neural network BibRef

Peng, Y.Q.[Yu-Qing], Liu, X.[Xuan], Wang, W.H.[Wei-Hua], Zhao, X.S.[Xiao-Song], Wei, M.[Ming],
Image caption model of double LSTM with scene factors,
IVC(86), 2019, pp. 38-44.
Elsevier DOI 1906
Image caption, Deep neural network, Scene recognition, Semantic information BibRef

Wu, L., Xu, M., Wang, J., Perry, S.,
Recall What You See Continually Using GridLSTM in Image Captioning,
MultMed(22), No. 3, March 2020, pp. 808-818.
IEEE DOI 2003
Visualization, Decoding, Task analysis, Neural networks, Training, Computational modeling, Logic gates, Image captioning, GridLSTM, recurrent neural network BibRef

Deng, Z.R.[Zhen-Rong], Jiang, Z.Q.[Zhou-Qin], Lan, R.[Rushi], Huang, W.M.[Wen-Ming], Luo, X.N.[Xiao-Nan],
Image captioning using DenseNet network and adaptive attention,
SP:IC(85), 2020, pp. 115836.
Elsevier DOI 2005
Image captioning, DenseNet, LSTM, Adaptive attention mechanism BibRef

Ji, J., Xu, C., Zhang, X., Wang, B., Song, X.,
Spatio-Temporal Memory Attention for Image Captioning,
IP(29), 2020, pp. 7615-7628.
IEEE DOI 2007
Image captioning, spatio-temporal relationship, attention transmission, memory attention, LSTM BibRef

Che, W.B.[Wen-Bin], Fan, X.P.[Xiao-Peng], Xiong, R.Q.[Rui-Qin], Zhao, D.B.[De-Bin],
Visual Relationship Embedding Network for Image Paragraph Generation,
MultMed(22), No. 9, September 2020, pp. 2307-2320.
IEEE DOI 2008
Visualization, Semantics, Task analysis, Proposals, Automobiles, Buildings, Paragraph generation, image caption, LSTM BibRef

Zhang, J.[Jing], Li, K.K.[Kang-Kang], Wang, Z.[Zhe],
Parallel-fusion LSTM with synchronous semantic and visual information for image captioning,
JVCIR(75), 2021, pp. 103044.
Elsevier DOI 2103
Image captioning, Parallel-fusion LSTM, Attention mechanism, Guiding LSTM BibRef

He, S.[Shan], Lu, Y.Y.[Yuan-Yao], Chen, S.N.[Sheng-Nan],
Image Captioning Algorithm Based on Multi-Branch CNN and Bi-LSTM,
IEICE(E104-D), No. 7, July 2021, pp. 941-947.
WWW Link. 2107
BibRef

Yuan, J.[Jin], Zhu, S.[Shuai], Huang, S.Y.[Shu-Yin], Zhang, H.W.[Han-Wang], Xiao, Y.Q.[Yao-Qiang], Li, Z.Y.[Zhi-Yong], Wang, M.[Meng],
Discriminative Style Learning for Cross-Domain Image Captioning,
IP(31), 2022, pp. 1723-1736.
IEEE DOI 2202
Decoding, Visualization, Syntactics, Semantics, Training, Logic gates, Birds, Cross-domain, image captioning, style, instruction-based LSTM BibRef

Zhou, Y.[Yuanen], Zhang, Y.[Yong], Hu, Z.Z.[Zhen-Zhen], Wang, M.[Meng],
Semi-Autoregressive Transformer for Image Captioning,
CLVL21(3132-3136)
IEEE DOI 2112
Training, Degradation, Codes, Benchmark testing, Transformers BibRef

Lv, G.[Gang], Sun, Y.N.[Yi-Ning], Nian, F.D.[Fu-Dong], Zhu, M.F.[Mao-Fei], Tang, W.L.[Wen-Liang], Hu, Z.Z.[Zhen-Zhen],
COME: Clip-OCR and Master ObjEct for text image captioning,
IVC(136), 2023, pp. 104751.
Elsevier DOI 2308
Image captioning, Graph convolution network, OCR, LSTM BibRef

Niu, Z.X.[Zhen-Xing], Zhou, M.[Mo], Wang, L.[Le], Gao, X.B.[Xin-Bo], Hua, G.[Gang],
Hierarchical Multimodal LSTM for Dense Visual-Semantic Embedding,
ICCV17(1899-1907)
IEEE DOI 1802
Map sentences and images. Document image processing, image representation, recurrent neural networks, HM-LSTM, Hierarchical Multimodal LSTM BibRef

Tan, Y.H.[Ying Hua], Chan, C.S.[Chee Seng],
phi-LSTM: A Phrase-Based Hierarchical LSTM Model for Image Captioning,
ACCV16(V: 101-117).
Springer DOI 1704
BibRef

Wang, M.[Minsi], Song, L.[Li], Yang, X.K.[Xiao-Kang], Luo, C.F.[Chuan-Fei],
A parallel-fusion RNN-LSTM architecture for image caption generation,
ICIP16(4448-4452)
IEEE DOI 1610
Computational modeling. Combines deep convolutional networks and recurrent neural networks. BibRef

Chapter on Matching and Recognition Using Volumes, High Level Vision Techniques, Invariants continues in
Multi-Modal, Cross-Modal Captioning, Image Captioning.


Last update: Apr 18, 2024 at 11:38:49