13.4.1.7 Surveys, Comparisons, Evaluations, Principal Components

Chapter Contents
PCA. Principal Components. Evaluation, Principal Components. Survey, PCA. Survey, Principal Components.

Gerbrands, J.J.[Jan J.],
On the relationships between SVD, KLT and PCA,
PR(14), No. 1-6, 1981, pp. 375-381.
Elsevier DOI 0309
BibRef
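The relationship named in the title above is the standard link between the SVD of a centered data matrix and the eigen-decomposition of its sample covariance (the KLT/PCA view). A minimal numpy check of that equivalence (an illustration only, not code or notation from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))              # 100 samples, 5 features
Xc = X - X.mean(axis=0)                    # centered data matrix

# KLT / PCA view: eigen-decomposition of the sample covariance.
cov = Xc.T @ Xc / (len(Xc) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)
order = eigvals.argsort()[::-1]            # sort descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# SVD view: right singular vectors of the centered data matrix.
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# Same decomposition: eigenvalues are squared singular values / (n - 1),
# and the principal directions agree up to sign.
print(np.allclose(eigvals, s**2 / (len(Xc) - 1)))
print(np.allclose(np.abs(Vt), np.abs(eigvecs.T)))
```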

Zhao, L.[Li], Yang, Y.H.[Yee-Hong],
Theoretical analysis of illumination in PCA-based vision systems,
PR(32), No. 4, April 1999, pp. 547-564.
Elsevier DOI 9904
BibRef

Martínez, A.M.[Aleix M.], Kak, A.C.[Avinash C.],
PCA versus LDA,
PAMI(23), No. 2, February 2001, pp. 228-233.
IEEE DOI 0102
When the training set is small, PCA (Principal Components Analysis) outperforms LDA (Linear Discriminant Analysis) and is less sensitive to different training sets. Applied to faces with occlusions. BibRef
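The small-sample effect noted above is easy to probe with off-the-shelf tools. The sketch below is a hedged illustration using scikit-learn on the digits toy set (not the authors' face data, occlusion setup, or protocol); it compares PCA and LDA as feature extractors in front of the same classifier while the training set shrinks:

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

X, y = load_digits(return_X_y=True)

# Shrink the training set and compare PCA and LDA as feature extractors
# in front of the same 1-NN classifier.
for train_size in (0.5, 0.1, 0.02):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=train_size, stratify=y, random_state=0)
    pca_clf = make_pipeline(PCA(n_components=9), KNeighborsClassifier(1))
    lda_clf = make_pipeline(LinearDiscriminantAnalysis(), KNeighborsClassifier(1))
    pca_acc = pca_clf.fit(X_tr, y_tr).score(X_te, y_te)
    lda_acc = lda_clf.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"train fraction {train_size:.2f}  PCA+1NN {pca_acc:.3f}  LDA+1NN {lda_acc:.3f}")
```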

Ramamoorthi, R.[Ravi],
Analytic PCA Construction for Theoretical Analysis of Lighting Variability in Images of a Lambertian Object,
PAMI(24), No. 10, October 2002, pp. 1322-1333.
IEEE Abstract. 0210
Analyzes the PCA construction under changing lighting. BibRef

Guillamet, D., Vitrià, J.,
Evaluation of distance metrics for recognition based on non-negative matrix factorization,
PRL(24), No. 9-10, June 2003, pp. 1599-1605.
Elsevier DOI 0304
BibRef
Earlier:
Determining a suitable metric when using non-negative matrix factorization,
ICPR02(II: 128-131).
IEEE DOI 0211
BibRef

Guillamet, D., Vitrià, J., Schiele, B.,
Introducing a weighted non-negative matrix factorization for image classification,
PRL(24), No. 14, October 2003, pp. 2447-2454.
Elsevier DOI 0307
BibRef
Earlier: A1, A3, A2:
Analyzing non-negative matrix factorization for image classification,
ICPR02(II: 116-119).
IEEE DOI 0211
BibRef

Guillamet, D., Vitrià, J.,
Discriminant basis for object classification,
CIAP01(256-261).
IEEE DOI 0210
BibRef

Guillamet, D.[David], Bressan, M.[Marco], and Vitrià, J.[Jordi],
A Weighted Non-negative Matrix Factorization for Local Representations,
CVPR01(I:942-947).
IEEE DOI 0110
Deals with problems of the original NMF formulation to obtain better representations. BibRef
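For context on the NMF entries above, the baseline (unweighted) factorization can be written with the standard Lee-Seung multiplicative updates; the sketch below is a generic illustration only, and the weighted formulation of the paper modifies these updates with weights in a way not reproduced here:

```python
import numpy as np

def nmf(V, rank, n_iter=200, eps=1e-9):
    """Plain NMF via Lee-Seung multiplicative updates, V ~ W @ H with
    all factors non-negative (the weighted variant above alters these
    updates with additional weights, not shown here)."""
    rng = np.random.default_rng(0)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

V = np.abs(np.random.default_rng(1).normal(size=(30, 20)))   # non-negative toy data
W, H = nmf(V, rank=5)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))         # reconstruction error
```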

Bressan, M.[Marco], Guillamet, D.[David], and Vitrià, J.[Jordi],
Using an ICA Representation of High Dimensional Data for Object Recognition and Classification,
CVPR01(I:1004-1009).
IEEE DOI 0110
BibRef

Bressan, M.[Marco], Vitrià, J.[Jordi],
Independent Modes of Variation in Point Distribution Models,
VF01(123 ff.).
Springer DOI 0209
BibRef

Guillamet, D., Moghaddam, B., Vitrià, J.,
Higher-order dependencies in local appearance models,
ICIP03(I: 213-216).
IEEE DOI 0312
BibRef
Earlier: A2, A1, A3:
Local appearance-based models using high-order statistics of image features,
CVPR03(I: 729-735).
IEEE DOI 0307
BibRef

Wang, L.W.[Li-Wei], Wang, X.[Xiao], Zhang, X.R.[Xue-Rong], Feng, J.F.[Ju-Fu],
The equivalence of two-dimensional PCA to line-based PCA,
PRL(26), No. 1, 1 January 2005, pp. 57-60.
Elsevier DOI 0501
BibRef
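Two-dimensional PCA, as discussed in this and the related entries, builds an image covariance matrix from whole image rows instead of vectorized images. A minimal numpy sketch (generic 2DPCA, not the authors' code):

```python
import numpy as np

def two_dimensional_pca(images, n_components):
    """2DPCA: eigenvectors of the image covariance
    G = mean((A_i - Abar)^T (A_i - Abar)) give the projection directions."""
    images = np.asarray(images, dtype=float)        # shape (N, h, w)
    mean_img = images.mean(axis=0)
    centered = images - mean_img
    G = np.einsum('nij,nik->jk', centered, centered) / len(images)
    eigvecs = np.linalg.eigh(G)[1][:, ::-1]         # leading eigenvectors first
    return mean_img, eigvecs[:, :n_components]      # W has shape (w, d)

def project(images, mean_img, W):
    # Every image row is projected onto the same d directions.
    return (np.asarray(images, dtype=float) - mean_img) @ W

rng = np.random.default_rng(0)
imgs = rng.normal(size=(20, 16, 12))                # toy "images"
mean_img, W = two_dimensional_pca(imgs, n_components=4)
print(project(imgs, mean_img, W).shape)             # (20, 16, 4)
```

Because G accumulates scatter over the rows of every training image, projecting rows onto W amounts to a PCA applied to image lines, which is, in essence, the equivalence examined in the paper.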

Martínez, A.M.[Aleix M.], Zhu, M.L.[Man-Li],
Where Are Linear Feature Extraction Methods Applicable?,
PAMI(27), No. 12, December 2005, pp. 1934-1944.
IEEE DOI 0512
Analyzes where and why eigen-based linear feature extraction methods fail: when the smallest angle between the ith eigenvector and the first i eigenvectors is close to zero, there are problems. BibRef
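The angle criterion in the annotation can be computed with standard tools. The sketch below is only a generic illustration of measuring how close one eigenvector lies to the span of a set of leading eigenvectors; the synthetic scatter matrices and the index convention are assumptions, not the paper's construction:

```python
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(1)
A = rng.normal(size=(10, 10)); S1 = A @ A.T     # two symmetric PSD "scatter" matrices
B = rng.normal(size=(10, 10)); S2 = B @ B.T     # (purely synthetic stand-ins)

V1 = np.linalg.eigh(S1)[1][:, ::-1]             # eigenvectors, leading first
V2 = np.linalg.eigh(S2)[1][:, ::-1]

i = 3
# Smallest principal angle between the i-th eigenvector of S1 and the span
# of the first i eigenvectors of S2; a value near zero flags the kind of
# degenerate situation the annotation refers to.
theta = subspace_angles(V1[:, i:i + 1], V2[:, :i]).min()
print(np.degrees(theta))
```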

Gao, H.[Hui], Davis, J.W.[James W.],
Why direct LDA is not equivalent to LDA,
PR(39), No. 5, May 2006, pp. 1002-1006.
Elsevier DOI 0604
BibRef
Earlier:
Sampling Representative Examples for Dimensionality Reduction and Recognition: Bumping LDA,
ECCV06(III: 275-287).
Springer DOI 0608
Linear discriminant analysis; Direct LDA; Small sample size problem BibRef

Chen, P.[Pei], Suter, D.[David],
An Analysis of Linear Subspace Approaches for Computer Vision and Pattern Recognition,
IJCV(68), No. 1, June 2006, pp. 83-106.
Springer DOI 0605
Such as PCA or SVD. BibRef

Bethge, M.[Matthias],
Factorial coding of natural images: how effective are linear models in removing higher-order dependencies?,
JOSA-A(23), No. 6, June 2006, pp. 1253-1268.
WWW Link. 0610
BibRef

Vicente, M.A.[M. Asunción], Hoyer, P.O.[Patrik O.], Hyvärinen, A.[Aapo],
Equivalence of Some Common Linear Feature Extraction Techniques for Appearance-Based Object Recognition Tasks,
PAMI(29), No. 5, May 2007, pp. 896-900.
IEEE DOI 0704
Addresses contradictory evaluations of PCA vs. ICA: whitened PCA may yield results identical to ICA in some cases. Describes the situations where ICA improves on PCA. BibRef
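The whitening step behind the observation above is short to write down. The sketch below (a generic illustration, not the authors' experiment) produces whitened PCA features; since ICA applied to whitened data can only add a further orthogonal rotation, rotation-invariant classifiers see no difference:

```python
import numpy as np

def pca_whiten(X, n_components):
    """Project onto the leading principal components and rescale each
    component to unit variance (whitened PCA)."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_components].T
    scale = s[:n_components] / np.sqrt(len(X) - 1)   # per-component std. dev.
    return (Xc @ V) / scale

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8)) @ rng.normal(size=(8, 8))   # correlated data
Z = pca_whiten(X, n_components=5)
print(np.round(np.cov(Z, rowvar=False), 3))               # approx. identity

# ICA run on Z can only apply a further orthogonal rotation, so any
# rotation-invariant classifier (e.g. nearest neighbour with Euclidean
# distances) behaves identically on whitened-PCA and ICA features.
```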

Vicente, M.A.[M. Asunción], Fernández, C.[Cesar], Reinoso, O.[Oscar], Payá, L.[Luis],
3D Object Recognition from Appearance: PCA Versus ICA Approaches,
ICIAR04(I: 547-555).
Springer DOI 0409
BibRef

Gao, Q.X.[Quan-Xue],
Is two-dimensional PCA equivalent to a special case of modular PCA?,
PRL(28), No. 10, 15 July 2007, pp. 1250-1251.
Elsevier DOI 0706
Modular PCA; Two-dimensional PCA BibRef

Shih, F.Y.[Frank Y.], Zhang, K.[Kai],
A distance-based separator representation for pattern classification,
IVC(26), No. 5, May 2008, pp. 667-672.
Elsevier DOI 0803
Pattern representation; Classification; Support vector machine; PCA; LDA BibRef

Zheng, W.S.[Wei-Shi], Lai, J.H.[Jian-Huang], Li, S.Z.[Stan Z.],
1D-LDA vs. 2D-LDA: When is vector-based linear discriminant analysis better than matrix-based?,
PR(41), No. 7, July 2008, pp. 2156-2172.
Elsevier DOI 0804
Fisher's linear discriminant analysis (LDA); Matrix-based representation; Vector-based representation; Pattern recognition BibRef

Elad, M.[Michael],
Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing,
Springer2010, ISBN: 978-1-4419-7010-7
WWW Link. Survey, Invariants. 1010
BibRef

Elad, M.,
Sparse and Redundant Representation Modeling: What Next?,
SPLetters(19), No. 12, December 2012, pp. 922-928.
IEEE DOI 1212
BibRef

Eriksson, A.P.[Anders P.], van den Hengel, A.J.[Anton J.],
Efficient Computation of Robust Weighted Low-Rank Matrix Approximations Using the L_1 Norm,
PAMI(34), No. 9, September 2012, pp. 1681-1690.
IEEE DOI 1208
BibRef
Earlier:
Efficient computation of robust low-rank matrix approximations in the presence of missing data using the L1 norm,
CVPR10(771-778).
IEEE DOI 1006
Award, CVPR. BibRef

Chojnacki, W.[Wojciech], van den Hengel, A.J.[Anton J.], Brooks, M.J.[Michael J.],
Generalised Principal Component Analysis: Exploiting Inherent Parameter Constraints,
VISAPP06(217-228).
Springer DOI 0711
BibRef

Oreifej, O.[Omar], Shah, M.[Mubarak],
Robust Subspace Estimation Using Low-Rank Optimization: Theory and Applications,
Springer2014. ISBN 978-3-319-04184-1
WWW Link. 1404
Survey, Low-Rank Optimization. BibRef

Du, H.S.[Hai-Shun], Hu, Q.P.[Qing-Pu], Jiang, M.[Manman], Zhang, F.[Fan],
Two-dimensional principal component analysis based on Schatten p-norm for image feature extraction,
JVCIR(32), No. 1, 2015, pp. 55-62.
Elsevier DOI 1511
Schatten p-norm BibRef

Du, H.S.[Hai-Shun], Zhao, Z.L.[Zhao-Long], Wang, S.[Sheng], Hu, Q.P.[Qing-Pu],
Two-dimensional discriminant analysis based on Schatten p-norm for image feature extraction,
JVCIR(45), No. 1, 2017, pp. 87-94.
Elsevier DOI 1704
Schatten p-norm BibRef
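For reference, the Schatten p-norm used by both entries above is the standard one defined from the singular values of a matrix (a textbook definition, not anything specific to these papers):

```latex
\|A\|_{S_p} = \Bigl(\sum_{i=1}^{\min(m,n)} \sigma_i(A)^{p}\Bigr)^{1/p}, \qquad p \ge 1
```

so p = 1 gives the nuclear norm, p = 2 the Frobenius norm, and p -> infinity the spectral norm; values 0 < p < 1 give the non-convex quasi-norms sometimes used for low-rank regularization.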

Martín-Clemente, R.[Rubén], Zarzoso, V.[Vicente],
On the Link Between L1-PCA and ICA,
PAMI(39), No. 3, March 2017, pp. 515-528.
IEEE DOI 1702
Algorithm design and analysis BibRef

Lisani, J.L.[Jose-Luis], Morel, J.M.[Jean-Michel],
Exploring Patch Similarity in an Image,
IPOL(11), 2021, pp. 284-316.
DOI Link 2109
Code, Matching. Comparison using PCA (
See also On lines and planes of closest fit to systems of points in space. ) or a Gaussian mixture model (
See also Maximum Likelihood from Incomplete Data via the EM Algorithm. ). BibRef

Kong, L.D.[Ling-Dong], Liu, Y.Q.[You-Quan], Li, X.[Xin], Chen, R.[Runnan], Zhang, W.W.[Wen-Wei], Ren, J.W.[Jia-Wei], Pan, L.[Liang], Chen, K.[Kai], Liu, Z.W.[Zi-Wei],
Robo3D: Towards Robust and Reliable 3D Perception against Corruptions,
ICCV23(19937-19949)
IEEE DOI 2401
BibRef

Li, Y.L.[Yong-Lu], Xu, Y.[Yue], Xu, X.Y.[Xin-Yu], Mao, X.H.[Xiao-Han], Yao, Y.[Yuan], Liu, S.Q.[Si-Qi], Lu, C.[Cewu],
Beyond Object Recognition: A New Benchmark towards Object Concept Learning,
ICCV23(19972-19983)
IEEE DOI Code:
WWW Link. 2401
BibRef

Chung, H.[Hyunhee], Park, K.H.[Kyung Ho], Seo, T.[Taewon], Cho, S.[Sungwoo],
Phantom of Benchmark Dataset: Resolving Label Ambiguity Problem on Image Recognition in the Wild,
Novelty23(1-10)
IEEE DOI 2302
Training, Deep learning, Image recognition, Image resolution, Conferences, Semantics, Neural networks BibRef

Song, Y.[Yue], Sebe, N.[Nicu], Wang, W.[Wei],
Why Approximate Matrix Square Root Outperforms Accurate SVD in Global Covariance Pooling?,
ICCV21(1095-1103)
IEEE DOI 2203
Training, Backpropagation, Protocols, Computational modeling, Boosting, Matrix decomposition, Recognition and classification, Optimization and learning methods BibRef
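The "approximate matrix square root" in work of this kind is commonly an iterative scheme such as the Newton-Schulz iteration; the sketch below is a generic numpy illustration of that iteration under this assumption, not the paper's implementation or its backpropagation analysis:

```python
import numpy as np

def newton_schulz_sqrt(A, n_iter=15):
    """Coupled Newton-Schulz iteration for the square root of a symmetric
    positive definite matrix; Y converges to sqrt(A / ||A||_F) and Z to its
    inverse, so no SVD or eigen-decomposition is needed."""
    dim = A.shape[0]
    norm = np.linalg.norm(A)              # Frobenius norm used for pre-scaling
    Y, Z = A / norm, np.eye(dim)
    for _ in range(n_iter):
        T = 0.5 * (3.0 * np.eye(dim) - Z @ Y)
        Y, Z = Y @ T, T @ Z
    return Y * np.sqrt(norm)              # undo the pre-scaling

rng = np.random.default_rng(0)
B = rng.normal(size=(6, 6))
A = B @ B.T + np.eye(6)                   # SPD, covariance-like matrix
S = newton_schulz_sqrt(A)
print(np.linalg.norm(S @ S - A) / np.linalg.norm(A))   # small relative residual
```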

Sidibé, D., Rastgoo, M., Mériaudeau, F.,
On spatio-temporal saliency detection in videos using multilinear PCA,
ICPR16(1876-1880)
IEEE DOI 1705
Feature extraction, Image color analysis, Principal component analysis, Tensile stress, Videos, Visualization BibRef

Hsu, G.S.[Gee-Sern], Loc, T.T.[Truong Tan], Chung, S.L.[Sheng-Lun],
A comparison study on appearance-based object recognition,
ICPR12(3500-3503).
WWW Link. 1302
BibRef

Qi, H.C.[Han-Chao], Hughes, S.M.[Shannon M.],
Invariance of principal components under low-dimensional random projection of the data,
ICIP12(937-940).
IEEE DOI 1302
BibRef

Sakano, H.[Hitoshi],
A Brief History of the Subspace Methods,
Subspace10(434-435).
Springer DOI 1109
BibRef

Garg, R.[Rahul], Du, H.[Hao], Seitz, S.M.[Steven M.], Snavely, N.[Noah],
The dimensionality of scene appearance,
ICCV09(1917-1924).
IEEE DOI 0909
Analysis of assumptions of PCA type representations. BibRef

Salgian, A.S.[Andrea Selinger],
Combining local descriptors for 3D object recognition and categorization,
ICPR08(1-4).
IEEE DOI 0812
BibRef
Earlier:
Using Multiple Patches for 3D Object Recognition,
BP07(1-6).
IEEE DOI 0706
BibRef
Earlier:
Object Recognition Using Local Descriptors: A Comparison,
ISVC06(II: 709-717).
Springer DOI 0611
Build on:
See also Scale and Affine Invariant Interest Point Detectors. SIFT (
See also Distinctive Image Features from Scale-Invariant Keypoints. ), PCA-SIFT (
See also PCA-SIFT: a more distinctive representation for local image descriptors. ) and keyed context patches (
See also Perceptual Grouping Hierarchy for Appearance-Based 3D Object Recognition, A. ). BibRef

Choksuriwong, A., Laurent, H., Emile, B.,
Comparison of Invariant Descriptors for Object Recognition,
ICIP05(I: 377-380).
IEEE DOI 0512
BibRef

Brand, M.,
From Subspace to Submanifold Methods,
BMVC04(xx-yy).
HTML Version. 0508
BibRef

Lenz, R., Bui, T.H.[Thanh Hai],
Recognition of non-negative patterns,
ICPR04(III: 498-501).
IEEE DOI 0409
PCA analysis. Proves that the non-negative values are correct. BibRef

Fortuna, J., Quick, P., Capson, D.W.[David W.],
A comparison of subspace methods for accurate position measurement,
Southwest04(16-20).
IEEE DOI 0411
BibRef

Fortuna, J., Schuurman, D.C., Capson, D.W.[David W.],
A comparison of PCA and ICA for object recognition under varying illumination,
ICPR02(III: 11-15).
IEEE DOI 0211
BibRef

Fortuna, J.[Jeff], Capson, D.W.[David W.],
Improved support vector classification using PCA and ICA feature space modification,
PR(37), No. 6, June 2004, pp. 1117-1129.
Elsevier DOI 0405
BibRef
And:
ICA filters for lighting invariant face recognition,
ICPR04(I: 334-337).
IEEE DOI 0409
BibRef

Wu, Q.A., Liu, Z., Xiong, Z.X., Wang, Y., Chen, T., Castleman, K.R.,
On optimal subspaces for appearance-based object recognition,
ICIP02(III: 885-888).
IEEE DOI 0210
BibRef

Pedersen, F.[Finn], Andersson, L.[Leif], and Bengtsson, E.[Ewert],
Investigating Preprocessing of Multivariate Images in Combination with Principal Component Analysis,
SCIA97(xx-yy)
HTML Version. 9705
BibRef

Chapter on Matching and Recognition Using Volumes, High Level Vision Techniques, Invariants continues in
Learning for Principal Components, Eigen Representations.


Last update: Nov 26, 2024 at 16:40:19