TY - GEN
T1 - Ternary sparse coding
AU - Exarchakis, Georgios
AU - Henniges, Marc
AU - Eggert, Julian
AU - Lücke, Jörg
PY - 2012/12/31
Y1 - 2012/12/31
AB - We study a novel sparse coding model with a discrete and symmetric prior distribution. Instead of using continuous latent variables distributed according to heavy-tailed distributions, the latent variables of our approach are discrete. In contrast to approaches using binary latents, we use latents with three states (-1, 0, and 1) following a symmetric and zero-mean distribution. Despite using discrete latents, the model thus maintains important properties of standard sparse coding models and of their recent variants. To efficiently train the parameters of our probabilistic generative model, we apply a truncated variational EM approach (Expectation Truncation). The resulting learning algorithm infers all model parameters, including the variance of the data noise and the data sparsity. In numerical experiments on artificial data, we show that the algorithm efficiently recovers the generating parameters, and we find that the applied variational approach helps to avoid local optima. Using experiments on natural image patches, we demonstrate the large-scale applicability of the approach and study the obtained Gabor-like basis functions.
UR - http://www.scopus.com/inward/record.url?scp=84857314079&partnerID=8YFLogxK
U2 - 10.1007/978-3-642-28551-6_26
DO - 10.1007/978-3-642-28551-6_26
M3 - Chapter in a published conference proceeding
AN - SCOPUS:84857314079
SN - 9783642285509
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 204
EP - 212
BT - Latent Variable Analysis and Signal Separation - 10th International Conference, LVA/ICA 2012, Proceedings
A2 - Theis, F.
A2 - Cichocki, A.
A2 - Yeredor, A.
A2 - Zibulevsky, M.
CY - Berlin, Germany
T2 - 10th International Conference on Latent Variable Analysis and Signal Separation, LVA/ICA 2012
Y2 - 12 March 2012 through 15 March 2012
ER -
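
The generative model summarized in the abstract can be sketched in a few lines of Python. The sketch below is illustrative only, assuming ternary latents with P(s = +1) = P(s = -1) = pi/2 and P(s = 0) = 1 - pi, a linear D x H dictionary W, and isotropic Gaussian observation noise of standard deviation sigma; the names W, pi, and sigma are assumptions made for this sketch rather than notation from the paper, and the Expectation Truncation learning step itself is not shown.

```python
import numpy as np

def sample_ternary_sparse_coding(W, pi, sigma, rng=None):
    """Draw one observation from a ternary sparse coding generative model.

    W     : (D, H) dictionary matrix (basis functions as columns); assumed name
    pi    : probability that a latent is non-zero, split evenly over -1 and +1
    sigma : standard deviation of the isotropic Gaussian observation noise
    """
    rng = np.random.default_rng() if rng is None else rng
    D, H = W.shape
    # Symmetric, zero-mean ternary prior: P(s=+1) = P(s=-1) = pi/2, P(s=0) = 1 - pi.
    s = rng.choice([-1, 0, 1], size=H, p=[pi / 2, 1 - pi, pi / 2])
    # Linear generation plus Gaussian noise: y = W s + eps, eps ~ N(0, sigma^2 I).
    y = W @ s + sigma * rng.standard_normal(D)
    return y, s
```

With a small sparsity value such as pi = 0.05 and H = 100 latents, a sample activates about five latents on average, matching the sparsity assumption the abstract describes.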