Where to start if you are new to the area?

We recommend starting your exploration of the area by reading the introductory papers below:

  1. P. Neubert, S. Schubert, and P. Protzel, “An Introduction to Hyperdimensional Computing for Robotics,” KI - Künstliche Intelligenz, vol. 33, no. 4, pp. 319–330, 2019.

  2. P. Kanerva, “Computing with High-Dimensional Vectors,” IEEE Design & Test, vol. 36, no. 3, pp. 7–14, 2019.

  3. P. Kanerva, “Hyperdimensional Computing: An Introduction to Computing in Distributed Representation with High-Dimensional Random Vectors,” Cognitive Computation, vol. 1, no. 2, pp. 139–159, 2009.

Please also check the page with videos. It contains many recordings of great introductory lectures by P. Kanerva, as well as a video giving a high-level overview of VSAs/Hyperdimensional Computing implemented in hardware.

If you would like to look for papers within a specific direction of VSAs/Hyperdimensional Computing research, consider searching the collection of publications.

What type of data structures can be represented by VSAs/Hyperdimensional Computing?

VSAs/Hyperdimensional Computing can represent and query an impressive range of data structures. Examples of such structures are key-value pairs (also known as role-filler bindings), sets, histograms, sequences, graphs, trees, stacks, state automata, and others. There is a comprehensive write-up that introduces the transformation of a plethora of data structures into high-dimensional vectors. Informally, we refer to this write-up as the “cookbook” (a minimal code sketch of one such structure follows the reference below):

  1. D. Kleyko, M. Davies, E. P. Frady, P. Kanerva, S. J. Kent, B. A. Olshausen, E. Osipov, J. M. Rabaey, D. A. Rachkovskij, A. Rahimi, F. T. Sommer, “Vector Symbolic Architectures as a Computing Framework for Nanoscale Hardware,” arXiv:2106.05268, pp. 1-28, 2021.
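
To make the role-filler example concrete, below is a minimal sketch in Python with NumPy of a bipolar (±1) model in which binding is element-wise multiplication and bundling is element-wise addition; the symbol names and the dimensionality are illustrative:

    import numpy as np

    rng = np.random.default_rng(0)
    D = 10_000  # dimensionality of the hypervectors

    # Item memory: an i.i.d. random bipolar hypervector for every symbol.
    item_memory = {name: rng.choice([-1, 1], size=D)
                   for name in ["color", "shape", "red", "circle"]}

    # Binding (element-wise multiplication) pairs each role with its filler;
    # bundling (element-wise addition) superimposes the pairs into one record.
    record = (item_memory["color"] * item_memory["red"] +
              item_memory["shape"] * item_memory["circle"])

    # Querying "what is the color?": unbinding reuses multiplication (it is
    # its own inverse for bipolar vectors); the noisy result is cleaned up
    # by a nearest-neighbor search over the item memory.
    noisy = record * item_memory["color"]
    print(max(item_memory, key=lambda name: item_memory[name] @ noisy))  # red

Sets, sequences, trees, and the other structures listed above are assembled from the same few operations (binding, bundling, and permutation); the cookbook above works through each of them in detail.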

What is the information capacity of VSAs/Hyperdimensional Computing representations?

It is very common to ask about the information capacity of the distributed representations used in VSAs/Hyperdimensional Computing. Some early results in this direction were already obtained in [1] when studying the Holographic Reduced Representations model. Later, results for the Matrix Binding of Additive Terms and Binary Spatter Codes models were presented in [2] and [3], respectively. The most general and comprehensive analysis of the information capacity of different VSAs/Hyperdimensional Computing models (and also of some classes of recurrent neural networks) was provided in [4] and extended further in [5]. We therefore highly recommend studying the theory in [4] to get a strong grasp of the information capacity of VSAs/Hyperdimensional Computing models. A small empirical illustration follows the references below.

  1. T. A. Plate, “Distributed Representations and Nested Compositional Structure,” Doctoral Thesis, University of Toronto, p. 202, 1994.

  2. S. I. Gallant, T. W. Okaywe, “Representing Objects, Relations, and Sequences,” Neural Computation, vol. 25, no. 8, pp. 2038-2078, 2013.

  3. D. Kleyko, E. Osipov, A. Senior, A. I. Khan, Y. A. Şekercioğlu, “Holographic Graph Neuron: A Bioinspired Architecture for Pattern Processing,” IEEE Transactions on Neural Networks and Learning Systems, vol. 28, no. 6, pp. 1250-1262, 2017.

  4. E. P. Frady, D. Kleyko, F. T. Sommer, “A Theory of Sequence Indexing and Working Memory in Recurrent Neural Networks,” Neural Computation, vol. 30, no. 6, pp. 1449-1513, 2018.

  5. D. Kleyko, A. Rosato, E. P. Frady, M. Panella, F. T. Sommer, “Perceptron Theory for Predicting the Accuracy of Neural Networks,” arXiv:2012.07881, pp. 1-12, 2020.
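
For a quick empirical feel for these results (not a substitute for the theory in [4]), the following sketch in Python with NumPy bundles randomly chosen bipolar item-memory vectors into a single hypervector and measures what fraction of them can be recovered by similarity search; all parameter values are illustrative:

    import numpy as np

    rng = np.random.default_rng(1)

    def recovery_rate(D, num_stored, vocab_size=1000, trials=10):
        """Fraction of bundled items found among the num_stored item-memory
        vectors that are most similar to the superposition."""
        rate = 0.0
        for _ in range(trials):
            vocab = rng.choice([-1, 1], size=(vocab_size, D))
            stored = rng.choice(vocab_size, size=num_stored, replace=False)
            memory = vocab[stored].sum(axis=0)  # bundle the chosen items
            top = np.argsort(vocab @ memory)[-num_stored:]  # most similar
            rate += len(set(top) & set(stored)) / num_stored
        return rate / trials

    for D in (256, 1024, 4096):
        print(D, [round(recovery_rate(D, m), 2) for m in (5, 20, 50)])

For a fixed number of stored items, the recovered fraction approaches one as the dimensionality grows; how accuracy scales with dimensionality, number of stored items, and item-memory size is exactly what the theory in [4] characterizes.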

What is the "Blessing of Dimensionality"?

This term is often used to name phenomena opposite to the well-known Curse of Dimensionality. Different areas of mathematics and computer science assign different meanings to the Blessing of Dimensionality.

In VSAs/Hyperdimensional Computing, the Blessing of Dimensionality manifests itself in the increased capacity [1] of distributed representations for vectors of higher dimensionality.

In Stochastic computing, the Blessing of Dimensionality manifests itself in the progressive precision [2] of computations for vectors of higher dimensionality.

In Compressed sensing, the Blessing of Dimensionality manifests itself in the improved quality of the reconstructed signal [3] for projection matrices with a larger number of dimensions.

In Concentration of measure, the Blessing of Dimensionality manifests itself in stochastic separation theorems [4], which suggest that “If the dimension n of the underlying topological vector space is large, then random finite but exponentially large in n samples are linearly separable, with high probability, for a range of practically relevant classes of distributions”.
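
Much of this behavior can be illustrated with a simple concentration effect: the cosine similarity between independent random bipolar vectors concentrates around zero with standard deviation 1/sqrt(n), so at high dimensionality random hypervectors are almost surely nearly orthogonal. A minimal sketch in Python with NumPy (the dimensionalities and sample size are illustrative) demonstrates this:

    import numpy as np

    rng = np.random.default_rng(2)

    for D in (100, 1_000, 10_000):
        X = rng.choice([-1, 1], size=(1000, D))
        sims = (X @ X[0]) / D  # cosine similarity with the first vector
        # Empirical spread of similarities vs. the theoretical 1 / sqrt(D).
        print(D, round(sims[1:].std(), 4), round(1 / np.sqrt(D), 4))

It is this near-orthogonality that lets bundled hypervectors hold many items with little crosstalk, connecting the capacity result of [1] to the separation theorems of [4].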

Literature:

  1. E. P. Frady, D. Kleyko, F. T. Sommer, “A Theory of Sequence Indexing and Working Memory in Recurrent Neural Networks,” Neural Computation, vol. 30, no. 6, pp. 1449-1513, 2018.

  2. A. Alaghi, J. P. Hayes, “Computing with Randomness,” IEEE Spectrum, vol. 55, no. 3, pp. 46-51, 2018.

  3. D. L. Donoho, “Compressed Sensing,” IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289-1306, 2006.

  4. A. N. Gorban, I. Y. Tyukin, “Blessing of Dimensionality: Mathematical Foundations of the Statistical Physics of Data,” Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, vol. 376, no. 2118, pp. 1-18, 2018.