Science

VMind software leverages structure in real-world inputs to accelerate the bottleneck operations of AI.

The manifold hypothesis

AI inputs are more structured than they look: they do not span the full space of values they could in principle take.

For instance, while there are some 171,476 words in current use in the English language, not every word appears next to every other word. As a result, there are only around 314 million word bigrams in English, not the 29 billion one would naïvely expect if every word could appear next to every other word. And there are only around 977 million word trigrams, far fewer than the more than 5 quadrillion possible triplets of English words.
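To see the gap concretely, here is a minimal Python sketch (an illustration, not VMind code) that counts the distinct word bigrams and trigrams actually occurring in a corpus and compares them with the combinatorial upper bounds. The file name corpus.txt is a hypothetical stand-in for any large plain-text corpus.

```python
# Minimal sketch: measure how sparsely real text fills the space of
# possible n-grams. "corpus.txt" is a hypothetical stand-in for any
# large plain-text corpus. Requires Python 3.10+ for itertools.pairwise.
from itertools import pairwise

with open("corpus.txt") as f:
    words = f.read().lower().split()

vocab = set(words)
bigrams = set(pairwise(words))                    # adjacent word pairs
trigrams = set(zip(words, words[1:], words[2:]))  # adjacent word triplets

print(f"vocabulary size:   {len(vocab):,}")
print(f"distinct bigrams:  {len(bigrams):,} of {len(vocab)**2:,} possible")
print(f"distinct trigrams: {len(trigrams):,} of {len(vocab)**3:,} possible")
```

Run against any sizeable corpus, the observed counts come out orders of magnitude below the upper bounds, which is the pattern the manifold hypothesis generalizes.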

Thus, real-world AI inputs span only a small subset of their possible values. In the empirical AI literature, this observation is known as the manifold hypothesis.

Read more »

Structured compute means faster compute

Structure-unaware algorithms, such as naïve matrix multiplication, work for every possible input, including inputs that are overwhelmingly unlikely to be encountered in practice.

Structure-aware algorithms, by contrast, can exploit the much smaller set of inputs that the manifold hypothesis affords to find faster ways to compute.

For instance, multiplying two arbitrary 1,000 by 1,000 matrices with naïve matrix multiplication takes on the order of one billion operations. If the matrices are circulant, however (for instance, if they are adjacency matrices of circulant graphs such as Möbius ladders or Paley graphs of prime order), the extra structure affords a self-reducing algorithm that takes on the order of merely 10 million operations.
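To make the circulant case concrete, here is a minimal NumPy/SciPy sketch (an illustration of the general technique, not VMind's implementation). It relies on the fact that the product of two circulant matrices is itself circulant, with first column equal to the circular convolution of the factors' first columns, so the heavy lifting reduces to a few length-n FFTs:

```python
# Minimal sketch of structure-aware multiplication for circulant matrices.
# The product of two circulant matrices is itself circulant, with first
# column equal to the circular convolution of the factors' first columns,
# which the FFT computes in O(n log n).
import numpy as np
from scipy.linalg import circulant

n = 1000
rng = np.random.default_rng(0)
a, b = rng.standard_normal(n), rng.standard_normal(n)

# Structure-unaware route: build full n x n matrices and multiply (~n^3 work).
dense_product = circulant(a) @ circulant(b)

# Structure-aware route: convolve the two defining vectors with the FFT,
# then expand back to a full matrix only if the dense result is needed.
c = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real
fast_product = circulant(c)

assert np.allclose(dense_product, fast_product)
```

The dense route performs on the order of n^3 scalar operations; the FFT route does its arithmetic in O(n log n) and pays extra only to materialize the full output matrix.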

VMind leverages the structure already present in AI inputs to enable algorithms far faster than naïve AI compute.

See frequently asked questions »
