VMind software leverages structure in real-world inputs to accelerate AI.

Structured compute
means faster compute

Structure-naïve algorithms like direct matrix multiplication work for every possible input, including inputs that are overwhelmingly unlikely to ever be encountered.

Structure-aware algorithms, by contrast, exploit the much smaller set of inputs that actually occur, as predicted by the manifold hypothesis, to find faster ways to compute.

VMind software leverages latent structure in AI inputs to deliver much faster AI compute.
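The contrast between structure-naïve and structure-aware computation can be sketched with a toy example. Here sparsity stands in for latent structure; this is an illustrative sketch only, and none of the function names below come from VMind's software.

```python
# Illustrative sketch: a matrix-vector product computed two ways.
# Sparsity is used as a stand-in for latent input structure.

def dense_matvec(matrix, vector):
    """Structure-naive: touches every entry, even the zeros."""
    return [sum(row[j] * vector[j] for j in range(len(vector)))
            for row in matrix]

def sparse_matvec(nonzeros, vector, n_rows):
    """Structure-aware: stores only nonzero entries as (row, col, value)
    triples and skips the rest, so work scales with the number of
    nonzeros rather than the full matrix size."""
    result = [0.0] * n_rows
    for i, j, value in nonzeros:
        result[i] += value * vector[j]
    return result

matrix = [
    [2.0, 0.0, 0.0],
    [0.0, 0.0, 3.0],
]
nonzeros = [(0, 0, 2.0), (1, 2, 3.0)]  # only 2 of 6 entries stored
vector = [1.0, 1.0, 1.0]

print(dense_matvec(matrix, vector))            # [2.0, 3.0]
print(sparse_matvec(nonzeros, vector, 2))      # [2.0, 3.0]
```

Both versions produce the same answer, but the structure-aware one does work proportional to the nonzero entries only.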

Read more »

The manifold hypothesis
means faster AI compute

AI inputs are far more structured than they appear, and that structure affords much faster AI compute.

In language, there are roughly 170,000 words in common English usage, but not every word appears next to every other word. As a result, roughly a hundred times fewer word pairs occur than combinatorics would allow.
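The gap between pairs that occur and pairs that are combinatorially possible shows up even in a toy corpus. The snippet below is an assumed illustration, not VMind code:

```python
# Toy illustration: distinct word pairs observed in a corpus versus
# the number of pairs the vocabulary could form in principle.

corpus = "the cat sat on the mat the cat saw the dog".split()

vocabulary = set(corpus)
observed_pairs = {(a, b) for a, b in zip(corpus, corpus[1:])}
possible_pairs = len(vocabulary) ** 2

print(len(observed_pairs), possible_pairs)  # 9 49
```

Even in eleven words, only 9 of the 49 possible adjacent pairs actually occur; at the scale of real language the ratio is far more lopsided.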

This phenomenon is even more extreme in biology. Of the tens of billions of trillions of conceivable small-molecule drug candidates, only around 160 billion are actually synthesizable in practice (Reymond et al. 2015).

This is the manifold hypothesis: real-world data occupies only a small, structured region of its vast possible input space. It is what enables VMind to accelerate AI compute.

See frequently asked questions »

Read the latest from VMind
