01 December 2021

Introducing ThirdAI’s BOLT Engine

To cope with the exponentially growing demand for computing, ThirdAI has come up with an exponentially cheaper solution: BOLT (Big O’l Layered Training).

AI models have grown tremendously in the past few years. We have reached a stage where training one AI model can emit as much carbon as five cars do over their lifetimes.


AI is becoming a mainstream technology. However, extracting value from it for decision-making requires extensive investment in cost and resources. If we look at the life cycle of an AI model in production, the most time- and resource-consuming step is training and fine-tuning. It is not unusual for corporations to spend hundreds of millions of dollars in computing resources to train their AI models. According to OpenAI, the computing cost associated with AI doubles every 3.4 months.



Enter ThirdAI's BOLT Engine

The popular algorithms for training AI models, developed during the ’80s, can be rewritten as dense matrix multiplications. For these specific operations, CPUs are slower than GPUs and other specialized processors such as TPUs. As a result, the trend in AI is moving towards hardware that is 3x more expensive. However, the chip shortage and the exponential blowup in computing demand make this path unsustainable. Beyond cost, GPUs and TPUs have further disadvantages compared with CPUs, including small main memory and being hard to virtualize due to their vendor-optimized proprietary software stacks.
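To see why training cost scales the way it does, consider that a single fully connected layer's forward pass is one dense matrix multiplication, in which every neuron touches every input feature. The sizes below are purely illustrative, not drawn from any particular model:

```python
import numpy as np

# A fully connected layer's forward pass is a dense matrix multiplication:
# every one of the layer's n_out neurons reads all n_in input features.
rng = np.random.default_rng(0)

batch, n_in, n_out = 32, 1024, 4096   # illustrative sizes only
x = rng.standard_normal((batch, n_in))
W = rng.standard_normal((n_in, n_out))
b = rng.standard_normal(n_out)

activations = np.maximum(x @ W + b, 0.0)  # ReLU(xW + b)

# Cost: each output needs n_in multiply-adds, for every sample in the batch.
flops = 2 * batch * n_in * n_out
print(activations.shape, flops)  # → (32, 4096) 268435456
```

Stacking many such layers and repeating the multiplication for every training step is what makes dense training so expensive, and why GPUs, which excel at exactly this operation, became the default hardware.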

ThirdAI has worked on these issues and demonstrated that matrix multiplications are overkill for large models. ThirdAI uses sparse coding, a more efficient alternative to full matrix multiplications. This makes existing computers perfect machines for running large AI models and eliminates the need for specialized infrastructure investment.

BOLT's design is based on brain-like, efficient sparse coding to train AI models. Our algorithms require exponentially fewer computations to train neural networks while achieving the same accuracy.
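As a rough illustration of the idea (not BOLT's actual algorithm, which selects likely-active neurons with hashing rather than at random), a sparse forward pass can evaluate only a handful of a layer's neurons and skip the rest entirely:

```python
import numpy as np

# Simplified sketch of sparse activation. The random subset below is a
# stand-in for a smarter selection mechanism (e.g. a hash-table lookup
# that returns neurons likely to fire for this input).
rng = np.random.default_rng(0)

n_in, n_out, k = 1024, 4096, 40       # evaluate ~1% of the neurons
x = rng.standard_normal(n_in)
W = rng.standard_normal((n_in, n_out))
b = rng.standard_normal(n_out)

active = rng.choice(n_out, size=k, replace=False)   # chosen neuron indices
sparse_out = np.zeros(n_out)
sparse_out[active] = np.maximum(x @ W[:, active] + b[active], 0.0)

# Only k columns of W are ever touched: k/n_out of the dense work.
print(np.count_nonzero(sparse_out) <= k)  # → True
```

The key design point is that the savings come from skipping work outright, not from doing the same work with lower-precision arithmetic, which is why no specialized hardware is required to realize them.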

ThirdAI's benchmark evaluation shows the BOLT algorithm achieves neural-network training in 1% or fewer FLOPs. Our gains are unlike those achieved by quantization, pruning, and structured sparsity, which offer only a slight constant-factor improvement. Because the savings are so significant, we don't have to rely on specialized instructions, and the speedups are naturally observed on any commodity CPU, including Intel, AMD, or ARM. Even older commodity CPUs can be made capable of training billion-parameter models faster than A100 GPUs.
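A back-of-the-envelope calculation shows what a 1%-of-FLOPs regime means at scale; the layer sizes here are hypothetical, not ThirdAI benchmark figures:

```python
# Rough FLOP comparison for one forward pass of a single very wide layer,
# e.g. the output layer of an extreme-classification model.
n_in, n_out, batch = 4096, 1_000_000, 64   # hypothetical sizes

dense_flops  = 2 * batch * n_in * n_out    # every neuron evaluated
sparse_flops = dense_flops // 100          # ~1% of the neurons active

print(f"dense:  {dense_flops:.3e} FLOPs")
print(f"sparse: {sparse_flops:.3e} FLOPs (1% of dense)")
```

A 100x reduction in arithmetic is far beyond what constant-factor techniques like 8-bit quantization (roughly 2-4x) can deliver, which is why the gap between CPU and GPU throughput stops being the bottleneck.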

BOLT can be integrated into any workflow (TensorFlow, PyTorch, etc.). BOLT is arguably the most consistent, cheapest, and most fully virtualized compute option available. ThirdAI is converting existing CPUs into AI powerhouses, changing the AI industry, and expanding the reach of AI.