
Running large language models (LLMs) typically requires expensive, high-performance hardware with substantial memory and GPU power. Exo, open-source software from EXO Labs, takes a different route, splitting a single model across several everyday devices and pooling their memory and compute.

This decentralized approach shares similarities with the SETI@home project, which distributed computing tasks across volunteer machines.

“The fundamental constraint with AI is compute,” argues Alex Cheema, co-founder of EXO Labs.

“If you don’t have the compute, you can’t compete. But if you create this distributed internet, maybe we can.”

Supported LLMs include LLaMA, Mistral, LLaVA, Qwen, and DeepSeek.

Exo requires Python 3.12.0 or later, along with additional dependencies for Linux systems fitted with NVIDIA GPUs.
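Once a node is running on each device, exo exposes a ChatGPT-compatible HTTP API, so standard OpenAI-style client code can talk to the cluster as if it were a single server. The sketch below is illustrative only: the local endpoint port and the model identifier are assumptions and may differ between exo versions.

```python
# Minimal sketch: querying a local exo cluster through its
# ChatGPT-compatible API. The port (52415) and the model id
# are assumptions and may vary across exo versions.
import requests

response = requests.post(
    "http://localhost:52415/v1/chat/completions",
    json={
        "model": "llama-3.2-3b",  # assumed id; use a model your cluster serves
        "messages": [
            {"role": "user", "content": "Explain distributed inference in one sentence."}
        ],
        "temperature": 0.7,
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```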

For example, an AI model requiring 16GB of RAM can run on two 8GB laptops working together.
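The arithmetic behind that example is straightforward: assign the model's layers to devices in proportion to the memory each one offers. The sketch below illustrates only this idea; it is not exo's actual partitioning code.

```python
# Rough sketch of proportional model partitioning across devices.
# This shows the memory arithmetic only; it is not exo's scheduler.

def partition_layers(num_layers: int, device_mem_gb: list[float],
                     model_mem_gb: float) -> list[int]:
    """Assign layer counts to devices in proportion to their memory."""
    total_mem = sum(device_mem_gb)
    if total_mem < model_mem_gb:
        raise ValueError("Combined device memory is too small for the model.")
    # Each device takes a share of layers proportional to its memory.
    shares = [int(num_layers * mem / total_mem) for mem in device_mem_gb]
    # Hand any rounding remainder to the device with the most memory.
    shares[device_mem_gb.index(max(device_mem_gb))] += num_layers - sum(shares)
    return shares

# Example: a 16 GB, 32-layer model split across two 8 GB laptops.
print(partition_layers(32, [8.0, 8.0], 16.0))  # -> [16, 16]
```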

Security risks also arise when multiple machines share workloads, requiring safeguards to prevent data leaks and unauthorized access.

Adoption is another hurdle, as developers of AI tools currently rely on large-scale data centers.

The low cost of Exo’s approach may appeal, but it simply won’t match the speed of those high-end AI clusters.