This New AI Supercomputer Outperforms NVIDIA! 🤯 (with CEO Andrew Feldman) - Summary

Summary

Cerebras Systems, a start-up, has developed the Wafer Scale Engine 2, an AI chip that outperforms Nvidia GPUs and is roughly 56 times larger than a single Nvidia GPU. Using 64 of these chips, the company has built an AI supercomputer called Condor Galaxy 1, capable of four exaflops of AI compute. Cerebras plans to build nine such supercomputers and interconnect them into a constellation delivering 36 exaflops. The company has developed a custom compiler that can take PyTorch code written for a GPU and run it on Cerebras hardware without changes. Its main customer is G42, which plans to buy more than 500 Wafer Scale Engines to build and train medical AGI. Cerebras' technology significantly reduces the time from the initial idea to the start of training. The system is costly: each Wafer Scale Engine 2 chip costs between $1.2 million and $1.7 million, putting each Condor Galaxy supercomputer at around $100 million.
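The compiler claim, that PyTorch code written for a GPU runs on Cerebras hardware without changes, can be illustrated with an ordinary training loop. The sketch below is a minimal, hypothetical GPU-style PyTorch script (the model, data, and hyperparameters are invented for illustration); it uses no Cerebras-specific API and is only meant to show the kind of code the claim refers to.

```python
import torch
import torch.nn as nn

# A small, purely illustrative model -- any standard PyTorch module would do.
model = nn.Sequential(nn.Linear(512, 1024), nn.ReLU(), nn.Linear(1024, 10))
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Ordinary single-device training loop written for a GPU; per the talk,
# Cerebras' compiler is meant to accept code like this without changes.
for step in range(100):
    x = torch.randn(32, 512, device=device)          # dummy batch
    y = torch.randint(0, 10, (32,), device=device)   # dummy labels
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```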

Facts

Here are the key facts from the talk:

1. Top tech companies like Google, Tesla, and Microsoft are buying GPUs, but the supply is limited.
2. A startup called Cerebras offers AI chips with better performance than Nvidia GPUs.
3. Cerebras announced a new Condor Galaxy 1 supercomputer capable of four exaflops of AI compute.
4. Each Cerebras chip is approximately 56 times larger than a single Nvidia GPU.
5. Cerebras builds Wafer Scale Engine chips, each made from an entire silicon wafer, making them about as large as a single chip can be.
6. The chip's size allows it to hold an entire network's parameters on-chip, which speeds up training.
7. The Condor Galaxy 1 supercomputer consumes about 1.75 megawatts of power.
8. Cerebras plans to build a total of nine supercomputers interconnected like one big cloud.
9. The main customer for the supercomputer is G42, a company based in the United Arab Emirates.
10. Cerebras is working on the next generation of its chip, the Wafer Scale Engine 3, built on a five-nanometer process.
11. Publications suggest that Cerebras' chip outperforms Nvidia GPUs in certain applications, such as stencil computations (see the sketch after this list).
12. Cerebras' system simplifies distributed compute, reducing the time needed to start training large models.
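To make fact 11 concrete: a stencil computation updates each point of a grid from a fixed pattern of neighboring points, a workload common in scientific simulation and typically limited by memory bandwidth. The following is a minimal, generic sketch of a 5-point Jacobi-style stencil in NumPy, included only to illustrate the class of workload those publications refer to; it is not Cerebras' or the cited papers' benchmark code.

```python
import numpy as np

def jacobi_step(grid: np.ndarray) -> np.ndarray:
    """One 5-point stencil sweep: each interior cell becomes the
    average of its four neighbors (a classic Laplace/Jacobi update)."""
    new = grid.copy()
    new[1:-1, 1:-1] = 0.25 * (
        grid[:-2, 1:-1] + grid[2:, 1:-1] +   # neighbors above and below
        grid[1:-1, :-2] + grid[1:-1, 2:]     # neighbors left and right
    )
    return new

# Toy usage: relax a 256x256 grid with a fixed heated top edge.
grid = np.zeros((256, 256))
grid[0, :] = 1.0  # boundary condition
for _ in range(500):
    grid = jacobi_step(grid)
```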
