By removing compute as the bottleneck in AI, the CS-1 enables AI practitioners to answer more questions and explore more ideas in less time. The CS-1 delivers record-breaking performance and scale to AI compute, and its deployment across national laboratories enables the largest supercomputer sites in the world to achieve 100- to 1,000-fold improvements over existing AI accelerators. By pairing supercompute power with the CS-1's AI processing capabilities, Argonne can now accelerate research and development of deep learning models to solve science problems not achievable with existing systems.
"We've partnered with Cerebras for more than two years and are extremely pleased to have brought the new AI system to Argonne," stated Rick Stevens, Argonne Associate Laboratory Director for Computing, Environment and Life Sciences. "By deploying the CS-1, we have dramatically shrunk training time across neural networks, allowing our researchers to be vastly more productive and to make strong advances across deep learning research in cancer, traumatic brain injury and many other areas important to society today and in the years to come."
AI touches our lives in subtle and numerous ways every day. Whether recommending music or clothes, protecting your credit card from fraud, or helping you navigate with maps and directions, AI is running quietly in the background. In science and health, AI is being used to help researchers better understand a broad range of topics, from molecular dynamics to cosmology, and to help physicians better diagnose and treat disease.
A subset of AI called deep learning allows neural networks to learn from large amounts of unstructured data. However, deep learning models require massive amounts of computing power and are pushing the limits of what current computer systems can handle. The Cerebras CS-1 was introduced to meet this demand.
"We are proud to collaborate with Argonne National Laboratory and leverage the massive computational power of the CS-1 to help solve many of the world's scientific problems," stated Andrew Feldman, Founder and Chief Executive Officer, Cerebras. "At Argonne, the Cerebras CS-1 is being used to better understand everything from cancer drug interactions to the properties of black holes. To see the CS-1 used by leading researchers to solve problems in health care and in basic sciences is enormously rewarding; it is the reason we invent technology."
As a major driver of AI for Science, Argonne deployed the CS-1 to enhance scientific AI models. Its first application area is cancer drug response prediction, a project that is part of a DOE and National Cancer Institute collaboration aimed at employing advanced computing and AI to solve grand challenge problems in cancer research. The addition of the Cerebras CS-1 supports efforts to extend Argonne's major initiatives in advanced computing, which also leverage the AI capabilities of the Aurora exascale system expected in 2021.
Argonne's deployment of the CS-1 is the first part of a multi-laboratory partnership between the DOE and Cerebras Systems. Cerebras has also partnered with DOE's Lawrence Livermore National Laboratory to accelerate its AI initiatives and further enhance its simulation strengths with the machine learning capabilities of the CS-1.
"At the Department of Energy, we believe public-private partnerships are an essential part of accelerating AI research in the United States," stated Dr. Dimitri Kusnezov, DOE's Deputy Under Secretary for Artificial Intelligence & Technology. "We look forward to a long and productive partnership with Cerebras that will help define the next generation of AI technologies and transform the landscape of DOE's operations, business and missions."
The Cerebras CS-1 contains the Wafer Scale Engine (WSE), the industry's only trillion-transistor processor. The WSE is the largest chip ever made: at 46,225 square millimeters in area, it is 56.7 times larger than the largest graphics processing unit. It contains 78 times more AI-optimized compute cores, 3,000 times more high-speed on-chip memory, 10,000 times more memory bandwidth, and 33,000 times more communication bandwidth. In AI compute, large chips process information more quickly, producing answers in less time.