
Amazon's Zoox Robotaxis Make Headway; A New AI Chip on the Block: Can Nvidia Keep Up?

Amazon's Zoox Robotaxis Make Headway

They may not be the most attractive vehicles, or even the best known, but it can't be denied that Amazon's self-driving car, 'Zoox', as it's called, is making headway. And Amazon's ambitions for it don't stop at a handful of cities.

The driverless taxi is still in its infancy and has its defects, but that isn't deterring Amazon from making technological progress. At present, the company is working on expanding the robotaxi's range, improving its nighttime vision, and raising its top speed.

The news came last Thursday that testing would expand in California and Nevada. As it happens, that same day Alphabet's Waymo, another driverless-car venture, began operating taxis in Las Vegas. This has fueled speculation that Amazon is determined to come out as top dog in a future driverless world, whatever the competition.

Previously, Zoox vehicles could only reach speeds up to 35 mph. Significant changes have recently brought that all the way up to 75 mph.

Additionally, its range has expanded from a mere one mile to five.

One concern facing not only Amazon but every self-driving venture is potential public hesitation to put their lives on the road in the hands of robots.

All that notwithstanding, robotaxis will likely become a reality in most major cities, whether sooner or later.

A New AI Chip on the Block

A new AI chip has just been released to the market, a bulldog ready to give Nvidia a run for its money.

Cerebras Systems, a company dedicated to building hardware for training AI models, has just announced its latest chip: the Wafer Scale Engine 3 (WSE-3), now billed as "the fastest AI chip in the world."

"WSE-3 is the fastest AI chip in the world, purpose-built for the latest cutting-edge AI work, from a mixture of experts to 24 trillion parameter models,” commented Andrew Feldman, Cerebras co-founder and CEO. “We are thrilled to bring WSE-3 and CS-3 to market to help solve today’s biggest AI challenges.” 

The fastest chip in existence? That’s quite a proclamation; such a chip must perform outstandingly, and WSE-3 can do just that.

The WSE-3 plays a crucial role in (indeed, it fully commands) the Cerebras CS-3 supercomputer: the chip itself packs four trillion transistors and around 44 gigabytes of on-chip SRAM (static random-access memory).

If a computer science enthusiast finds that astonishing, they'd be shocked to hear that it also carries nearly one million AI cores. Ultimately, thanks to the chip, the CS-3 can deliver a whopping 125 petaflops of AI compute.
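For readers who like to sanity-check headline figures, here's a quick back-of-the-envelope sketch in Python. It assumes only the rounded numbers above (roughly 900,000 cores and a 125-petaflop peak), so treat the result as illustrative rather than a spec-sheet figure.

```python
# Back-of-the-envelope math using the figures quoted above. The core count
# and peak-performance numbers are rounded, so this is only a rough estimate.

PFLOP = 10**15  # floating-point operations per second in one petaflop

cs3_peak_flops = 125 * PFLOP   # quoted peak AI compute of the CS-3
ai_cores = 900_000             # "nearly one million" AI cores, rounded

per_core_flops = cs3_peak_flops / ai_cores
print(f"Peak throughput per core: ~{per_core_flops / 1e9:.0f} GFLOPS")  # ~139 GFLOPS
```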

Nvidia’s New Platform: Blackwell

Meanwhile, Nvidia is releasing its Blackwell platform, which lets users run generative AI on trillion-parameter large language models. Not only that, but Nvidia says it does so at up to 25x less cost than its previous generation.

Already, Cerebras is being bombarded with requests for the commercial CS-3. Mayo Clinic is just one of many examples.

"As part of our multi-year strategic collaboration with Cerebras to develop AI models that improve patient outcomes and diagnoses, we are excited to see advancements being made on the technology capabilities to enhance our efforts," said Matthew Callstrom, M.D., Mayo Clinic's medical director.

Alongside the development of the CS-3, Cerebras is working on Condor Galaxy 3: a supercomputer so large it can deliver 8 exaFLOPs of AI compute.
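To put that number in perspective, a few lines of Python relate the 125-petaflop CS-3 to an 8-exaFLOP machine. The 64-system result is simple arithmetic from the peak figures quoted here, not a statement about how Condor Galaxy 3 is actually assembled.

```python
# Rough sizing exercise: how many 125-petaflop CS-3 systems would it take
# to reach 8 exaFLOPs of AI compute? Purely illustrative arithmetic based
# on the peak figures mentioned in this post.

PFLOP = 10**15
EFLOP = 10**18

cs3_peak = 125 * PFLOP          # peak AI compute of one CS-3
condor_galaxy_3 = 8 * EFLOP     # Condor Galaxy 3's quoted AI compute

systems_needed = condor_galaxy_3 / cs3_peak
print(f"Equivalent CS-3 systems: {systems_needed:.0f}")  # 64
```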

Every computer scientist with an interest in AI processing power and capabilities should keep an eye on the advancements being made at companies beyond even the esteemed Nvidia, such as Cerebras. One thing's for certain: Nvidia has a new challenger.

In Other News