Google’s Gemma 2 series launches with not one but two lightweight models, a 9B and a 27B version


Google says Gemma 2, its open lightweight model series, will be available to researchers and developers through Vertex AI starting next month. Although the series was initially announced with only a 27-billion-parameter model, the company surprised us by also including a 9-billion-parameter version.

Gemma 2 was introduced back in May at Google I/O as the successor to Gemma’s 2-billion and 7-billion parameter models, which debuted in February. The next-gen Gemma model is designed to run on Nvidia’s latest GPUs or a single TPU host in Vertex AI. It targets developers who want to incorporate AI into their apps or edge devices such as smartphones, IoT devices, and personal computers.

The two Gemma 2 models follow the pattern of their predecessors and reflect the current AI landscape, in which technological advances let smaller, lighter models handle a wide range of user requests. With 9-billion and 27-billion parameter options, Google gives developers a choice in how to run these models, either on-device or through the cloud. And because the models are open, they can be customized and integrated into projects that Google might not foresee.
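For developers curious what experimenting with the smaller model might look like, here is a minimal sketch using the Hugging Face transformers library. The model identifier google/gemma-2-9b-it and the generation settings are assumptions for illustration, not details confirmed in this article; check Google's official model card for the actual release names.

```python
# Minimal sketch: loading an (assumed) 9B instruction-tuned Gemma 2 checkpoint
# with Hugging Face transformers and generating a short completion.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed model ID based on earlier Gemma naming; the real identifier may differ.
model_id = "google/gemma-2-9b-it"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # halves memory use vs. float32 on supported hardware
    device_map="auto",           # place weights on whatever accelerator is available
)

prompt = "Explain why smaller language models matter for edge devices."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```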

It will be worth watching whether the existing Gemma variants (CodeGemma, RecurrentGemma and PaliGemma) benefit from these two Gemma 2 models.


