Google Cloud brings tech behind Search and YouTube to enterprise gen AI apps




As generative AI continues to progress, a simple chatbot may no longer be enough for many enterprises.

Cloud hyperscalers are racing to build out their databases and tools to help enterprises put operational data to work quickly and efficiently, letting them build applications that are both intelligent and contextually aware.

Case in point: Google Cloud’s recent barrage of updates for multiple database offerings, starting with AlloyDB.

According to a blog post from the company, the fully managed PostgreSQL-compatible database now supports the ScaNN (scalable nearest neighbors) vector index in general availability. The technology underpins Google's own Search and YouTube services and paves the way for faster index creation and vector queries while consuming far less memory.

The company also announced a partnership with Aiven for the managed deployment of AlloyDB, as well as updates for Memorystore for Valkey and Firebase.

Understanding the value of ScaNN for AlloyDB

Vector databases are critical for powering advanced AI workloads, from retrieval-augmented generation (RAG) chatbots to recommender systems.

At the heart of these systems sit key capabilities like storing and managing vector embeddings (numerical representations of data) and running the similarity searches those applications need.
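To make the idea concrete, here is a minimal sketch of what storing embeddings and ranking them by similarity looks like, using plain NumPy and made-up three-dimensional vectors rather than any particular database:

```python
import numpy as np

# Hypothetical embeddings: four documents, each a 3-dimensional vector.
# Real embedding models produce hundreds or thousands of dimensions.
doc_embeddings = np.array([
    [0.1, 0.9, 0.2],
    [0.8, 0.1, 0.3],
    [0.2, 0.8, 0.1],
    [0.9, 0.2, 0.4],
])
doc_ids = ["doc_a", "doc_b", "doc_c", "doc_d"]

# Embedding of the user's query.
query = np.array([0.15, 0.85, 0.15])

def normalize(v):
    """L2-normalize along the last axis so dot products equal cosine similarity."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Cosine similarity between the query and every stored embedding.
scores = normalize(doc_embeddings) @ normalize(query)

# Rank documents from most to least similar.
for idx in np.argsort(scores)[::-1]:
    print(doc_ids[idx], round(float(scores[idx]), 3))
```

A brute-force scan like this works for a handful of vectors; dedicated indexes such as HNSW or ScaNN exist to make the same lookup fast over millions or billions of them.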

Because PostgreSQL is the go-to operational database for many developers, its vector search extension, pgvector, has become highly popular. Google Cloud already supports it on AlloyDB for PostgreSQL, with a state-of-the-art graph-based algorithm called Hierarchical Navigable Small World (HNSW) handling vector workloads.
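In practice, that looks roughly like the following sketch, which uses the documented pgvector syntax through the psycopg2 client; the connection string, table, and column names are placeholders, not anything prescribed by Google Cloud:

```python
import psycopg2

# Placeholder connection string; point it at your own PostgreSQL/AlloyDB instance.
conn = psycopg2.connect("host=localhost dbname=appdb user=appuser password=secret")
cur = conn.cursor()

# Enable pgvector and create a table with a 768-dimensional embedding column.
cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
cur.execute("""
    CREATE TABLE IF NOT EXISTS documents (
        id bigserial PRIMARY KEY,
        content text,
        embedding vector(768)
    );
""")

# Build a graph-based HNSW index for cosine distance (pgvector syntax).
cur.execute("""
    CREATE INDEX IF NOT EXISTS documents_embedding_hnsw
    ON documents USING hnsw (embedding vector_cosine_ops);
""")
conn.commit()

# Nearest-neighbor query: the <=> operator is pgvector's cosine distance.
query_embedding = "[" + ",".join(["0.01"] * 768) + "]"
cur.execute(
    "SELECT id, content FROM documents ORDER BY embedding <=> %s::vector LIMIT 5;",
    (query_embedding,),
)
print(cur.fetchall())
```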

When the vector workload grows too large, however, the algorithm's performance can decline, leading to application latency and high memory usage.

To address this, Google Cloud is making the ScaNN vector index in AlloyDB generally available. The new index uses the same technology that powers Google Search and YouTube to deliver up to four times faster vector queries and up to eight times faster index builds, with a 3-4x smaller memory footprint than the HNSW index in standard PostgreSQL.
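Based on the AlloyDB documentation, adopting the new index mostly amounts to creating a scann index in place of an hnsw one. The sketch below reuses the placeholder table from the previous example, and the tuning value is illustrative, so the exact options are worth checking against the current AlloyDB docs:

```python
import psycopg2

# Placeholder connection string for an AlloyDB for PostgreSQL instance.
conn = psycopg2.connect("host=10.0.0.2 dbname=appdb user=appuser password=secret")
cur = conn.cursor()

# The ScaNN index ships as an AlloyDB extension (alloydb_scann).
cur.execute("CREATE EXTENSION IF NOT EXISTS alloydb_scann;")

# Create a ScaNN index over the same embedding column, using cosine distance.
# num_leaves controls the coarse partitioning; the value here is illustrative.
cur.execute("""
    CREATE INDEX IF NOT EXISTS documents_embedding_scann
    ON documents USING scann (embedding cosine)
    WITH (num_leaves = 1000);
""")
conn.commit()

# Queries stay the same as with pgvector; the planner can use the ScaNN index.
query_embedding = "[" + ",".join(["0.01"] * 768) + "]"
cur.execute(
    "SELECT id, content FROM documents ORDER BY embedding <=> %s::vector LIMIT 5;",
    (query_embedding,),
)
print(cur.fetchall())
```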

“The ScaNN index is the first PostgreSQL-compatible index that can scale to support more than one billion vectors while maintaining state-of-the-art query performance — enabling high-performance workloads for every enterprise,” Andi Gutmans, the GM and VP of engineering for Databases at Google Cloud, wrote in a blog post.

Gutmans also announced a partnership with Aiven to make AlloyDB Omni, the downloadable edition of AlloyDB, available as a managed service that runs anywhere, including on-premises or in the cloud.

“You can now run transactional, analytical, and vector workloads across clouds on a single platform, and easily get started building gen AI applications, also on any cloud. This is the first partnership that adds an administration and management layer for AlloyDB Omni,” he added.

What’s new in Memorystore for Valkey and Firebase?

In addition to AlloyDB, Google Cloud announced enhancements for Memorystore for Valkey, the fully managed service for the open-source Valkey in-memory database, and the Firebase application development platform.

For the Valkey offering, the company said it is adding vector search capabilities. Gutmans noted that a single Memorystore for Valkey instance can now perform similarity search at single-digit millisecond latency on over a billion vectors, with more than 99% recall. 
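The announcement does not spell out the client-side API in detail, but Memorystore for Valkey's vector search follows the familiar FT.CREATE/FT.SEARCH command pattern. The sketch below is a hypothetical example using the valkey-py client with raw commands, illustrative index and field names, and command shapes that should be verified against the Memorystore documentation:

```python
import struct
import valkey  # valkey-py client; redis-py works the same way against Valkey

# Placeholder endpoint for a Memorystore for Valkey instance.
client = valkey.Valkey(host="10.0.0.3", port=6379)

# Create a vector index over hash keys prefixed with "doc:".
# Command shape mirrors the search-module FT.CREATE syntax; names are illustrative.
client.execute_command(
    "FT.CREATE", "doc_idx", "ON", "HASH", "PREFIX", "1", "doc:",
    "SCHEMA", "embedding", "VECTOR", "HNSW", "6",
    "TYPE", "FLOAT32", "DIM", "4", "DISTANCE_METRIC", "COSINE",
)

# Store one document embedding as packed float32 bytes.
client.hset("doc:1", mapping={"embedding": struct.pack("4f", 0.1, 0.9, 0.2, 0.05)})

# K-nearest-neighbor query against the index.
query_vec = struct.pack("4f", 0.15, 0.85, 0.15, 0.0)
result = client.execute_command(
    "FT.SEARCH", "doc_idx", "*=>[KNN 3 @embedding $vec]",
    "PARAMS", "2", "vec", query_vec, "DIALECT", "2",
)
print(result)
```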

He added that the next version of Memorystore for Valkey, 8.0, is now in public preview with 2x faster query performance compared to Memorystore for Redis Cluster, a new replication scheme, networking enhancements and detailed visibility into performance and resource usage.

As for Firebase, Google Cloud is adding Data Connect, a new backend-as-a-service that will be integrated with a fully managed PostgreSQL database powered by Cloud SQL. It will go into public preview later this year.

With these developments, Google Cloud hopes to give developers a broader selection of infrastructure and database capabilities, along with powerful language models, to build intelligent applications for their organizations. It remains to be seen how these advancements are put to work in real use cases, but the general trend suggests the volume of gen AI applications will soar.

Omdia estimates that the market for generative AI applications will grow from $6.2 billion in 2023 to $58.5 billion in 2028, marking a CAGR of 56%.


