Enterprise AI gets closer to data with Couchbase’s new Capella AI services


Database platform developer Couchbase is looking to help solve an increasingly common problem for enterprise AI deployments: how to get data closer to AI as quickly and securely as possible. The end goal is to make it simpler and more operationally efficient to build and deploy enterprise AI.

Couchbase today announced Capella AI Services, a suite of capabilities designed to help enterprises build and deploy AI applications while maintaining data security and streamlining development workflows. Among the new offerings is a model service for secure hosting of AI models within organizational boundaries. A vectorization service automates vector operations for efficient AI processing. AI functions simplify AI integration through SQL++ queries, while a new agent catalog centralizes AI development resources and templates.

The announcement comes as organizations grapple with integrating AI into their existing applications while managing concerns about data privacy, operational complexity and development efficiency. According to the company, Capella AI Services will enable enterprises to build and deploy AI applications more efficiently and with lower latency, leading to improved business outcomes.

This expansion builds on Couchbase’s existing strengths in NoSQL database technology and its cloud-to-edge capabilities. Couchbase is among the early pioneers of the NoSQL database world, and the company went public in 2021. Over the past year, it has increasingly focused on building out vector database capabilities, including the assistive gen AI feature Capella IQ in 2023 and expanded vector search this year.

“We’re focusing on building a developer data platform for critical applications in our AI world today,” Matt McDonough, SVP of product and partners at Couchbase, told VentureBeat. “Traditional applications are designed for humans to input data. AI really flips that on the head, the emphasis moves from the UI or front end application to the database and making it as efficient as possible for AI agents to work with.”

How Couchbase aims to differentiate in an increasingly crowded database market

As has been the case in the database market for decades, there is a healthy amount of competition. 

Just as NoSQL database capabilities have become increasingly common, the same is now true of vector database functionality. NoSQL vendors such as MongoDB, DataStax and Neo4j, as well as traditional database vendors like Oracle, all have vector capabilities today.

“Everyone has vector capabilities today, I think that’s probably an accurate statement,” McDonough admitted.

That said, he noted that even before the new Capella AI services, Couchbase does aim to have a somewhat differentiated offering. In particular, Couchbase has long had mobile and edge deployment capabilities. The database also provides in-memory capabilities that help to accelerate all types of queries, including vector search. 

Couchbase is also notable for its SQL++ query language. SQL++ allows developers to query and manipulate JSON data stored in Couchbase using familiar SQL syntax. This helps bridge the gap between relational and NoSQL data models. With the new Capella AI services, SQL++ functionality is being extended to make it easier for application developers to directly query AI models with standard database queries.

Mohan Varthakavi, VP of software development, AI and edge at Couchbase, explained to VentureBeat that AI functions enable developers to easily execute common AI operations on their data. For example, an organization might already have a large volume of data in Couchbase. With the new AI functions, it can simply use SQL++ to summarize that data, or execute any other AI function directly on it. That can be done without hosting a separate AI model, connecting data stores or learning different syntax.
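For a sense of what that workflow could look like, the sketch below runs SQL++ from the Couchbase Python SDK. The connection string, credentials, the `reviews` keyspace and the `AI_SUMMARIZE()` function are illustrative placeholders rather than confirmed Capella AI Services syntax; only the SDK calls themselves (`Cluster`, `cluster.query`) reflect the existing Couchbase API.

```python
# Minimal sketch using the Couchbase Python SDK (couchbase 4.x).
# Endpoint, credentials, keyspace and AI_SUMMARIZE() are hypothetical
# placeholders for illustration, not confirmed Capella AI Services syntax.
from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions

cluster = Cluster(
    "couchbases://cb.example.cloud.couchbase.com",  # placeholder Capella endpoint
    ClusterOptions(PasswordAuthenticator("app_user", "app_password")),
)

# Ordinary SQL++ over JSON documents, using familiar SQL syntax.
for row in cluster.query(
    "SELECT r.product_id, r.review_text FROM reviews r WHERE r.rating >= 4 LIMIT 5"
):
    print(row)

# Hypothetical AI function applied through the same query language:
# summarize stored reviews in place, without a separate AI pipeline.
for row in cluster.query(
    "SELECT r.product_id, AI_SUMMARIZE(r.review_text) AS summary "
    "FROM reviews r WHERE r.product_id = 'p-123'"
):
    print(row)
```

In a real deployment the keyspace would typically be addressed as bucket.scope.collection, and the actual AI functions and their names are defined by Capella, not by this sketch.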

How Capella AI brings semantic context to accelerate enterprise deployments

The new Capella AI Services suite introduces several key components that address common enterprise AI challenges.

One of the new components is the model service, which addresses enterprise security concerns by enabling AI model hosting within organizational boundaries. As such, a model can be hosted, for example, within the same virtual private cloud (VPC).

“Our customers consistently told us that they are concerned about data going across the wire to foundational models sourced outside,” Varthakavi said. 

The service supports both open source models and commercial offerings, with value-added features including request batching and semantic caching. Varthakavi explained that semantic caching provides the ability to cache not just the literal responses to queries, but the semantic meaning and context behind those responses. He noted that by caching semantically relevant responses, Couchbase can provide more contextual and meaningful information to the AI models or applications consuming the data. The semantic caching can help reduce the number of calls needed to AI models, as Couchbase can often provide relevant responses from its own cache. This can lower the operational costs and latency associated with making calls to AI services.
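Couchbase has not detailed the internals of its semantic cache, but the general pattern it describes can be sketched in a few lines: embed the incoming prompt, look for a previously answered prompt whose embedding is sufficiently similar, and only call the model on a miss. The `embed` and `call_model` functions below are stand-ins, and the in-memory list is a deliberate simplification of what would be a proper vector index; this is a conceptual sketch, not Capella's implementation.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.92  # illustrative cutoff, not a Couchbase default
_cache: list[tuple[np.ndarray, str]] = []  # (prompt embedding, cached response)


def embed(text: str) -> np.ndarray:
    """Placeholder: in practice this would call an embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    vec = rng.standard_normal(384)
    return vec / np.linalg.norm(vec)


def call_model(prompt: str) -> str:
    """Placeholder for a (potentially expensive) LLM call."""
    return f"model answer for: {prompt}"


def answer(prompt: str) -> str:
    """Serve semantically similar prompts from cache; call the model only on a miss."""
    query_vec = embed(prompt)
    for cached_vec, cached_response in _cache:
        # Cosine similarity; both vectors are unit length.
        if float(np.dot(query_vec, cached_vec)) >= SIMILARITY_THRESHOLD:
            return cached_response  # cache hit: no model call
    response = call_model(prompt)
    _cache.append((query_vec, response))
    return response
```

The payoff is the one Varthakavi describes: every cache hit is a model call that never happens, which is where the latency and cost savings come from.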

McDonough emphasized that the core focus for Couchbase overall with the new AI services is to make it simpler for developers to build, test and deploy AI, without having to use a bunch of different platforms.

“Ultimately we believe that is going to reduce latency and operational cost by keeping these models and the data together throughout the entire software development life cycle for AI applications,” he said.


