Financial Times
Content discovery
Keyword search missed context in 1M+ assets. Now, vector analysis runs directly at the source—launching hybrid search in 18 weeks.
- 18-week deployment for AI hybrid search
Fragmented pipelines slowed cross-site suggestions. A unified AI vector database cut latency 90%, processing 1,500 queries per second.
A global media publisher managing a vast repository of text, audio, video, and images across more than 70 websites.
The engineering team originally built three separate recommendation pipelines on Elasticsearch to power cross-site "read more" suggestions, but maintaining the fragmented setup slowed suggestions and made scaling difficult.
“We needed a solution that was going to scale much better. We already had storage on MongoDB, and it made sense to simply use MongoDB to process vectors instead of sending the data to an external pipeline.”
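The approach the quote describes, processing vectors where the documents already live, can be sketched as a single MongoDB Atlas `$vectorSearch` aggregation stage. This is a minimal illustration, not Condé Nast's actual implementation: the index name (`articles_vector_index`), collection, and field names are hypothetical.

```python
# Sketch: querying vectors "at the source" with Atlas Vector Search,
# so no data leaves MongoDB for an external vector pipeline.
# Index, collection, and field names below are assumptions for illustration.

def build_vector_search_pipeline(query_vector, limit=5):
    """Build an aggregation pipeline using the $vectorSearch stage."""
    return [
        {
            "$vectorSearch": {
                "index": "articles_vector_index",  # hypothetical index name
                "path": "embedding",               # field holding the stored vector
                "queryVector": query_vector,       # embedding of the current article
                "numCandidates": limit * 20,       # oversample candidates for recall
                "limit": limit,                    # number of suggestions returned
            }
        },
        # Return only what the "read more" widget needs, plus the similarity score.
        {
            "$project": {
                "title": 1,
                "score": {"$meta": "vectorSearchScore"},
            }
        },
    ]

pipeline = build_vector_search_pipeline([0.1, 0.2, 0.3], limit=3)
# With pymongo this would run as: db.articles.aggregate(pipeline)
```

Because the embeddings sit in the same documents as the content, one aggregation replaces what previously required exporting data to a separate recommendation pipeline.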
Global media company publishing iconic magazine, digital, and video content.
Multi-cloud developer data platform for building and scaling applications.
Condé Nast's content recommendations implementation is part of this use case:
Related implementations across industries and use cases
Keyword search missed context in 1M+ assets. Now, vector analysis runs directly at the source—launching hybrid search in 18 weeks.
Adding one attribute to a 300M-item catalog took days of rebuilds. Unifying vector and product data unlocked real-time hybrid search.
Proprietary APIs offered zero control. Moving to managed open models secured 2-second answers and fixed pricing for news search.
Processing trillion-token datasets took months. A native vector engine cut deduplication costs 5x and doubled processing speed.
Manually illustrating vintage assets would have tripled production time. Now, a 3-person team uses AI to generate and animate the visuals.
Custom analytics required months of full-stack development. Now, self-serve AI apps connect analysts directly to data models.
Experts spent 15 minutes pulling data from scattered systems. Natural language prompts now generate detailed reports instantly.
On-device protection from harmful content used to require an internet connection. A powerful safety AI now runs directly on the PC, guarding users even when offline.