Introduction
The search technology landscape continues to evolve rapidly in 2025-2026, driven by changing user expectations, new use cases, and the integration of artificial intelligence. Meilisearch has kept pace with these changes, introducing new features and capabilities that address modern search requirements.
This article explores the latest developments in Meilisearch, including new features, cloud evolution, AI integration capabilities, and broader ecosystem trends that are shaping the future of search technology.
Meilisearch 1.x Evolution
The Meilisearch 1.x series has brought significant improvements and new capabilities.
Version 1.12-1.14 Highlights
Recent Meilisearch versions have introduced:
- Enhanced Vector Search - Native support for vector embeddings
- Improved Performance - Optimized indexing and search algorithms
- Better Language Support - Extended tokenizer capabilities
- Cloud Integration - Improved managed service features
Breaking Changes
When upgrading, be aware of potential breaking changes:
# Check current version
curl http://localhost:7700/version
Review the changelog before upgrading to ensure compatibility with your implementation.
Upgrade Considerations
When upgrading Meilisearch:
- Backup your data before upgrading
- Test in staging before production deployment
- Review breaking changes in release notes
- Update client libraries to compatible versions
# Create backup before upgrade
curl -X POST 'http://localhost:7700/snapshots' \
-H 'Authorization: Bearer your_master_key'
Vector Search Capabilities
Vector search has become essential for modern AI-powered applications, and Meilisearch has embraced this trend.
What is Vector Search?
Vector search uses mathematical representations (embeddings) of documents and queries to find similar items. Unlike keyword search which matches exact or similar words, vector search finds semantically similar content.
For example:
- Keyword search: “car” matches only documents containing “car”
- Vector search: “vehicle” matches documents about cars, trucks, and automobiles
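That "semantically similar" comparison is typically a distance measure over the embedding vectors, most commonly cosine similarity. A toy illustration; the three-dimensional vectors here are invented for readability, while real embeddings have hundreds of dimensions:

```javascript
// Cosine similarity: close to 1 means similar direction (similar meaning),
// close to 0 means unrelated.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Toy "embeddings" for illustration only.
const car = [0.9, 0.1, 0.0];
const vehicle = [0.8, 0.2, 0.1];
const banana = [0.0, 0.1, 0.9];

console.log(cosineSimilarity(car, vehicle).toFixed(3)); // high: related concepts
console.log(cosineSimilarity(car, banana).toFixed(3));  // low: unrelated concepts
```

In practice you never compute this by hand: the search engine stores the vectors and performs the comparison at scale with approximate-nearest-neighbor indexes.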
Meilisearch Vector Store
Meilisearch now supports storing and searching vectors natively:
# Vector search is stable in recent versions; in earlier 1.x releases it was
# experimental and had to be switched on first, for example via:
curl -X PATCH 'http://localhost:7700/experimental-features' \
-H 'Authorization: Bearer your_master_key' \
-H 'Content-Type: application/json' \
-d '{"vectorStore": true}'
Add documents with vectors, stored under the reserved _vectors field and keyed by embedder name:
{
  "id": "1",
  "title": "The Great Gatsby",
  "author": "F. Scott Fitzgerald",
  "description": "A novel about the American Dream",
  "_vectors": { "default": [0.123, -0.456, 0.789, ...] }
}
Search using vectors; hybrid is an object that names the embedder to use:
{
  "q": "American novel about dreams",
  "hybrid": { "embedder": "default", "semanticRatio": 0.7 }
}
The semanticRatio parameter (between 0 and 1) controls the balance between keyword and vector search: 0 is purely keyword, 1 is purely semantic.
Hybrid Search
Modern applications often benefit from combining keyword and vector search:
const results = await index.search('American Dream', {
  hybrid: { embedder: 'default', semanticRatio: 0.6 }, // 60% semantic, 40% keyword
  attributesToRetrieve: ['*'],
  limit: 20
})
This hybrid approach provides:
- Precision from keyword matching
- Recall from semantic understanding
- Flexibility for complex queries
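The intuition behind the ratio can be sketched as a linear blend of two score sources. This is an illustration of the concept only, not Meilisearch's actual internal scoring:

```javascript
// Illustrative blend, NOT Meilisearch's internal algorithm: semanticRatio of
// 1.0 means purely semantic scoring, 0.0 means purely keyword scoring.
function blendScore(keywordScore, semanticScore, semanticRatio) {
  return semanticRatio * semanticScore + (1 - semanticRatio) * keywordScore;
}

// A document with a strong semantic match but weak keyword overlap...
console.log(blendScore(0.2, 0.9, 0.6)); // wins when semanticRatio is high
// ...versus a document with an exact keyword hit but weak semantic similarity.
console.log(blendScore(0.9, 0.2, 0.6));
```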
Use Cases for Vector Search
Vector search enables new use cases:
- Semantic Search - Understand query intent, not just keywords
- Recommendations - Find similar products, content, or users
- Deduplication - Identify near-duplicate content
- Anomaly Detection - Find unusual patterns in data
Cloud Offerings
Meilisearch Cloud has matured significantly, offering robust managed search capabilities.
Meilisearch Cloud Features
The managed service provides:
- Fully Managed Infrastructure - No server maintenance required
- Automatic Scaling - Handle traffic spikes effortlessly
- Global Distribution - Deploy close to your users
- Enterprise Security - SOC2 compliance, encryption at rest
- Expert Support - Access to Meilisearch engineers
Cloud vs Self-Hosted
Choose based on your requirements:
| Feature | Cloud | Self-Hosted |
|---|---|---|
| Setup Time | Minutes | Hours |
| Maintenance | Managed | Self |
| Scaling | Automatic | Manual |
| Cost | Usage-based | Infrastructure |
| Customization | Limited | Full |
| Data Control | Full | Full |
Connecting to Cloud
Using Meilisearch Cloud is straightforward:
import { MeiliSearch } from 'meilisearch'

const client = new MeiliSearch({
  host: 'https://your-project.meilisearch.cloud',
  apiKey: 'your_search_api_key'
})

const index = client.index('products')
const results = await index.search('laptop')
Multi-Region Deployment
For global applications, deploy in multiple regions. Create one project per region from the Meilisearch Cloud dashboard (for example, one in US East and one in EU West); consult the Cloud documentation for the current provisioning workflow rather than hard-coding endpoints. Then configure your application to route each user to the nearest project.
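Nearest-region routing on the application side can be a simple lookup. A hedged sketch; the hostnames are placeholders for your own Cloud project URLs:

```javascript
// Map coarse user regions to the nearest Meilisearch Cloud project.
// Hostnames are placeholders; substitute your own project URLs.
const REGION_HOSTS = {
  us: 'https://us-east-project.meilisearch.cloud',
  eu: 'https://eu-west-project.meilisearch.cloud',
};
const DEFAULT_HOST = REGION_HOSTS.us;

function hostForRegion(region) {
  return REGION_HOSTS[region] || DEFAULT_HOST;
}

console.log(hostForRegion('eu')); // EU users hit the EU project
console.log(hostForRegion('ap')); // unmapped regions fall back to the default
```

The region itself can come from a CDN header, a GeoIP lookup, or the user's profile; the key point is that each instance of your app talks to the closest index.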
Multi-Language Support
Meilisearch continues to improve its support for diverse languages.
Language-Specific Tokenization
Meilisearch's tokenizer (charabia) detects and segments most languages automatically, so explicit configuration is rarely needed. When automatic detection needs a hint, recent versions support the localizedAttributes index setting, which pins languages to attributes using ISO 639-3 locale codes:
# Hint that all attributes are Japanese
curl -X PATCH 'http://localhost:7700/indexes/documents/settings' \
-H 'Authorization: Bearer your_master_key' \
-H 'Content-Type: application/json' \
-d '{
  "localizedAttributes": [
    { "attributePatterns": ["*"], "locales": ["jpn"] }
  ]
}'
CJK Support
Chinese, Japanese, and Korean (CJK) text has special requirements: words are not separated by spaces, so the tokenizer must segment text itself. Meilisearch handles this out of the box; its tokenizer ships dedicated CJK segmenters, so no separator or n-gram configuration is required, and searching within unspaced CJK text works by default. If short or ambiguous strings are detected as the wrong language, pin the locale explicitly with the localizedAttributes index setting.
RTL Language Support
Right-to-left languages work out of the box; no RTL-specific index settings are required:
# Arabic index: configure searchable attributes as usual
curl -X PATCH 'http://localhost:7700/indexes/arabic_docs/settings' \
-H 'Authorization: Bearer your_master_key' \
-H 'Content-Type: application/json' \
-d '{
  "searchableAttributes": ["title", "content"]
}'
Meilisearch tokenizes and normalizes Arabic, Hebrew, and other RTL scripts correctly; text direction is purely a display concern for your UI.
Integration Ecosystem
The Meilisearch ecosystem continues to expand with new integrations.
LangChain Integration
LangChain, the popular LLM application framework, integrates with Meilisearch through its community vector store (interfaces evolve quickly; check the integration docs for your versions):
from langchain_community.vectorstores import Meilisearch
from langchain_community.embeddings import HuggingFaceEmbeddings
import meilisearch

# Set up embeddings
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")

# Wrap a Meilisearch index as a LangChain vector store, then as a retriever
client = meilisearch.Client("http://localhost:7700", "your_api_key")
vector_store = Meilisearch(embedding=embeddings, client=client, index_name="documents")
retriever = vector_store.as_retriever()

# Use in a RAG pipeline
from langchain.chains import RetrievalQA
from langchain_openai import OpenAI

qa = RetrievalQA.from_chain_type(
    llm=OpenAI(),
    chain_type="stuff",
    retriever=retriever
)
result = qa.invoke("What is Meilisearch?")
Next.js and React Integration
Build modern search UIs:
// components/Search.tsx
import { MeiliSearch } from 'meilisearch'
import { useState, useEffect } from 'react'

// Only ever expose a search-only key in client-side code
const client = new MeiliSearch({
  host: process.env.NEXT_PUBLIC_MEILI_HOST,
  apiKey: process.env.NEXT_PUBLIC_MEILI_KEY
})

export default function Search() {
  const [query, setQuery] = useState('')
  const [results, setResults] = useState<any[]>([])
  const [loading, setLoading] = useState(false)

  useEffect(() => {
    const search = async () => {
      if (query.length < 2) {
        setResults([])
        return
      }
      setLoading(true)
      const index = client.index('products')
      const searchResults = await index.search(query, {
        limit: 10,
        attributesToHighlight: ['title', 'description']
      })
      setResults(searchResults.hits)
      setLoading(false)
    }
    // Debounce: wait 300ms after the last keystroke before searching
    const timeout = setTimeout(search, 300)
    return () => clearTimeout(timeout)
  }, [query])

  return (
    <div>
      <input
        type="search"
        value={query}
        onChange={(e) => setQuery(e.target.value)}
        placeholder="Search products..."
      />
      {loading && <p>Loading...</p>}
      <ul>
        {results.map(hit => (
          <li key={hit.id}>
            <span dangerouslySetInnerHTML={{
              __html: hit._formatted?.title || hit.title
            }} />
          </li>
        ))}
      </ul>
    </div>
  )
}
Node.js and Express
Build API backends with Meilisearch:
const express = require('express')
const { MeiliSearch } = require('meilisearch')

const app = express()

const client = new MeiliSearch({
  host: process.env.MEILI_HOST,
  apiKey: process.env.MEILI_MASTER_KEY // prefer a scoped search key in production
})

app.get('/api/search', async (req, res) => {
  const { q, limit = 20, offset = 0 } = req.query

  try {
    const index = client.index('products')
    const results = await index.search(q, {
      limit: parseInt(limit),
      offset: parseInt(offset),
      attributesToHighlight: ['title', 'description'],
      facets: ['category', 'brand']
    })
    res.json(results)
  } catch (error) {
    res.status(500).json({ error: error.message })
  }
})

app.listen(3000)
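Query-string values arrive as strings and may be missing or malformed, so it is worth parsing and clamping limit and offset before passing them to the search call. A hedged helper sketch:

```javascript
// Parse and clamp pagination params from an Express-style query object
// (illustrative sketch; defaults and caps are arbitrary choices).
function parsePagination(query, { maxLimit = 100 } = {}) {
  const limit = Math.min(Math.max(parseInt(query.limit, 10) || 20, 1), maxLimit);
  const offset = Math.max(parseInt(query.offset, 10) || 0, 0);
  return { limit, offset };
}

console.log(parsePagination({ limit: '50', offset: '10' })); // { limit: 50, offset: 10 }
console.log(parsePagination({ limit: '-5' }));               // clamped up to the minimum of 1
console.log(parsePagination({ limit: '9999' }));             // capped at maxLimit
```

Clamping the limit also protects the search backend from accidental (or malicious) requests for enormous result pages.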
Python FastAPI
Modern Python web frameworks integrate easily:
from fastapi import FastAPI, Query
from meilisearch import Client
from pydantic import BaseModel

app = FastAPI()
client = Client("http://localhost:7700", "your_master_key")

class SearchResponse(BaseModel):
    hits: list
    query: str
    processingTimeMs: int

@app.get("/search", response_model=SearchResponse)
def search(q: str = Query(...), limit: int = 20):
    index = client.index("products")
    results = index.search(q, {"limit": limit})
    return results
Search Trends in 2025-2026
Broader search industry trends are influencing Meilisearch development.
Semantic vs Keyword Search
The industry is moving toward semantic search:
- Vector embeddings capture meaning, not just words
- Transformers power better understanding
- Hybrid approaches combine both worlds
Meilisearch’s hybrid search addresses this trend effectively.
Conversational Search
Search is becoming more conversational:
- Voice queries are increasingly common
- Multi-turn conversations improve context
- AI assistants handle complex queries
Future Meilisearch versions will likely enhance support for conversational interfaces.
Personalization
Search results are becoming more personalized:
- Location-based results
- User history and preferences
- Context-aware rankings
Meilisearch supports personalization through tenant tokens and custom ranking.
Real-Time Search
Users expect immediate results:
- Search-as-you-type experiences
- Instant index updates
- Real-time analytics
Meilisearch’s architecture naturally supports these requirements.
Performance Improvements
Recent versions have focused on performance.
Indexing Speed
Indexing has become faster:
# Benchmark indexing performance
time curl -X POST 'http://localhost:7700/indexes/books/documents' \
-H 'Authorization: Bearer your_master_key' \
-H 'Content-Type: application/json' \
-d @large_dataset.json
Recent releases have delivered substantially faster indexing; run a benchmark like the one above on your own dataset, since gains depend heavily on document shape and index settings.
Search Latency
Search is now even faster:
- Median latency: typically a few milliseconds for simple queries on well-sized hardware
- P99 latency: under 50ms for most complex queries
- Improved caching reduces the cost of repeated queries
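Percentile figures like median and P99 are easy to track from your own measurements. A sketch using the nearest-rank method over collected latency samples:

```javascript
// Nearest-rank percentile over a list of latency samples (milliseconds).
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

const latencies = [2, 3, 3, 4, 5, 6, 8, 12, 30, 48]; // example measurements
console.log(`p50: ${percentile(latencies, 50)}ms`); // p50: 5ms
console.log(`p99: ${percentile(latencies, 99)}ms`); // p99: 48ms
```

Tracking percentiles rather than averages matters here: a handful of slow outliers can hide behind a healthy-looking mean.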
Memory Efficiency
Memory usage has been optimized:
- Lower RAM requirements for the same dataset
- Better memory-mapped file handling
- Reduced memory fragmentation
# Monitor memory usage
curl -X GET 'http://localhost:7700/stats' \
-H 'Authorization: Bearer your_master_key'
Best Practices for 2026
Apply these best practices in your implementations.
Use Hybrid Search
Combine keyword and vector search for best results:
const results = await index.search(query, {
  hybrid: { embedder: 'default', semanticRatio: 0.5 },
  attributesToRetrieve: ['*']
})
Keep Result Payloads Lean
For large datasets, retrieve only the attributes you need and let facets drive your filter UI:
const results = await index.search('search term', {
  limit: 100,
  attributesToRetrieve: ['id', 'title'],
  // Request facets for filter UI
  facets: ['category', 'brand']
})
Monitor Performance
Track key metrics:
// Track query performance
const start = Date.now()
const results = await index.search(query)
const latency = Date.now() - start
console.log(`Search latency: ${latency}ms`)
console.log(`Hits: ${results.hits.length}`)
Plan for Scale
Design for growth:
- Use Meilisearch Cloud for easy scaling
- Implement proper caching at the application level
- Monitor index size and plan capacity
- Consider sharding for very large datasets
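The application-level caching mentioned above can start as a small in-memory TTL cache keyed on the query and its options. A sketch; swap in Redis or similar once you run multiple app instances:

```javascript
// Minimal in-memory TTL cache for search responses (illustrative sketch).
class SearchCache {
  constructor(ttlMs) {
    this.ttlMs = ttlMs;
    this.entries = new Map();
  }
  get(key) {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.entries.delete(key); // expired: evict and miss
      return undefined;
    }
    return entry.value;
  }
  set(key, value) {
    this.entries.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }
}

const cache = new SearchCache(30_000); // 30s is a reasonable starting TTL
const key = JSON.stringify({ q: 'laptop', limit: 20 });
cache.set(key, { hits: [] });
console.log(cache.get(key) !== undefined); // true until the TTL elapses
```

Keep the TTL short: Meilisearch indexes update in near real time, and a long-lived cache would hide fresh documents from users.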
Security Enhancements
Security features continue to improve.
Granular Permissions
Create fine-grained API keys:
curl -X POST 'http://localhost:7700/keys' \
-H 'Authorization: Bearer your_master_key' \
-H 'Content-Type: application/json' \
-d '{
  "description": "Analytics key",
  "actions": ["stats.get"],
  "indexes": ["products"],
  "expiresAt": "2027-01-01T00:00:00Z"
}'
Audit Logging
Track access to sensitive operations. Meilisearch does not ship a dedicated audit log; common approaches are to log requests at a reverse proxy in front of the instance and to review the tasks API, which keeps a record of every write operation:
# List recent write operations
curl -X GET 'http://localhost:7700/tasks' \
-H 'Authorization: Bearer your_master_key'
Encryption
Data is encrypted at rest:
- Cloud: Always-on encryption
- Self-hosted: Configure encryption at filesystem level
Future Directions
Meilisearch continues to evolve.
Expected Developments
Watch for:
- Enhanced AI features - Better vector search, more integrations
- Improved clustering - Native distributed search
- Better analytics - Built-in search analytics
- More language support - Additional tokenizer improvements
Community Contributions
The open-source community continues to drive innovation:
- New SDKs for emerging languages
- Integration with new frameworks
- Performance optimizations
Conclusion
Meilisearch in 2025-2026 represents a mature, capable search engine that has kept pace with industry trends. The addition of vector search, improved cloud offerings, and continued performance optimizations position Meilisearch well for modern search applications.
Key takeaways:
- Vector search and hybrid search capabilities are now mature
- Cloud offerings provide excellent managed options
- Language support continues to improve
- Performance remains excellent
- The ecosystem is well-developed
As search requirements continue to evolve with AI and personalization trends, Meilisearch is well-positioned to meet these challenges while maintaining its simplicity and developer experience.
In the next article, we will explore Meilisearch for AI applications, including vector search, RAG implementations, and semantic caching.