Integrate AI-driven search into your existing Swift and SwiftUI apps.
Leverage a managed vector database like Weaviate Cloud (WCD) to handle the complex AI infrastructure, letting us, the developers, focus on what we do best—building amazing user experiences.
Get Hands-On: The Query Helper Application
To truly understand how to harness this power, you need to see the queries in action. That’s why I put together a custom Mac Query Helper Application.
This is a working macOS app that gives you a sandbox environment to test various Weaviate GraphQL queries against a live, pre-loaded dataset. You can change the API keys (you’ll need an OpenAI key for the Generative AI features) and immediately see the results of each query type.
Action Item: Download the source code from the accompanying repository (see Weaviate blog link at the end of this article). Use these query prompts as templates and simply copy/paste them into your own Swift networking layers. It’s the fastest way to bridge the gap between concept and code.
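Before pasting queries in, it helps to see what that networking layer looks like. A Weaviate GraphQL call is just an HTTP POST of a JSON body to the cluster's /v1/graphql endpoint. Here is a minimal sketch — the cluster URL and keys are placeholders, and the helper name is my own:

```swift
import Foundation
#if canImport(FoundationNetworking)
import FoundationNetworking
#endif

/// Builds a POST request for Weaviate's GraphQL endpoint.
/// The cluster URL and both keys are placeholders — substitute your own.
func makeGraphQLRequest(query: String,
                        clusterURL: URL,
                        weaviateKey: String,
                        openAIKey: String) throws -> URLRequest {
    var request = URLRequest(url: clusterURL.appendingPathComponent("v1/graphql"))
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.setValue("Bearer \(weaviateKey)", forHTTPHeaderField: "Authorization")
    // The generative and vectorizer modules need the model provider's key too.
    request.setValue(openAIKey, forHTTPHeaderField: "X-OpenAI-Api-Key")
    request.httpBody = try JSONSerialization.data(withJSONObject: ["query": query])
    return request
}
```

From there, `URLSession.shared.data(for:)` sends the request, and every query in this article goes through the same function — only the `query` string changes.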
Query Deep Dive: Three Essential GraphQL Patterns
The power of Weaviate lies in its flexible GraphQL interface, allowing you to move beyond simple retrieval into true generative AI capabilities. Here are three critical query patterns every Apple developer should know.
1. The Foundation: Basic Retrieval and Metadata
The most basic query gets your data, but even here, we can optimize. We often need more than just the domain data; we need the system-generated metadata to show things like relevance scores or internal IDs.
Use limit and offset for paging, and access system-level details via the _additional property:
{
  Get {
    Book(limit: 5, offset: 0) {
      title
      num_pages
      description
      _additional {
        id        # The unique Weaviate ID
        distance  # How close the vector is (relevance)
      }
    }
  }
}
This is your starting point for any results list, giving you the necessary details to power your SwiftUI List views and show users why a result is relevant (e.g., “95% match”).
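On the Swift side, the response maps cleanly onto Codable structs, since Weaviate wraps results in the standard GraphQL `{ "data": { "Get": { ... } } }` envelope. A minimal sketch — the type names and the match-percentage mapping are my own:

```swift
import Foundation

// Mirrors the { data: { Get: { Book: [...] } } } envelope.
struct BookResponse: Decodable {
    struct DataField: Decodable {
        struct GetField: Decodable { let Book: [Book] }
        let Get: GetField
    }
    let data: DataField
}

struct Book: Decodable {
    struct Additional: Decodable {
        let id: String
        let distance: Double
    }
    let title: String
    let num_pages: Int?
    let description: String?
    let _additional: Additional

    /// One simple way to surface relevance to users; assumes
    /// distances fall roughly in the 0...1 range.
    var matchPercent: Int { Int(((1.0 - _additional.distance) * 100).rounded()) }
}
```

Decode with `JSONDecoder().decode(BookResponse.self, from: data)` and feed `data.Get.Book` straight into a SwiftUI `List`.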
2. Prioritization with Reordering (MoveAway and Rerank)
Standard vector search finds similar items. But what if you want to emphasize one concept and de-emphasize another? This is where Reorder queries come in, dramatically improving how well results match user intent.
For example, let’s say you are searching for “Mysteries set in Western Europe,” but you want to prioritize thrillers with a “war” theme and deliberately move away from “romance” plots. You can achieve this using moveTo and moveAwayFrom properties:
{
  Get {
    Book(
      nearText: {
        concepts: ["Mysteries set in Western Europe"]
        distance: 0.6
        moveAwayFrom: { concepts: ["romance"], force: 0.45 }
        moveTo: { concepts: ["war"], force: 0.85 }
      }
    ) {
      title
      description
    }
  }
}
Developer Insight: These operations rely on a vectorizer module backed by an embedding provider (such as OpenAI or Cohere) to embed the concepts, which is why having your API keys ready is essential for these advanced functions.
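In your own app, you will want to build this reorder query from user input rather than hard-code it. The simplest approach is interpolation into a template; here is a sketch (the helper name is mine, and escaping is naive — it assumes concepts contain no embedded quotes):

```swift
import Foundation

/// Builds a nearText query with moveTo/moveAwayFrom reordering.
/// Naive: assumes concepts are plain phrases with no embedded quotes.
func rerankedBookQuery(searching concepts: [String],
                       towards: String, towardsForce: Double,
                       awayFrom: String, awayForce: Double,
                       maxDistance: Double = 0.6) -> String {
    let conceptList = concepts.map { "\"\($0)\"" }.joined(separator: ", ")
    return """
    {
      Get {
        Book(
          nearText: {
            concepts: [\(conceptList)]
            distance: \(maxDistance)
            moveAwayFrom: { concepts: ["\(awayFrom)"], force: \(awayForce) }
            moveTo: { concepts: ["\(towards)"], force: \(towardsForce) }
          }
        ) {
          title
          description
        }
      }
    }
    """
}
```

For anything user-facing you would want proper escaping (or GraphQL variables), but this is enough to wire the pattern into a prototype.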
3. Generative AI with RAG (Retrieval-Augmented Generation)
This is the game-changer. Why return a long description when you can ask the AI to summarize or translate the result before it even hits your app? This is RAG (Retrieval-Augmented Generation).
By adding the generate function to your query, you combine the precision of vector search with the summarization power of an LLM.
Let’s quickly summarize the top 5 relevant books in a single sentence:
{
  Get {
    Book(
      nearText: { concepts: ["Search for mysteries set in Western Europe"] }
      limit: 5
    ) {
      _additional {
        generate(
          singleResult: {
            prompt: "Summarize the book description in a single, short sentence: {description}"
          }
        ) {
          singleResult
        }
      }
    }
  }
}
Instead of your app having to make two separate network calls (one for data, one to an LLM service), the database does the work in one step. This dramatically improves performance and simplifies your Swift code. You can use similar patterns for translation, classification, and more.
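Handling the RAG response in Swift is just as small, because the generated text comes back nested under `_additional.generate.singleResult`. A sketch with hypothetical type names:

```swift
import Foundation

// Mirrors { data: { Get: { Book: [ { _additional: { generate: { singleResult } } } ] } } }
struct GenerateResponse: Decodable {
    struct DataField: Decodable {
        struct GetField: Decodable { let Book: [Entry] }
        let Get: GetField
    }
    struct Entry: Decodable { let _additional: Additional }
    struct Additional: Decodable { let generate: Generated }
    struct Generated: Decodable { let singleResult: String? }
    let data: DataField
}

/// Convenience: pull out just the generated one-line summaries,
/// skipping any entries where generation failed (nil singleResult).
func summaries(from response: GenerateResponse) -> [String] {
    response.data.Get.Book.compactMap { $0._additional.generate.singleResult }
}
```

Note `singleResult` is optional: if generation fails for one object, you still get the rest of the results rather than a decoding error.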
Conclusion: Focus on Features, Not Infrastructure
The biggest takeaway here is efficiency. As an Apple developer, your time is best spent designing beautiful UIs, managing state with SwiftData, and polishing the user experience. You shouldn’t be spending cycles managing vector index health, scaling embeddings, or worrying about LLM orchestration.
By integrating with a fully managed service like Weaviate Cloud, you offload the entire technical complexity of AI infrastructure. You move from being an infrastructure engineer to being a product developer, using simple, clear GraphQL queries to tap into world-class vector search and generative AI.
Go grab that Query Helper Application, run a few examples, and start thinking about how you’ll integrate this next-generation search into your own products today. Happy coding!
The entire article can be found here on the official Weaviate Blog.
