Built-in Search
Built-in document search of Fumadocs
Fumadocs supports document search powered by Orama.
As the built-in search of Fumadocs, it is the default and recommended option: it's easier to set up and completely free.
Search Server
You can create the search route handler from the source object, or from search indexes.
From Source
Create a route handler from the source object.
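For example, in a Next.js app (a minimal sketch, assuming the conventional `@/lib/source` path for your source object):

```ts
// app/api/search/route.ts
import { source } from '@/lib/source';
import { createFromSource } from 'fumadocs-core/search/server';

// expose the search API as a Next.js route handler
export const { GET } = createFromSource(source);
```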
From Search Indexes
Pass search indexes to the function.
Each index needs a `structuredData` field.
Usually, it is provided by your content source (e.g. Fumadocs MDX). You can also extract it from a Markdown/MDX document using the Remark Structure plugin.
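A sketch using `createSearchAPI` in `advanced` mode, assuming a Fumadocs MDX source that exposes `structuredData` on each page:

```ts
// app/api/search/route.ts
import { createSearchAPI } from 'fumadocs-core/search/server';
import { source } from '@/lib/source'; // assumption: a Fumadocs MDX source

export const { GET } = createSearchAPI('advanced', {
  indexes: source.getPages().map((page) => ({
    id: page.url,
    title: page.data.title,
    url: page.url,
    // `structuredData` is usually provided by your content source
    structuredData: page.data.structuredData,
  })),
});
```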
Special Languages
If your language is not on the Orama Supported Languages list, you have to configure it manually:
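For example, configuring a Mandarin tokenizer (a sketch: the `localeMap` option shape follows recent Fumadocs versions and may differ in yours; `@orama/tokenizers` ships the tokenizer):

```ts
import { createFromSource } from 'fumadocs-core/search/server';
import { createTokenizer } from '@orama/tokenizers/mandarin';
import { source } from '@/lib/source';

export const { GET } = createFromSource(source, {
  localeMap: {
    // assumption: `cn` matches a locale code from your i18n config
    cn: {
      components: {
        tokenizer: createTokenizer(),
      },
      search: {
        threshold: 0,
        tolerance: 0,
      },
    },
  },
});
```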
See Orama Docs for more details.
Search Client
You can search documents using:
- Fumadocs UI: The built-in Search UI supports it out-of-the-box.
- Search Client:
| Prop | Type | Default |
| ---- | ---- | ------- |
| `api?` | `string` | - |
| `type` | `"fetch"` | - |
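A minimal sketch with the `useDocsSearch` hook from `fumadocs-core` (the component and endpoint names here are illustrative):

```tsx
'use client';
import { useDocsSearch } from 'fumadocs-core/search/client';

export function SearchBox() {
  const { search, setSearch, query } = useDocsSearch({
    type: 'fetch',
    // api: '/api/search', // override when your route handler lives elsewhere
  });

  // `query.data` contains the results; render them however you like
  return <input value={search} onChange={(e) => setSearch(e.target.value)} />;
}
```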
Tag Filter
Filtering by tag is supported; it's useful for implementing multi-docs, similar to this documentation.
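On the server, attach a `tag` to each indexed page. A sketch (the per-page mapper is one documented pattern; deriving the tag from the first slug segment is just a convention):

```ts
import { createFromSource } from 'fumadocs-core/search/server';
import { source } from '@/lib/source';

export const { GET } = createFromSource(source, (page) => ({
  id: page.url,
  title: page.data.title,
  url: page.url,
  structuredData: page.data.structuredData,
  // tag pages by their top-level folder, e.g. `docs/ui/...` -> `ui`
  tag: page.slugs[0],
}));
```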
and update your search client:
- Fumadocs UI: Configure Tag Filter on Search UI.
- Search Client: pass a tag to the hook, as sketched below.
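A sketch of passing a tag to `useDocsSearch`:

```ts
const { search, setSearch, query } = useDocsSearch({
  type: 'fetch',
  tag: 'ui', // assumption: only pages tagged `ui` should match
});
```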
Internationalization
Update Search Client
For Fumadocs UI
You can skip this step: Fumadocs UI handles it automatically when i18n is configured correctly.
Add `locale` to the search client; only pages with the specified locale will be searchable by the user.
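For example (assuming `cn` is one of your configured locales):

```ts
const { search, setSearch, query } = useDocsSearch({
  type: 'fetch',
  locale: 'cn', // only pages in this locale are searchable
});
```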
Static Export
To work with Next.js static export, use `staticGET` from the search server.
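A sketch of the route handler (the `revalidate` export is an assumption to make the route prerenderable at export time):

```ts
// app/api/search/route.ts
import { source } from '@/lib/source';
import { createFromSource } from 'fumadocs-core/search/server';

// `staticGET` serves the prebuilt search indexes as one static payload
export const { staticGET: GET } = createFromSource(source);

// assumption: mark the route static so it can be prerendered at export time
export const revalidate = false;
```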
`staticGET` is also available on `createSearchAPI`.
and update your search clients:
- Fumadocs UI: See the Static Export guide.
- Search Client: on your search client, use `static` instead of `fetch`.

| Prop | Type | Default |
| ---- | ---- | ------- |
| `initOrama?` | `((locale?: string \| undefined) => AnyOrama \| Promise<AnyOrama>)` | - |
| `from?` | `string` | - |
| `type` | `"static"` | - |
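A sketch of the static client (`from` points at the exported indexes; the value shown is an assumption):

```ts
import { useDocsSearch } from 'fumadocs-core/search/client';

const { search, setSearch, query } = useDocsSearch({
  type: 'static',
  // from: '/api/search', // assumption: URL where the exported indexes live
});
```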
Be Careful
Static Search requires clients to download the exported search indexes. For large docs sites, the download can be very large.
With i18n especially (e.g. Chinese), the bundle size of tokenizers can exceed ~500MB. Use third-party solutions like Algolia for these cases.
Headless
You can host the search server on other backends such as Express and Elysia.
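Since the generated handler speaks the Web `Request`/`Response` API, it can be mounted on any compatible framework. A sketch with Elysia (the handler signature here is an assumption; adapt to your setup):

```ts
import { Elysia } from 'elysia';
import { createFromSource } from 'fumadocs-core/search/server';
import { source } from '@/lib/source';

const searchServer = createFromSource(source);

// the handler accepts a Web `Request` and returns a `Response`,
// so any framework speaking the Web standard can host it
new Elysia()
  .get('/api/search', ({ request }) => searchServer.GET(request))
  .listen(3000);
```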