Documentation

tileserver-rs includes a browser-local AI assistant that lets you interact with your maps using natural language. The LLM runs entirely in your browser via WebGPU — no API keys, no cloud services, no token costs. Your data never leaves your machine.

How It Works

The AI chat is powered by WebLLM, which runs open-source language models directly in your browser using WebGPU acceleration. The assistant can:

  • Navigate the map — fly to cities, countries, or coordinates
  • Query features — find what's visible in the viewport or search tile data
  • Modify styles — change colors, opacity, visibility of map layers
  • Highlight features — temporarily highlight features matching a filter
  • Inspect data — get layer schemas, source statistics, and spatial queries

The entire conversation stays in your browser. Chat history is persisted to localStorage via TanStack DB and survives page refreshes.

Requirements

Info

WebGPU is required. Use Chrome 113+, Edge 113+, or any Chromium-based browser with WebGPU support. Firefox and Safari do not fully support WebGPU yet.
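
Before attempting to load a model, a page can feature-detect WebGPU via `navigator.gpu` (the standard WebGPU entry point). A minimal sketch — not tileserver-rs's actual check:

```typescript
// Feature-detect WebGPU. `navigator.gpu` is the WebGPU entry point; it is
// absent in Firefox/Safari stable and in non-browser runtimes.
function hasWebGpu(): boolean {
  const nav = (globalThis as any).navigator;
  return nav != null && "gpu" in nav;
}

// Stricter check: "gpu" can exist while no adapter is available
// (e.g. blocklisted drivers), so also request an adapter.
async function hasUsableWebGpu(): Promise<boolean> {
  if (!hasWebGpu()) return false;
  const adapter = await (globalThis as any).navigator.gpu.requestAdapter();
  return adapter !== null;
}
```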

| Requirement | Minimum     | Recommended              |
|-------------|-------------|--------------------------|
| Browser     | Chrome 113+ | Chrome 120+              |
| GPU VRAM    | 2 GB        | 6 GB                     |
| Free RAM    | 2 GB        | 8 GB                     |
| Disk Space  | 1 GB        | 5 GB (for larger models) |

Getting Started

  1. Open any style in the map viewer (e.g., http://localhost:8080/styles/protomaps-light/)
  2. Press ⌘K (Mac) or Ctrl+K (Windows/Linux) to open the chat palette
  3. On first use, the model downloads and compiles for your GPU (~30 seconds)
  4. Type a message or select a suggested prompt

Subsequent loads are fast (~2–5 seconds) because the compiled model is cached in IndexedDB.

Suggested Prompts

The chat palette shows suggested prompts when the conversation is empty:

  • "Fly to Paris, France and show me the Eiffel Tower area"
  • "What layers are available on this map?"
  • "Take me to Tokyo, Japan at a good zoom level"
  • "Show me the entire Mediterranean Sea region"

Available Models

tileserver-rs ships with four pre-configured models. Choose based on your hardware and needs:

| Model                 | Size   | Tool Calling     | Best For                           |
|-----------------------|--------|------------------|------------------------------------|
| Hermes 3 8B (default) | 4.9 GB | ✅ Native        | Full tool support, best accuracy   |
| Hermes 2 Pro 8B       | 5.0 GB | ✅ Native        | Alternative tool-capable model     |
| Qwen 2.5 3B           | 2.0 GB | ❌ Text fallback | Lower VRAM, basic navigation       |
| Qwen 2.5 1.5B         | 1.0 GB | ❌ Text fallback | Minimal hardware, basic navigation |

Tool-capable models (Hermes) use native OpenAI-format function calling — the LLM decides which tool to invoke and the UI executes it automatically. Non-tool models (Qwen) use a text-based fallback where the LLM emits structured action blocks that are parsed and executed client-side.
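
The exact action-block syntax the fallback uses is internal to tileserver-rs. Purely as an illustration, suppose a non-tool model emits `<action>{…}</action>` blocks containing JSON; a client-side parser for that hypothetical format could look like:

```typescript
// Hypothetical action-block format for non-tool models (illustrative only):
//   <action>{"tool": "fly_to", "args": {"lng": 2.35, "lat": 48.86, "zoom": 12}}</action>
type Action = { tool: string; args: Record<string, unknown> };

function parseActions(text: string): Action[] {
  const actions: Action[] = [];
  const re = /<action>([\s\S]*?)<\/action>/g;
  for (const match of text.matchAll(re)) {
    try {
      const parsed = JSON.parse(match[1]);
      // The text fallback only supports basic navigation tools.
      if (parsed.tool === "fly_to" || parsed.tool === "fit_bounds") {
        actions.push(parsed);
      }
    } catch {
      // Malformed JSON: ignore rather than crash the chat UI.
    }
  }
  return actions;
}
```

Whatever the real syntax, the flow is the same: the parsed actions are executed client-side exactly as native tool calls would be.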

You can switch models at any time using the model selector dropdown in the chat palette.

Map Tools

The AI assistant has access to 13 tools organized into three categories: navigation, styling, and data queries.

Navigation Tools

fly_to

Animate the map camera to a specific location.

"Fly to the Colosseum in Rome"
"Show me downtown Manhattan at zoom 15"
"Go to coordinates 139.7, 35.7 with a 45° bearing"
| Parameter | Type   | Required | Description                 |
|-----------|--------|----------|-----------------------------|
| lng       | number | Yes      | Longitude (-180 to 180)     |
| lat       | number | Yes      | Latitude (-90 to 90)        |
| zoom      | number | No       | Zoom level 0–22 (default 12) |
| bearing   | number | No       | Bearing in degrees (default 0) |
| pitch     | number | No       | Pitch 0–85° (default 0)     |
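
Before executing fly_to, a client would typically clamp arguments to the documented ranges and fill in the defaults. A sketch of that normalization (not tileserver-rs's actual validation code):

```typescript
interface FlyToArgs { lng: number; lat: number; zoom?: number; bearing?: number; pitch?: number; }

const clamp = (v: number, lo: number, hi: number) => Math.min(hi, Math.max(lo, v));

// Apply the parameter ranges and defaults from the table above.
function normalizeFlyTo(args: FlyToArgs) {
  return {
    center: [clamp(args.lng, -180, 180), clamp(args.lat, -90, 90)] as [number, number],
    zoom: clamp(args.zoom ?? 12, 0, 22),
    bearing: (((args.bearing ?? 0) % 360) + 360) % 360, // normalize to [0, 360)
    pitch: clamp(args.pitch ?? 0, 0, 85),
  };
}
```

The normalized result maps directly onto the options object a MapLibre `flyTo` call expects.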

fit_bounds

Fit the map camera to a bounding box — useful for countries, regions, or areas.

"Show me all of Japan"
"Zoom to the Mediterranean Sea"
"Fit the map to the continental United States"
| Parameter | Type   | Required | Description                    |
|-----------|--------|----------|--------------------------------|
| west      | number | Yes      | West longitude                 |
| south     | number | Yes      | South latitude                 |
| east      | number | Yes      | East longitude                 |
| north     | number | Yes      | North latitude                 |
| padding   | number | No       | Padding in pixels (default 50) |
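
Under Web Mercator, the zoom that fits a bounding box falls out of the ratio between the viewport size and the box's projected size. A simplified sketch, assuming 512 px world tiles (MapLibre's world size is 512 × 2^zoom) and ignoring bearing and pitch:

```typescript
// Project latitude to normalized Web Mercator Y in [0, 1].
const mercY = (lat: number) => {
  const r = (lat * Math.PI) / 180;
  return (1 - Math.log(Math.tan(r) + 1 / Math.cos(r)) / Math.PI) / 2;
};

// Zoom at which [west, south, east, north] fits the viewport, with padding
// subtracted from each edge. World size at zoom z is 512 * 2^z pixels.
function zoomForBounds(
  [west, south, east, north]: [number, number, number, number],
  viewportWidth: number,
  viewportHeight: number,
  padding = 50,
): number {
  const fracX = (east - west) / 360;                // fraction of world width
  const fracY = mercY(south) - mercY(north);        // north has the smaller Y
  const zx = Math.log2((viewportWidth - 2 * padding) / (fracX * 512));
  const zy = Math.log2((viewportHeight - 2 * padding) / (fracY * 512));
  return Math.min(zx, zy); // the tighter axis decides the zoom
}
```

Doubling the viewport size raises the fitted zoom by exactly one level, which is a handy sanity check on the math.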

get_map_state

Get the current map center, zoom, bearing, pitch, and visible layers. The assistant uses this to understand what you're looking at before making changes.

Styling Tools

set_layer_visibility

Show or hide a map layer by its ID.

"Hide the water layer"
"Show the buildings layer"
"Turn off all label layers"

set_layer_paint

Change a paint property of a map layer — color, opacity, width, and more.

"Make the water dark blue"
"Set the building fill opacity to 0.5"
"Change road line width to 3"

set_layer_filter

Apply a MapLibre filter expression to a layer. Only features matching the filter are shown.

"Only show parks in the landuse layer"
"Filter buildings taller than 50 meters"
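
For intuition, "only show parks" corresponds to a MapLibre filter expression like `["==", ["get", "class"], "park"]` (exact field names depend on your tile schema). A toy evaluator for a tiny subset of the expression language — illustrative only, nothing like MapLibre's full expression engine:

```typescript
type Expr = unknown[];
type Props = Record<string, unknown>;

// Evaluate a minimal subset of MapLibre filter expressions against a
// feature's properties. Supports only ==, !=, all, any.
function evalFilter(expr: Expr, props: Props): boolean {
  const [op, ...args] = expr;
  switch (op) {
    case "==": return resolve(args[0], props) === resolve(args[1], props);
    case "!=": return resolve(args[0], props) !== resolve(args[1], props);
    case "all": return args.every((e) => evalFilter(e as Expr, props));
    case "any": return args.some((e) => evalFilter(e as Expr, props));
    default: throw new Error(`unsupported operator: ${op}`);
  }
}

// ["get", "name"] reads a property; anything else is a literal.
function resolve(arg: unknown, props: Props): unknown {
  if (Array.isArray(arg) && arg[0] === "get") return props[arg[1] as string];
  return arg;
}
```

On the real map, the expression is simply passed to MapLibre's `map.setFilter(layerId, expr)` and the renderer does the evaluating.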

add_highlight

Temporarily highlight features matching a filter with a colored circle. Highlights auto-remove after 8 seconds.

"Highlight all hospitals"
"Show me where the schools are in red"

generate_style

Apply multiple style changes at once from a natural language description.

"Make this a dark mode map"
"Give me a satellite-style color scheme"
"Make all text labels larger"

Data Query Tools

These tools query the tile data served by tileserver-rs — either from the rendered viewport or directly from the vector tile sources.

query_rendered_features

Query features currently visible in the map viewport. Returns properties and geometry type.

"What features are visible right now?"
"Show me the properties of buildings in view"
"List all points of interest I can see"

get_source_schema

Get the schema of a tile source — available layers, field names and types, zoom range, and bounds.

"What layers are in the openmaptiles source?"
"Show me the fields available in the buildings layer"

get_source_stats

Get statistics for a tile source — bounds, zoom range, layer count, attribution.

"What's the zoom range of this data source?"
"Show me the attribution for the terrain data"

spatial_query

Query features from a tile source within a bounding box. This queries the actual vector tile data on the server, not just what's rendered in the viewport.

"Find all buildings within 1km of the Eiffel Tower"
"What points of interest are in this area?"
| Parameter | Type     | Required | Description                              |
|-----------|----------|----------|------------------------------------------|
| source    | string   | Yes      | Source ID to query                       |
| bbox      | number[] | Yes      | Bounding box [west, south, east, north]  |
| zoom      | number   | No       | Tile resolution (default 14)             |
| layers    | string[] | No       | Layer IDs to query                       |
| limit     | number   | No       | Max features to return (default 100)     |
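
To answer a bbox query at a given zoom, the server has to read every vector tile the box overlaps. The standard XYZ/Web Mercator tile math for that looks like this (a sketch of the well-known formulas, not tileserver-rs's internals):

```typescript
// Convert lon/lat to XYZ tile coordinates at a zoom level (Web Mercator).
function lonLatToTile(lng: number, lat: number, zoom: number) {
  const n = 2 ** zoom;
  const x = Math.floor(((lng + 180) / 360) * n);
  const r = (lat * Math.PI) / 180;
  const y = Math.floor(((1 - Math.log(Math.tan(r) + 1 / Math.cos(r)) / Math.PI) / 2) * n);
  return { x, y };
}

// Tiles covering [west, south, east, north] at `zoom` — the set a
// server-side bbox query would need to decode.
function tileRange([west, south, east, north]: number[], zoom: number) {
  const min = lonLatToTile(west, north, zoom); // north-west corner = smallest x/y
  const max = lonLatToTile(east, south, zoom);
  return { minX: min.x, minY: min.y, maxX: max.x, maxY: max.y };
}
```

This is also why the default zoom of 14 matters: each extra zoom level quadruples the number of tiles a large bbox can touch.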

get_overlays

List all user-dropped file overlays on the map — file names, formats, feature counts, colors, and visibility. Works with files dragged onto the map viewer (GeoJSON, KML, GPX, CSV, Shapefiles, PMTiles).

Keyboard Shortcuts

| Shortcut     | Action              |
|--------------|---------------------|
| ⌘K / Ctrl+K  | Toggle chat palette |
| Esc          | Close chat palette  |
| Enter        | Send message        |

Architecture

User Input → Chat Palette (Vue)
    ↓
useLlmPanel (composable)
    ↓
useLlmChat → useChat({ connection: stream(adapter) })
    ↓
stream() adapter — converts WebLLM → AG-UI protocol events
    ↓
WebLLM engine (browser-local, WebGPU)
    ├── chat.completions.create({ stream: true })
    └── Tool calls (fly_to, set_layer_paint, etc.)
    ↓
AG-UI events stream back to TanStack AI Vue
    ↓
Tool results auto-executed client-side
    ↓
Chat Palette renders messages + tool results
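
Tool-capable models receive their tool definitions in the OpenAI function-calling format that WebLLM's `chat.completions.create({ tools: [...] })` accepts. A sketch of what the fly_to definition could look like — the descriptions are taken from the parameter table above, but the schema tileserver-rs actually registers is internal:

```typescript
// OpenAI-format tool definition (illustrative; field values drawn from the
// fly_to parameter table, not from tileserver-rs source).
const flyToTool = {
  type: "function",
  function: {
    name: "fly_to",
    description: "Animate the map camera to a specific location.",
    parameters: {
      type: "object",
      properties: {
        lng: { type: "number", description: "Longitude (-180 to 180)" },
        lat: { type: "number", description: "Latitude (-90 to 90)" },
        zoom: { type: "number", description: "Zoom level 0-22 (default 12)" },
        bearing: { type: "number", description: "Bearing in degrees (default 0)" },
        pitch: { type: "number", description: "Pitch 0-85 degrees (default 0)" },
      },
      required: ["lng", "lat"],
    },
  },
} as const;
```

When the model returns a tool call naming `fly_to`, the stream adapter translates it into an AG-UI event and the UI executes it against the map.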

Key packages:

  • WebLLM — browser-local model inference over WebGPU
  • TanStack AI Vue (useChat) — chat state and streaming UI
  • TanStack DB — localStorage persistence of chat history
  • AG-UI — event protocol between the WebLLM adapter and the UI

Chat Persistence

Chat messages are automatically saved to your browser's localStorage via TanStack DB. This includes:

  • All user and assistant messages
  • Tool call records (which tool was called, with what arguments)
  • Spatial query results

Messages persist across page refreshes and browser restarts. They are stored locally and never sent to any server.

Troubleshooting

"WebGPU is not supported"

Your browser doesn't support WebGPU. Use Chrome 113+ or Edge 113+. On macOS, Safari technology previews have partial WebGPU support but may not work reliably with WebLLM.

Model download is slow

The first download transfers 1–5 GB depending on the model. After the first download, the compiled model is cached in IndexedDB and loads in 2–5 seconds. Try a smaller model (Qwen 2.5 1.5B at 1 GB) if bandwidth is limited.

Tool calls don't execute

If the assistant describes an action but doesn't execute it, you may be using a non-tool model (Qwen). Switch to Hermes 3 8B for native tool calling support. The Qwen models use a text-based fallback that only supports basic navigation (fly_to and fit_bounds).

Chat palette won't open

Make sure you're on a style viewer page (/styles/{style}/). The AI chat is only available in the map viewer, not on the home page or data inspector.

GPU out of memory

Try a smaller model. Qwen 2.5 1.5B requires only ~1 GB of VRAM. Close other GPU-intensive tabs or applications. On systems with shared GPU memory (integrated graphics), ensure enough system RAM is available.

Privacy & Security

  • No data leaves your browser — the LLM runs entirely via WebGPU
  • No API keys required — no OpenAI, Anthropic, or cloud AI accounts needed
  • No token costs — inference is free, unlimited, forever
  • Chat history is local — stored in localStorage, never uploaded
  • Server-side tools query your own data — spatial queries go to your tileserver-rs instance, not any external service

Live Demo

Try the AI chat on the live demo at demo.tileserver.app. Open any style, press ⌘K, and start talking to the map.

Next Steps