Free · Private · Goes further than the built-in tool

When the built-in
import tool isn't enough

Anthropic's built-in import covers simple memory transfers. LLMigrate handles the harder cases: large export files, Gemini migrations, and local LLM setup.

100% local — no uploads, no servers, no accounts
ChatGPT → LLMigrate → Claude
Gemini → LLMigrate → Local LLM

The cases the built-in tool doesn't cover

Claude's native import works well for simple memory transfers. LLMigrate handles large exports, non-Claude destinations, and deeper context extraction.

📂
Large export file splitting

ChatGPT exports from heavy users can run to hundreds of megabytes. LLMigrate streams and splits them into chunks that fit Claude's upload limits — no memory crash, no manual work.

Local LLM destination

Anthropic's tool imports to Claude only. LLMigrate generates a ready-to-use Modelfile or system prompt for Ollama and LM Studio. Your context, running on your hardware.

Gemini full context migration

Google Takeout doesn't export full conversation text. LLMigrate generates a structured extraction prompt that pulls what Gemini actually knows about you — memory, Gems, and usage patterns.

🧠
Deeper extraction prompt

Anthropic's import prompt extracts stored memory entries. LLMigrate's prompt also captures inferred patterns, communication preferences, technical context, and active projects — things not explicitly stored as memories.

📊
Usage profile analysis

Analyses your export to flag whether your session lengths and usage patterns are likely to hit Claude's context limits — with specific mitigation strategies for heavy Codex or coding users.

🔒
Fully private, fully auditable

Everything runs in your browser. No files leave your device. The tool is a single HTML file — view source and read every line. No accounts, no backend, no analytics.

Six steps. About 20 minutes.

If you're migrating a simple ChatGPT memory to Claude, use Anthropic's built-in tool — it's faster. LLMigrate is for the cases that need more.

1
Select platforms and focus
Choose your source (ChatGPT or Gemini), destination (Claude or local LLM), and whether to emphasise professional or personal context.
2
Request your data export
For ChatGPT: trigger an export now — it can take up to 24 hours. The tool tells you exactly where to click. You can continue while you wait.
3
Extract your context
Copy a generated extraction prompt and paste it into your source AI. It produces a structured summary of your preferences, projects, and expertise.
4
Prepare your files
Optionally upload your raw export files. The tool splits oversized JSON exports into chunks that fit your destination's upload limits.
5
Import into your new AI
Detailed instructions for every import mechanism — User Preferences, Memory, Projects knowledge base, or a local Modelfile.
6
Verify
A set of suggested verification prompts to confirm your new AI has absorbed your context correctly.

No backend. No compromise.

AI context contains some of the most sensitive professional information you have. LLMigrate is built around a simple constraint: it never touches a server.

🛡
Zero network requests inside the tool
The tool page loads once and runs entirely offline. File processing uses the browser's native FileReader and Blob APIs. Nothing is transmitted.
🔍
Auditable by design
The tool is a single HTML file. View source and you can read every line of logic. No minified bundles, no external dependencies, no obfuscation.
📵
No accounts required
No sign-up, no email, no login. Open the tool, use it, close the tab. No state is retained between sessions.
🇬🇧
UK-GDPR compatible by architecture
Because no personal data ever reaches a server, there is no data controller relationship to manage. The privacy guarantee is structural, not contractual.

Get the most out of your new platform

Claude Pro and Local LLM are LLMigrate's primary destinations. Pro unlocks the memory and Projects features that make context migration stick. Local LLM gives you full control.

Claude Pro

Affiliate link · Commission earned

Unlocks persistent Memory, unlimited Projects file storage, and Opus 4.6 access. The Pro tier is the recommended destination for most LLMigrate users — it supports every import mechanism the tool generates.

Try Claude Pro →
🤖

ChatGPT Plus

Affiliate link · Commission earned

If you're migrating from Gemini and considering ChatGPT as a destination, Plus gives you full GPT-5.2 access, persistent Memory, and Projects. ChatGPT-as-destination is coming soon to LLMigrate.

Try ChatGPT Plus →

Mac Mini M4

Affiliate link · Commission earned

Running Ollama locally? The Mac Mini M4 handles 7B–13B models well without a discrete GPU, and the M4 Pro variant runs 32B models comfortably. One of the most cost-efficient local LLM hardware options available.

View Mac Mini M4 →
🖥

Hetzner Cloud VPS

Affiliate link · Commission earned

Want to run Ollama on a server rather than your local machine? Hetzner's GPU-enabled VPS instances are among the cheapest in Europe for running mid-size models 24/7.

View Hetzner plans →

Common questions

Doesn't Claude already have a built-in import tool?
Yes — Anthropic launched claude.com/import-memory in March 2026. It's a two-step copy-paste that works well for straightforward memory transfers from ChatGPT or Gemini to Claude Pro. If that's all you need, use it. LLMigrate is for the cases it doesn't cover: large export files that need splitting, migrations to local LLMs, deeper context extraction beyond stored memory entries, and usage pattern analysis for heavy users.
Why doesn't the simple import prompt work well for large ChatGPT accounts?
OpenAI appears to partially limit how much context the export prompt can retrieve, particularly for accounts with years of history. The more reliable route for heavy users is the full data export (Settings → Data controls → Export data), which produces the complete conversations.json file. LLMigrate handles that file regardless of size.
What does "context" mean — is it my chat history?
No. Raw chat history is rarely what you need to carry across. Context is the accumulated knowledge your AI has about you: your role, communication style, ongoing projects, technical preferences, and behavioural instructions. LLMigrate extracts this as a structured summary, not a log dump.
Does it work if I have years of ChatGPT history?
Yes. Large accounts often receive multiple conversations.json files from ChatGPT exports. LLMigrate streams them in 4 MB slices — the full file is never loaded into memory at once — and splits the output into chunks that fit within Claude's upload limits.
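The slicing approach can be sketched with the browser's native Blob API. This is a minimal illustration under the 4 MB figure above, not LLMigrate's actual source — `sliceFile` is a hypothetical helper name:

```javascript
// Minimal sketch: split an oversized export into ~4 MB slices.
// Blob.slice() returns lightweight views over the underlying file,
// so no slice's bytes are read into memory until it is consumed.
const SLICE_SIZE = 4 * 1024 * 1024; // 4 MB

function sliceFile(file) {
  const slices = [];
  for (let offset = 0; offset < file.size; offset += SLICE_SIZE) {
    slices.push(file.slice(offset, offset + SLICE_SIZE));
  }
  return slices;
}
```

Each slice can then be read individually (via FileReader or Blob.text()) and written out as a separate chunk, keeping peak memory use near the slice size rather than the file size.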
Does Gemini export work the same way?
No. Google Takeout for Gemini exports activity metadata rather than full conversation text. For Gemini migrations, the context summary generated in Step 3 is the primary artefact. LLMigrate's extraction prompt specifically instructs Gemini to check its Memory bank and Gems instructions, which the generic Anthropic prompt does not.
Can I use this with a local LLM?
Yes — and this is the one use case Anthropic's tool doesn't touch at all. LLMigrate supports Ollama and LM Studio as destinations. For Ollama it generates a complete Modelfile with your context in the SYSTEM block. For LM Studio it generates a ready-to-paste system prompt.
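As a rough illustration (the base model and context text below are placeholders, not actual LLMigrate output), a context-carrying Ollama Modelfile takes this shape:

```
# Placeholder example — base model and context are illustrative only.
FROM llama3
SYSTEM """
You are assisting a backend engineer who prefers concise answers,
works primarily in Go, and is currently building a billing service.
"""
```

Loading it with `ollama create my-context -f Modelfile` produces a local model that applies your migrated context to every session.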
Is this actually private? How can I verify it?
The tool page is a single HTML file with no external scripts, fonts, or API calls. Open your browser's developer tools (F12 → Network tab), load the tool, and you'll see zero network requests during use. You can also view the full source — there are no minified bundles, no obfuscation.
Is the tool free?
Yes, entirely free. LLMigrate earns a small commission if you sign up for a paid platform through one of the links on this page. The commission doesn't affect the price you pay.