Anthropic's built-in import covers simple memory transfers. LLMigrate handles the harder cases: large export files, Gemini and other non-Claude destinations, local LLM setup, and deeper context extraction.
ChatGPT exports from heavy users can run to hundreds of megabytes. LLMigrate streams and splits them into chunks that fit Claude's upload limits: no out-of-memory crashes, no manual splitting.
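Under the hood the idea is simple. A minimal sketch, assuming a hypothetical `splitExport` helper and an illustrative chunk budget (not LLMigrate's actual code, and not Claude's documented limit):

```typescript
// Illustrative only: split a large conversations.json export into
// upload-sized pieces without loading the whole file into memory.
const CHUNK_BYTES = 8 * 1024 * 1024; // assumed ~8 MB budget per chunk

function splitExport(file: File): Blob[] {
  const chunks: Blob[] = [];
  // Blob.slice is lazy: the browser reads each slice only when it is
  // actually uploaded, so a multi-hundred-MB export never sits in RAM.
  for (let offset = 0; offset < file.size; offset += CHUNK_BYTES) {
    chunks.push(file.slice(offset, offset + CHUNK_BYTES, "application/json"));
  }
  return chunks;
}
```

A real splitter would also cut on conversation boundaries rather than raw bytes, so every chunk Claude receives is self-contained.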
Anthropic's tool imports to Claude only. LLMigrate generates a ready-to-use Modelfile or system prompt for Ollama and LM Studio. Your context, running on your hardware.
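As a sketch of what that generation step looks like, here's a hypothetical `buildModelfile` helper; the base model tag and the system-prompt wrapper are assumptions, not LLMigrate's actual output:

```typescript
// Illustrative only: wrap extracted context in an Ollama Modelfile.
// The base model tag and wording are assumptions, not LLMigrate's output.
function buildModelfile(context: string, baseModel = "llama3.1:8b"): string {
  return [
    `FROM ${baseModel}`,
    'SYSTEM """',
    "You are assisting a user with the following migrated context:",
    context.trim(),
    '"""',
  ].join("\n");
}
```

The resulting file loads with `ollama create my-context -f Modelfile`, after which the model starts every session already knowing your context.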
Google Takeout doesn't export full conversation text. LLMigrate generates a structured extraction prompt that pulls what Gemini actually knows about you — memory, Gems, and usage patterns.
Anthropic's import prompt extracts stored memory entries. LLMigrate's prompt also captures inferred patterns, communication preferences, technical context, and active projects — things not explicitly stored as memories.
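Purely as an illustration of the difference (the real prompt ships inside the tool), a deep-extraction prompt along these lines asks for more than stored entries:

```typescript
// Hypothetical outline of a deep-extraction prompt; the actual wording
// LLMigrate generates differs.
const EXTRACTION_PROMPT = `
List everything you know about me, organised under these headings:
1. Stored memory entries, verbatim
2. Inferred patterns and preferences you act on but have not stored
3. Communication preferences: tone, format, level of detail
4. Technical context: languages, tools, stack
5. Active projects and their current state
Write each item as a single self-contained sentence.
`.trim();
```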
LLMigrate analyses your export and flags whether your session length and usage patterns are likely to hit Claude's context limits, with specific mitigation strategies for heavy Codex or coding users.
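The check itself can be a rough heuristic. A sketch, assuming the common ~4 characters per token approximation against Claude's 200K-token context window; the headroom threshold is illustrative:

```typescript
// Illustrative heuristic: flag sessions likely to strain Claude's
// context window. Ratio and threshold are approximations, not the
// tool's exact logic.
const CLAUDE_CONTEXT_TOKENS = 200_000;

function flagsContextRisk(sessionText: string): boolean {
  const estimatedTokens = Math.ceil(sessionText.length / 4);
  // Leave headroom for Claude's own responses within the window.
  return estimatedTokens > CLAUDE_CONTEXT_TOKENS * 0.5;
}
```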
Everything runs in your browser. No files leave your device. The tool is a single HTML file — view source and read every line. No accounts, no backend, no analytics.
If you're migrating a simple ChatGPT memory to Claude, use Anthropic's built-in tool — it's faster. LLMigrate is for the cases that need more.
AI context contains some of the most sensitive professional information you have. LLMigrate is built around a simple constraint: it never touches a server.
If you're using Ollama or LM Studio as your destination, these are the hardware options most people end up with. Affiliate links are labelled; they help cover the domain cost and add nothing to what you pay.
Silent, fanless, and handles 7B–13B models without a discrete GPU. The M4 Pro variant runs 32B models comfortably. Best price-to-performance for local LLMs if you're in the Apple ecosystem.
View Mac Mini M4 →
A mini PC with 32 GB RAM and a 1 TB NVMe SSD is the practical minimum for running 13B models smoothly on Windows or Linux. A reliable entry point if you don't want to commit to the Apple ecosystem.
View on Amazon →
Prefer not to run hardware locally? Hetzner's GPU-enabled VPS instances are among the cheapest in Europe for hosting Ollama 24/7. No affiliate programme; recommended because it's genuinely good value.
View Hetzner plans →