The AI model 'MiniMax-M1', which supports the world's longest context window of 1 million input tokens and 80,000 output tokens, was trained for only about 78 million yen and is now available as open source for anyone to download

Day 1/5 of #MiniMaxWeek : We're open-sourcing MiniMax-M1, our latest LLM — setting new standards in long-context reasoning.
— MiniMax (official) (@MiniMax__AI) June 16, 2025
- World's longest context window: 1M-token input, 80k-token output
- State-of-the-art agentic use among open-source models
- RL at unmatched efficiency:… pic.twitter.com/bGfDlZA54n
GitHub - MiniMax-AI/MiniMax-M1: MiniMax-M1, the world's first open-weight, large-scale hybrid-attention reasoning model.
https://github.com/MiniMax-AI/MiniMax-M1
MiniMax-M1 - a MiniMaxAI Collection
https://huggingface.co/collections/MiniMaxAI/minimax-m1-68502ad9634ec0eeac8cf094
MiniMax describes MiniMax-M1 as 'the world's first open-weight, large-scale hybrid-attention reasoning model.' MiniMax-M1 combines a hybrid Mixture-of-Experts (MoE) architecture with the Lightning Attention mechanism.
Developed on the basis of MiniMax-Text-01, MiniMax-M1 contains a total of 456 billion parameters, with 45.9 billion parameters active per token. Like MiniMax-Text-01, MiniMax-M1 natively supports a context window of 1 million tokens, eight times the size of DeepSeek R1's context window.
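As a rough illustration of how the released weights might be loaded, here is a minimal Python sketch using Hugging Face Transformers. The repository ID 'MiniMaxAI/MiniMax-M1-80k' is assumed from the collection linked above, and a 456-billion-parameter model needs a multi-GPU node in practice, so treat this as a sketch rather than a recipe.

```python
# Minimal sketch of loading MiniMax-M1 with Hugging Face Transformers.
# The repo id "MiniMaxAI/MiniMax-M1-80k" is assumed from the collection above;
# a 456B-parameter model requires a multi-GPU node, so this is illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MiniMaxAI/MiniMax-M1-80k"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,   # custom hybrid-attention architecture
    torch_dtype="auto",       # use the checkpoint's native precision
    device_map="auto",        # shard across available GPUs
)

messages = [{"role": "user", "content": "Summarize the MiniMax-M1 architecture."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```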
MiniMax-M1 employs the Lightning Attention mechanism, which lets test-time compute scale efficiently. For example, when generating 100,000 tokens, MiniMax-M1 consumes only about 25% of the FLOPs required by DeepSeek R1. This makes MiniMax-M1 particularly well suited to complex tasks that demand long context windows and extended reasoning.
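To see why linear attention helps at these lengths, here is a back-of-envelope sketch: softmax attention cost grows quadratically with sequence length, while a Lightning-style linear attention grows roughly linearly. The head dimension below is hypothetical and the calculation only counts attention FLOPs, so it will not reproduce the 25% figure exactly.

```python
# Back-of-envelope comparison of attention cost growth with sequence length.
# Softmax attention scores cost roughly O(n^2 * d) FLOPs, while a linear
# (Lightning-style) attention costs roughly O(n * d^2). The dimension below
# is hypothetical and does not correspond to either model's real configuration.
def softmax_attention_flops(n, d):
    return 2 * n * n * d      # QK^T plus attention-weighted V, roughly

def linear_attention_flops(n, d):
    return 2 * n * d * d      # kernelized K^T V accumulation plus readout

d = 128  # hypothetical per-head dimension
for n in (10_000, 100_000, 1_000_000):
    quad = softmax_attention_flops(n, d)
    lin = linear_attention_flops(n, d)
    print(f"n={n:>9,}: linear/softmax FLOP ratio ~ {lin / quad:.4f}")
```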
The graphs below compare the performance of major commercial AI models on competitive-level mathematics, coding, software engineering, agentic tool use, and text comprehension tasks. The red line shows MiniMax-M1, which performs comparably to competing AI models on every task.
MiniMax has shared several examples of what MiniMax-M1 can do. In the first, a single prompt is enough to instantly generate an HTML page with a canvas-based animated particle background.
1️⃣ UI Components Spotlight
— MiniMax (official) (@MiniMax__AI) June 16, 2025
Just drop in a prompt — it instantly builds an HTML page with a canvas-based animated particle background pic.twitter.com/AMVn5BZDKr
'We asked the MiniMax-M1 to create a typing speed test, and it produced a clean and functional web app that tracks WPM in real time.
No plugins, no setup required. Just practical, production-ready AI. 👇'
2️⃣Interactive Apps
— MiniMax (official) (@MiniMax__AI) June 16, 2025
Asked MiniMax-M1 to build a typing speed test — it generated a clean, functional web app that tracks WPM in real time.
No plugins, no setup — just practical, production-ready AI. 👇 pic.twitter.com/QrNbBp3lo0
The following HTML page was generated with the prompt: 'Create an HTML page with a canvas-based animated particle background. The particles should move smoothly and connect as they approach each other. Add a heading text to the center of the canvas.' The page is built with the HTML canvas element and JavaScript; a rough Python approximation of the update-and-connect logic appears after the tweet below.
3️⃣ Visualizations
— MiniMax (official) (@MiniMax__AI) June 16, 2025
Prompt: Create an HTML page with a canvas-based animated particle background. The particles should move smoothly and connect when close. Add a central heading text over the canvas
Canvas+JS, and the visuals slap.👇 pic.twitter.com/48ZHLHel4j
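For reference, the heart of such a particle demo is a per-frame position update plus a pairwise distance check that decides which particles to link with lines. Below is a headless Python approximation of that logic, not the model's actual HTML/JavaScript output; particle counts and thresholds are arbitrary.

```python
# Headless sketch of the "animated particles that connect when close" logic.
# The real demo renders this on an HTML canvas; here we only compute one
# update step and the list of particle pairs to connect.
import math
import random

WIDTH, HEIGHT = 800, 600
CONNECT_DIST = 100.0          # link particles closer than this distance

def make_particles(n=50):
    return [{
        "x": random.uniform(0, WIDTH),
        "y": random.uniform(0, HEIGHT),
        "vx": random.uniform(-1, 1),
        "vy": random.uniform(-1, 1),
    } for _ in range(n)]

def step(particles):
    """Advance one frame: move particles and bounce them off the edges."""
    for p in particles:
        p["x"] += p["vx"]
        p["y"] += p["vy"]
        if not 0 <= p["x"] <= WIDTH:
            p["vx"] *= -1
        if not 0 <= p["y"] <= HEIGHT:
            p["vy"] *= -1

def connections(particles):
    """Return index pairs of particles close enough to be joined by a line."""
    pairs = []
    for i in range(len(particles)):
        for j in range(i + 1, len(particles)):
            dx = particles[i]["x"] - particles[j]["x"]
            dy = particles[i]["y"] - particles[j]["y"]
            if math.hypot(dx, dy) < CONNECT_DIST:
                pairs.append((i, j))
    return pairs

particles = make_particles()
step(particles)
print(f"{len(connections(particles))} connections this frame")
```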
A maze generator and pathfinding visualization tool was generated from the following prompt: 'Create a maze generator and pathfinding visualization tool. Randomly generate a maze and visualize how an algorithm solves it step by step. Use canvas and animations to make it visually appealing.' A Python sketch of the maze generation and A* search follows the tweet below.
4️⃣ Game
— MiniMax (official) (@MiniMax__AI) June 16, 2025
Prompt: Create a maze generator and pathfinding visualizer. Randomly generate a maze and visualize A* algorithm solving it step by step. Use canvas and animations. Make it visually appealing. pic.twitter.com/VSeVndWemd
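The maze demo combines two standard pieces: a random maze generator (for example, a recursive backtracker) and an A* search replayed step by step for the animation. The following Python sketch approximates those two pieces on a grid; it is not the model's canvas output, and the grid size and heuristic are arbitrary choices.

```python
# Minimal sketch of the maze-and-pathfinding task: generate a random maze with
# recursive backtracking, then solve it with A*. The real demo animates each
# step on an HTML canvas; here we just print the path length.
import heapq
import random

W, H = 21, 21  # odd dimensions: walls sit on even rows/columns

def generate_maze():
    """Carve a maze with an iterative recursive-backtracker walk."""
    grid = [[1] * W for _ in range(H)]          # 1 = wall, 0 = passage
    stack = [(1, 1)]
    grid[1][1] = 0
    while stack:
        x, y = stack[-1]
        nbrs = [(x + dx, y + dy) for dx, dy in ((2, 0), (-2, 0), (0, 2), (0, -2))
                if 0 < x + dx < W and 0 < y + dy < H and grid[y + dy][x + dx] == 1]
        if nbrs:
            nx, ny = random.choice(nbrs)
            grid[(y + ny) // 2][(x + nx) // 2] = 0  # knock down the wall between
            grid[ny][nx] = 0
            stack.append((nx, ny))
        else:
            stack.pop()
    return grid

def astar(grid, start, goal):
    """A* on the grid with a Manhattan-distance heuristic; returns the path."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), 0, start)]
    came_from, g = {}, {start: 0}
    while open_heap:
        _, cost, cur = heapq.heappop(open_heap)
        if cur == goal:
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dx, cur[1] + dy)
            if 0 <= nxt[0] < W and 0 <= nxt[1] < H and grid[nxt[1]][nxt[0]] == 0:
                if cost + 1 < g.get(nxt, float("inf")):
                    g[nxt] = cost + 1
                    came_from[nxt] = cur
                    heapq.heappush(open_heap, (cost + 1 + h(nxt), cost + 1, nxt))
    return None

maze = generate_maze()
path = astar(maze, (1, 1), (W - 2, H - 2))
print(f"Path length: {len(path) if path else 'no path'}")
```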
AI instructor Min Choi has also published use cases for MiniMax-M1. Below is a Netflix clone app that can play trailers.
This is wild.
— Min Choi (@minchoi) June 16, 2025
MiniMax-M1 just dropped.
This AI agent = Manus + Deep Research + Computer Use + Lovable in one.
1M token memory, open weights🤯
10 wild examples + prompts & demo:
1. Netflix clone with playable trailers pic.twitter.com/mfpzrCwlot
AI strategist David Hendrickson reports: 'We compared the newly released MiniMax-M1 80B with Claude Opus 4, and MiniMax-M1 outperformed Claude Opus 4 in several benchmarks, especially the long-context-driven benchmarks.'
Compare the newly released MiniMax-M1 80B against Claude Opus 4. MiniMax outperforms Claude Opus 4 in some benchmarks, especially in long-context driven benchmarks. https://t.co/XIiEovh7wB pic.twitter.com/0PgPi7bk3E
— David Hendrickson (@TeksEdge) June 16, 2025
MiniMax is part of a group of AI startups backed by internet giants Tencent and Alibaba that have raised billions of dollars in funding over the past year, but DeepSeek's success has forced most of them to scale back on basic research and focus on applied research.
However, the AI-focused outlet implicator.ai points out that the announcement of MiniMax-M1 suggests a different path: rather than retreating from model development, MiniMax has carved out its own position by focusing on efficiency and open access.
In addition, the training budget for MiniMax-M1 was only $534,700 (about 78 million yen), suggesting that cutting-edge AI models no longer require huge corporate resources.