OpenClaw 2026.2.17: The Big February Update

A major OpenClaw release just dropped — 1M token context, Sonnet 4.6, iOS Share Extension, streaming Slack, smarter crons, and a bunch more. Here's what matters and what I'm actually using.


I've been running OpenClaw on a Hetzner VPS since mid-2025, using it as my always-on AI co-pilot for building Loppisjakten and various side projects. Today I updated from 2026.1.29 to 2026.2.17 — about three weeks and ten releases' worth of changes. Here's what's actually notable and what I'm excited to use.

The Headline: 1M Context Window

This is the big one. Opus and Sonnet now support a 1 million token context window in beta. You opt in with params.context1m: true in your config. For context (pun intended), the previous limit was 200K tokens.
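In config terms, the opt-in might look something like this. The surrounding structure and the `"model"` key are my assumptions; only the `params.context1m: true` setting comes from the release notes:

```json
{
  "model": "opus",
  "params": {
    "context1m": true
  }
}
```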

Why this matters for real workflows: I run an agent that manages a Next.js codebase with hundreds of files, a scraper pipeline, social media automation, and a task board. Context limits have been the main bottleneck — the agent would lose track of earlier conversation, forget file contents, or need things re-explained. A 5x increase in context means longer, more coherent work sessions. Less "can you re-read that file?" and more actual progress.

I haven't stress-tested it yet, but I'm curious to see how it handles a full day of back-and-forth without the usual context degradation.

Sonnet 4.6

A new model dropped alongside the update. Early impressions from the community suggest it's a meaningful step up from 4.5 in code generation and instruction following. I'm still running Opus as my default (the thinking is noticeably better for complex multi-step work), but Sonnet 4.6 is interesting for sub-agents where speed matters more than depth.

iOS Share Extension

This one is subtle but changes the daily workflow. You can now share URLs, text, or images directly from your iPhone to your OpenClaw agent via the iOS share sheet. Reading an article and want your agent to summarize it? Share it. See a competitor's ad you want to analyze? Share the screenshot. Found a loppis event on Facebook? Share the URL and let the agent import it.

It turns your phone into a constant input channel for your AI assistant, which is exactly how it should work.

Smarter Crons

Cron jobs got several upgrades: webhook delivery, per-job usage telemetry, and smart stagger for recurring jobs. The stagger one is practical — if you have five cron jobs all set to run at midnight, they'll now spread themselves out instead of all hitting the API simultaneously. Sounds small, but I've hit rate limits before when my daily scraper, social poster, and backup script all fired at once.
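I don't know how OpenClaw implements the stagger internally, but the idea is easy to sketch: derive a small, deterministic per-job offset so jobs that share a fire time spread out instead of stampeding the API. Everything below (function name, the 300-second window, the job ids) is my own illustration, not OpenClaw's code:

```python
import hashlib

def stagger_offset(job_id: str, window_seconds: int = 300) -> int:
    """Deterministic per-job delay in [0, window_seconds).

    Hashing the job id keeps the offset stable across restarts,
    so each job always fires at the same staggered time.
    """
    digest = hashlib.sha256(job_id.encode()).digest()
    return int.from_bytes(digest[:4], "big") % window_seconds

# Five jobs all scheduled for midnight spread over a 5-minute window.
jobs = ["daily-scraper", "social-poster", "backup", "digest", "cleanup"]
offsets = {job: stagger_offset(job) for job in jobs}
```

A hash-based offset beats random jitter here because it's reproducible: the same job lands at the same staggered time every night, which keeps logs and downstream schedules predictable.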

The per-job telemetry is nice too. I can now see exactly how much each automated task costs, which helps with the "is this automation actually worth the API spend?" question.
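Once you have per-job token counts, answering that question is just arithmetic. A toy sketch — the telemetry numbers and per-million-token prices below are made up for illustration, not real model pricing:

```python
# Hypothetical per-job telemetry: (input_tokens, output_tokens) per day.
telemetry = {
    "daily-scraper": (120_000, 8_000),
    "social-poster": (15_000, 4_000),
}

# Illustrative prices in USD per million tokens (not real pricing).
PRICE_IN, PRICE_OUT = 3.00, 15.00

def monthly_cost(tokens_in: int, tokens_out: int, days: int = 30) -> float:
    """Extrapolate one day of usage to a 30-day month."""
    daily = tokens_in / 1e6 * PRICE_IN + tokens_out / 1e6 * PRICE_OUT
    return round(daily * days, 2)

costs = {job: monthly_cost(i, o) for job, (i, o) in telemetry.items()}
```

From there it's a judgment call: a scraper that saves an hour a day can justify a few dollars a month; a novelty automation probably can't.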

Telegram Improvements

Telegram also picked up a batch of improvements. Since it's my primary interface with Hawkstone (my agent), anything that lands here shows up in my daily workflow immediately.

Memory Search Improvements

The agent's memory system got better search — FTS fallback and query expansion. This means when my agent searches its memory files (daily notes, long-term memory, project context), it's more likely to find relevant information even with imprecise queries. Memory is how the agent maintains continuity between sessions, so better search = better continuity.
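The changelog doesn't say how the fallback is wired, but the shape of the feature is straightforward: try a strict full-text query first, and if it comes back empty, retry with an expanded version (terms OR-ed together with prefix matching). A minimal sketch using SQLite FTS5 — the table name and the expansion rule are my assumptions, not OpenClaw's internals:

```python
import sqlite3

def search_memory(db: sqlite3.Connection, query: str) -> list[str]:
    """Exact FTS match first; fall back to an expanded prefix query."""
    rows = db.execute(
        "SELECT body FROM memory WHERE memory MATCH ?", (query,)
    ).fetchall()
    if not rows:
        # Query expansion: OR the terms together and allow prefix matches,
        # so an imprecise query like "loppis event" can still find a note
        # that only mentions "loppisjakten events".
        expanded = " OR ".join(f"{term}*" for term in query.split())
        rows = db.execute(
            "SELECT body FROM memory WHERE memory MATCH ?", (expanded,)
        ).fetchall()
    return [r[0] for r in rows]

db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE memory USING fts5(body)")
db.execute("INSERT INTO memory VALUES ('Imported loppisjakten events from Facebook')")
```

The strict-then-expanded order matters: precise queries keep their precise results, and the looser matching only kicks in when the alternative is returning nothing.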

What I'm Running

For anyone curious about the setup: OpenClaw on a Hetzner VPS in Helsinki, Opus as the default model, and Telegram as the primary interface.

The update took about 35 seconds — npm i -g openclaw@latest and a gateway restart. Zero downtime if you don't count the 2-second restart window.

The Bigger Picture

What strikes me looking at the changelog is how the product is maturing. Early OpenClaw was "give Claude a terminal and hope for the best." Current OpenClaw is a proper agent platform — scheduled tasks, multi-channel communication, persistent memory, sub-agent orchestration, browser automation. The gap between "AI chatbot" and "AI team member" keeps getting smaller.

The 1M context window is the most important change because it addresses the fundamental limitation of LLM-based agents: they forget. The bigger the context, the more coherent and useful the agent becomes over long work sessions. We're not there yet — true persistent state would be transformative — but 1M tokens is a significant step.

If you're running OpenClaw, update. If you're curious about the setup, check out my Hetzner setup guide or how I built Loppisjakten with an AI co-pilot.


Running OpenClaw 2026.2.17 on Hetzner, Helsinki. Managed by Hawkstone 🦅
