Plenty of heat in the AI lab this week. Anthropic dropped Opus 4.7 and reclaimed the frontier. Stanford's 2026 AI Index landed with hard numbers on how fast adoption is moving. And OpenAI opened a new research front with GPT-Rosalind. Here's what's worth your attention.
TL;DR
- Claude Opus 4.7: Anthropic's new flagship (16 April) lifts coding benchmarks 13%, adds high-resolution vision, and ships a new tokenizer at unchanged pricing.
- Stanford AI Index 2026: Generative AI hit 53% population adoption in three years, faster than the PC or the internet. Organisational adoption reached 88%.
- GPT-Rosalind: OpenAI's first life-sciences model launched 17 April, targeting drug discovery with 50+ connected scientific tools.
- Perplexity Personal Computer: Mac-native AI agent went live 16 April for Max subscribers, turning a Mac mini into an always-on desktop orchestrator.
- NVIDIA Isaac GR00T N1: Open humanoid robotics foundation model landed 14 April, with synthetic data pipelines lifting performance 40%.
- Subliminal learning in Nature: Anthropic-backed research shows LLMs can transmit behavioural traits through semantically unrelated data.
Claude Opus 4.7 Retakes the Frontier
Anthropic released Claude Opus 4.7 on 16 April, narrowly retaking the lead for the most powerful generally available LLM. The update delivers a 13% lift on coding benchmarks, resolves 3x more production tasks, adds first-ever high-resolution vision support up to 3.75 megapixels, and introduces a new tokenizer, all at the same $5 / $25 per MTok pricing as Opus 4.6.
The tokenizer change is the interesting one for anyone running AI at scale. Opus 4.7 uses smaller, more literal tokens, which sharpens instruction-following and character-level tasks but maps the same input to roughly 1.0 to 1.35x as many tokens. In practice, your bill could creep up by as much as 35% on text-heavy workloads, even though the per-token price is flat. Anthropic says the tradeoff is tighter attention over individual words, useful for tool calls, structured output, and multi-step agents.
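The back-of-envelope maths is easy to sanity-check yourself. A minimal sketch, using the $5 / $25 per MTok prices and the up-to-1.35x inflation factor quoted above; the workload numbers are hypothetical:

```python
def monthly_cost_usd(input_tokens, output_tokens,
                     price_in=5.0, price_out=25.0,
                     inflation=1.0):
    """Estimate monthly spend at $/MTok prices, scaling token
    counts by a tokenizer inflation factor."""
    scaled_in = input_tokens * inflation
    scaled_out = output_tokens * inflation
    return (scaled_in * price_in + scaled_out * price_out) / 1_000_000

# Hypothetical workload: 200M input / 20M output tokens per month
baseline = monthly_cost_usd(200e6, 20e6)                  # old tokenizer
worst = monthly_cost_usd(200e6, 20e6, inflation=1.35)     # 35% more tokens
print(f"baseline ~${baseline:,.0f}/mo, worst case ~${worst:,.0f}/mo")
```

Same sticker price, materially different invoice, which is why the flat pricing headline deserves a second look before you migrate a text-heavy pipeline.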
Opus 4.7 also introduces "task budgets," a rough estimate of how many tokens to target for a full agentic loop including thinking, tool calls, tool results, and final output. For teams running long-horizon agents, this is a meaningful planning primitive.
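In planning terms, a task budget is just a token cap you split across the phases of the agentic loop. A hypothetical sketch of that split; the phase names and weights are illustrative, not Anthropic's API:

```python
def allocate_budget(total_tokens, weights):
    """Split a task budget across agent-loop phases in
    proportion to the supplied weights."""
    total_weight = sum(weights.values())
    return {phase: int(total_tokens * w / total_weight)
            for phase, w in weights.items()}

# Illustrative split for one long-horizon agent run
budget = allocate_budget(120_000, {
    "thinking": 4,       # extended reasoning
    "tool_calls": 2,     # requests sent to tools
    "tool_results": 3,   # payloads returned by tools
    "final_output": 1,   # the answer itself
})
```

The point of the primitive is that each phase gets an explicit allowance up front, so a runaway tool loop burns its own allocation rather than the whole run's.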
Availability is broad: Claude products and API, Amazon Bedrock, Google Cloud's Vertex AI, and Microsoft Foundry. GitHub Copilot rolled it out the same day.
Why it matters: the frontier race between Anthropic, Google, OpenAI, and xAI is now measured in weeks rather than months, and each release chips away at whichever benchmark the previous leader owned. For marketers building content pipelines, code assistants, or agentic workflows, the practical takeaway is to benchmark Opus 4.7 against your current stack on real tasks, not on leaderboard scores. The model that wins on your workflow may not be the model that wins on a public benchmark.
Stanford AI Index 2026 Lands
Stanford HAI dropped its annual AI Index on 13 April, and the numbers are hard to ignore. Generative AI hit 53% population adoption within three years, faster than the PC or the internet. Organisational adoption reached 88%. The estimated value of generative AI tools to US consumers alone hit $172 billion annually, with the median per-user value tripling between 2025 and 2026.
The report also exposes a yawning gap between expert and public sentiment. Only 10% of Americans say they're more excited than concerned about AI; 56% of AI experts feel that way. On medical care, 84% of experts think AI will help versus 44% of the public. On jobs, it's 73% versus 23%.
Two findings are directly relevant to marketing and business. First, AI agents jumped from 12% to 66% success on real computer tasks in a single year, now roughly as capable as a human at navigating software. Second, adoption varies massively by country: Singapore leads at 61%, the UAE at 54%, while the US sits 24th at 28.3%. Australia didn't crack the top ten, which should prompt some uncomfortable conversations in local boardrooms.
The full report is a long read, but the 12-takeaway summary from Stanford HAI is the fastest way in.
OpenAI Ships GPT-Rosalind
OpenAI launched GPT-Rosalind on 17 April, its first model in a new Life Sciences series. Named after chemist Rosalind Franklin, the model is fine-tuned for genomics, protein engineering, and chemistry, and it ships with a Life Sciences plugin for Codex that connects to more than 50 specialised scientific tools and data sources.
Access is restricted. Early customers include Amgen, Moderna, the Allen Institute, and Thermo Fisher Scientific. OpenAI is gating it through a "qualified customer" program with governance, security, and oversight requirements designed to limit biological misuse. That's a meaningful shift: the frontier labs are now shipping vertical, tightly permissioned models rather than racing only on general-purpose capability.
For Australian businesses the immediate signal is less about drug discovery and more about where this pattern goes next. If OpenAI, Anthropic, and Google start releasing specialised, restricted-access models for legal, financial, and medical work, the competitive moat for domain expertise shifts again. Mid-market firms that quietly access frontier-grade, domain-tuned reasoning will outrun those still running generic prompts.
Quick Hits
- Perplexity Personal Computer (Mac): Launched 16 April for $200/month Max subscribers. Turns a Mac mini into an always-on AI agent that orchestrates across local files, apps, and browser. MacRumors
- NVIDIA Isaac GR00T N1: Open humanoid robot foundation model announced 14 April. Synthetic plus real data combo lifted performance 40%. NVIDIA
- Subliminal learning in Nature: Anthropic-backed research shows LLMs can transmit behavioural traits like preferences or misalignment through semantically unrelated training data. Anthropic
- OpenAI-Cerebras $20B deal: OpenAI commits to $20B+ over three years on Cerebras chips, potentially taking up to a 10% equity stake. A direct shot at Nvidia's inference dominance. The Information
- Google's TurboQuant at ICLR 2026: New compression algorithm cuts AI memory overhead caused by the KV cache using PolarQuant rotation and Quantized Johnson-Lindenstrauss compression. Radical Data Science
- Nature: AI agents trail human PhDs: Frontier AI agents performed only half as well as human experts on complex scientific workflows, even as researchers increasingly rely on them. Nature
- Workday Recognition launches: AI-driven employee recognition tool built with Achievers, analysing patterns to identify top contributors and behaviours that drive performance. HRM Outlook
- Nebraska passes LB 525: The Conversational AI Safety Act requires services to disclose when they're AI and regulates minors' chatbot interactions. Troutman
The ClickedOn Take
Three things stand out from this week's run of news, and all three point in the same direction for Australian businesses.
First, frontier model capability now shifts faster than most content calendars. Opus 4.7 is the third "new leader" announcement this year. If your AI strategy depends on a single model or single vendor, you've already fallen behind the people benchmarking weekly.
Second, Stanford's 88% organisational adoption figure hides a brutal distribution. PwC's AI Performance Study showed 20% of companies capture roughly three-quarters of the economic gains from AI. "We use ChatGPT" is not a strategy. Clearly defined use cases, measurable outputs, and disciplined rollouts are.
Third, generative engine optimisation is no longer a 2027 problem. With 53% of the population using generative AI and ChatGPT referral traffic concentrating in a small number of domains, the brands that show up in synthesised answers this year will compound. The brands that don't, won't.
The playbook is pragmatic, not dramatic. Pick one high-value workflow, instrument it, and benchmark Opus 4.7, Gemini 3.1, and GPT-5.4 against it this fortnight. Whichever wins, wire your content and structured data for AI-first discovery so you're citable, not just rankable. That's where the next 12 months of compounding returns live.
Tool of the Week
Perplexity Personal Computer (Mac) →
Installs as a Mac-native AI agent that can see your active apps, read local files, and execute multi-step tasks across your desktop via voice or text. Activate with a double-tap of the Command key. It's the first consumer-grade always-on desktop agent with serious research capability built in. Pair it with a cheap Mac mini and you have a persistent machine that can run research, draft content, and move data between apps without you touching the keyboard. Caveats: $200/month Perplexity Max subscription only, macOS 14 Sonoma or later required. Test against your sensitive-data policies before pointing it at client folders.
Sources This Week
VentureBeat, Stanford HAI, OpenAI, MacRumors, NVIDIA, Nature, The Information, Anthropic, HRM Outlook, Troutman Pepper, Radical Data Science, MIT Technology Review