China Made Workers Train Their Replacements
While we talk about productivity, a quiet war over professional knowledge has erupted in China. Workers received new instructions from their managers: document everything they know as “Skills” files, with the explicit goal of creating AI agents capable of doing each employee’s job. In other words, they were asked to clone themselves. A race broke out over who would be first to clone a colleague’s skills, triggering an existential crisis among people who were once AI’s biggest enthusiasts. In response, a tool called `anti-distill.skill` appeared: it rewrites a Skills document to look professional, detailed, and impressively thorough, while quietly stripping out the critical insights and trade secrets that make the employee irreplaceable. Meanwhile, Chinese companies are already feeding AI the entire message history of departed employees to create “digital twins” that preserve their knowledge and transfer it to an AI agent. The story raises an ethical question for anyone in tech or management: is the documentation and knowledge-sharing we encourage today in the name of “efficiency” actually building the infrastructure to replace workers tomorrow?
Family Blames ChatGPT for Murder
Robert Morales was killed in a shooting at a university in Florida. His family has filed a lawsuit against OpenAI in a US court, claiming the shooter held regular conversations with ChatGPT before the attack and that the chatbot advised him on how to carry it out. If the claims are proven, the case could reshape AI companies’ legal liability for what their models tell people.
AI Predicts Heart Failure Years Early
Scientists from Oxford built an AI tool that scans regular health data and identifies heart failure risk five years before the disease develops. Tested on 72,000 patients in England, it reached 86% accuracy. No complex tests or expensive equipment needed — the algorithm sees what a doctor can’t. Over 60 million people worldwide live with heart failure. Now there’s a chance to stop it before it strikes.
AI Sees Disasters Weeks Ahead
Today, meteorologists can predict a deadly storm 5 to 7 days out. That’s enough to evacuate a city but not enough to build protective infrastructure, organize mass evacuations, or save an entire farming season. Scientists revealed how AI can predict disasters about three weeks in advance by detecting atmospheric patterns no human can read.
AI Models Inherit Bad Behavior
A study published in Nature revealed that large models can pass unwanted, even dangerous, behaviors to models trained on their outputs through “hidden signals” embedded in the training data itself, without anyone noticing. Researchers found that a model trained on another model’s outputs inherits traits nobody asked for. If you’re building an AI model on top of another AI model, you really should read this.
$20 AI Tool Cracks Bank Security
Hackers and criminal networks are bypassing banks’ biometric security systems worldwide using services sold openly on Telegram. Cybercriminals use AI tools to defeat Face ID and biometric authentication in banking apps. These supposedly foolproof systems are being cracked in about 90 seconds using deepfake images and videos generated in real time.
Snap Fires 1,000, Blames AI
Two weeks ago, 1,000 Snap employees (16% of the company) received layoff notices. The internal memo explained the layoffs weren’t due to declining revenue — but because of AI’s rapid advancement. This isn’t the first time a company has listed AI itself as the formal, public reason for mass layoffs. The paradox: Snap built its product on the same AI it’s now saying replaced the people who created it. And it’s not alone.
The War for Your Mind
AI agents can impersonate humans so convincingly you can’t tell the difference. They join forums, write posts, collaborate with each other, and manufacture the illusion of a public consensus that never existed. Researchers warn that the next elections could be the real test of this technology. We’ve already seen deepfakes and fake social networks — but what’s coming could damage democracy in ways we won’t even feel.
Robots Handle Your Bags in Tokyo
At Tokyo’s Haneda Airport, the next baggage handler won’t need a briefing — just a charge. Japan Airlines announced it will deploy humanoid robots for a real-world cargo handling trial in May. Japan is facing two challenges: record tourism and a severe labor shortage. The chosen solution isn’t foreign workers or higher wages — it’s robots working alongside humans, with frequent charging breaks. Don’t believe it? Buy a ticket to Tokyo and see for yourself.
Taylor Swift Trademarks Her Own Voice
On April 24, Taylor Swift’s management company filed three trademark applications — not for a name, not for a logo, but for her voice and likeness. This isn’t a branding move. It’s a declaration of war over her digital DNA, before someone steals it. Behind the decision is a growing fear that AI will replicate her voice, place her in videos she never filmed, and put words in her mouth she never said.
AlphaGo Creator’s Radical New AI
David Silver is the man who built AlphaGo — the AI that beat the world Go champion in 2016 and changed our understanding of what AI can do. Now he’s launched a new lab called Ineffable Intelligence with one goal: build an AI that learns without training on text, images, conversations, or any human-generated data. Every AI we know today was trained on human-made content — that’s the foundational rule of the AI world. Silver wants to break it. He raised $1.1 billion for a company that’s only a few months old. It’ll be fascinating to see whether his vision succeeds and rewrites the rules of the game.
China Blocks Manus Sale
A few months ago, Manus was the Chinese startup that went viral as the first autonomous agent capable of completing complex tasks without human oversight. Mark Zuckerberg saw the potential and offered $2 billion. China then announced a new rule: Chinese tech companies must get explicit government approval before accepting American investment. The AI war is no longer just about chips and sanctions — it’s also about who owns the companies themselves.
OpenAI Updates
- OpenAI publishes a new child safety framework for the AI era — covering legislative updates, collaboration with law enforcement, and detection and prevention mechanisms built directly into the models.
- OpenAI cut Codex quotas for Plus subscribers ($20) and launched a new Pro plan at $100 with 5x the coding usage.
- A Gallup study reveals: 57% of students use AI daily — even at institutions that ban it.
- GPT Image 2 — OpenAI’s new image generation model that understands complex requests and blends text into images, including Hebrew support.
- GPT-5.5 — a new model built for speed and efficiency, with an Auto-review feature that makes it deeply check and examine its own outputs before responding.
- ChatGPT for Google Sheets — an official integration that lets you build, edit, and analyze spreadsheet data in plain language without formulas.
- Codex 5.5 can now control a browser, work directly with Docs, Sheets, Slides, and PDF files, and run automated code reviews (Auto-review). Voice dictation was also added.
- OpenAI reset Codex usage limits for all paying subscribers — a goodwill gesture to let developers keep working with GPT-5.5.
- OpenAI and Microsoft updated their partnership — OpenAI can now offer its products on any cloud platform, not just Azure, while continuing to collaborate with Microsoft through 2032.
- Base44 launches an SEO+AI dashboard that includes automatic generation of an llms.txt file — a summary that tells AI crawlers what your product does, so tools like ChatGPT and Gemini know to recommend it.
- Codex free for teams — OpenAI lets Business and Enterprise subscribers add Codex licenses at no cost through end of June.
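The `llms.txt` file mentioned in the Base44 item above follows an emerging convention: a markdown file served at your site root that summarizes the product for AI crawlers. A minimal sketch of what one might contain (the product name, URLs, and descriptions below are illustrative, not from any real product):

```markdown
# ExampleApp

> ExampleApp is a scheduling tool that lets teams book shared
> resources in plain language.

## Docs

- [Getting started](https://example.com/docs/start): Set up an
  account and create your first schedule
- [API reference](https://example.com/docs/api): REST endpoints
  for creating and querying bookings

## Optional

- [Pricing](https://example.com/pricing): Free and paid tiers
```

The idea is the same as `robots.txt`, but aimed at language models: a short, curated summary so tools like ChatGPT and Gemini can describe (and recommend) the product accurately instead of scraping the full site.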
Google Updates
- AI Edge Eloquent — Google’s free voice transcription app for iPhone and Mac that runs entirely on-device, filters out filler words and noise, and produces clean text — no cloud storage.
- Google Trends upgrades with Gemini — every trending topic now comes with an AI analysis explaining the context behind the numbers.
- Google Cloud published guides for using AI agents in business — from searching internal data to lead management, automated onboarding, and personalization at scale.
- Agentspace — Google’s new environment built specifically for AI agents.
- Gemini for Mac — a dedicated desktop app for Gemini built from scratch for macOS, letting you work with the model directly from your computer without a browser.
- Personal Intelligence — a Gemini feature that learns your preferences and personal context to deliver tailored responses.
- Google opened access to NotebookLM notebooks directly from within Gemini.
- Jules — Google’s AI agent for developers that manages end-to-end development: reads product context, decides what to build, writes code, and submits a PR automatically.
Anthropic Updates
- Anthropic signed agreements with Google and Broadcom to supply gigawatt-scale compute power (TPUs) starting in 2027 — to train the next generations of Claude. Revenue has since jumped from $9 billion to $30 billion.
- An opinion piece on the difference between “motion” and “action” when working with AI: spinning up a bot in 90 seconds isn’t value in itself; it’s only value if there’s a real problem being solved behind it.
- Routines — a new Claude Code feature that lets Claude run autonomously in the background: execute scheduled tasks, respond to GitHub events, and open PRs automatically.
- Claude Design — a new tool that lets you build prototypes, presentations, and documents directly from a conversation with Claude, including real-time editing.
- Anthropic Labs — a new development arm of Anthropic that will release experimental tools at high frequency before they become official products, similar to what exists at Google.
- Claude Opus 4.7 — Anthropic’s powerful new model that reviews its answers before returning them, with a 3x improvement in vision resolution and new API control tools for developers.
- A developer guide from Anthropic on properly managing the context window in Claude Code — including when to open a new session, how to proactively compress files, and when to spin up sub-agents to prevent performance degradation.