AI DUNN Right Weekly - Issue #23
Practical AI insights for business growth

Hey AI Innovators! 👋
This was a heavy week.
Anthropic rewrote their safety policy the day before a Pentagon deadline. AI stopped being something you talk to and became something that just gets on with the job. Notion launched agents that work 24/7. And a company with two developers generated $3M in 48 hours using a no-code AI builder.
Here's what you need to know:
- Anthropic quietly changed a core part of their safety policy right before a Pentagon ultimatum - and the timing says everything
- Claude Cowork is now doing real agentic work for non-technical users - logging into dashboards and collecting invoices on your behalf
- Notion launched custom AI agents that run 24/7 across your connected tools without you lifting a finger
- A company built a full premium product with two developers in two weeks and generated $3M at launch
- Gemini on Android can now place your food order, book a rideshare, and handle multi-step tasks across apps
Read time: 5 minutes
Big Story
Anthropic blinked. Here's why it matters for all of us.
This one is uncomfortable to write. But you need to know about it.
For two years, Anthropic has been the AI company with a conscience. The one that hired ethicists and philosophers. The one that built safety into their mission from day one. The poster child for responsible AI development.
This week, that reputation took a serious hit.
Here's what happened...
On Tuesday, Anthropic published version three of their Responsible Scaling Policy. The headline change: they removed their commitment to pause training more powerful AI models if those models became too dangerous to control safely.
That commitment was kind of the whole point of the original policy.
The day before they published this, US Defence Secretary Pete Hegseth gave Anthropic's CEO Dario Amodei an ultimatum. Roll back your safety guardrails, or lose a $200 million Pentagon contract and get flagged as a supply chain risk.
The next day, the policy changed.
Anthropic says the timing is a coincidence and the change is about staying competitive. Nobody is really buying that.
Now here's where it gets more nuanced...
There's an argument that the old policy was never actually workable. If Anthropic pauses development because a model becomes too risky, OpenAI and xAI just carry on. If all American labs pause, Chinese labs carry on. The only way it works is if every lab in the world agrees simultaneously, which is not happening.
So maybe the policy was always more symbolic than practical.
But here's the thing. Symbols matter. Timing matters. And the message it sends to the rest of the industry matters even more.
Google, OpenAI, and xAI all took the same government contracts. None of them made a public fuss. None of them drew a line. Their silence tells you something about what they already agreed to behind closed doors.
My take on this? Stay informed. Ask questions. And keep building your own AI skills, because the more you understand these tools, the better equipped you are to think critically about them.
What's New This Week
AI stopped asking for permission this week
We crossed a line this week. Not a dramatic, sci-fi line. A quiet one. The kind you only notice when you look back.
AI stopped being something you talk to. It became something that just... does things.
Kyle Balmer from AI with Kyle said it best in his Saturday newsletter. He gave Claude Cowork a spreadsheet of invoices he needed for his bookkeeper. Some came by email. Others were buried in dashboards of services that don't send invoices at all.
So what did Cowork do?
It went into his Gmail and found the 13 companies that email invoices. Then it logged into the other 20 or so services that don't, pulled the invoices from their dashboards, organized everything into a labeled folder, and sent the whole lot to his bookkeeper.
He didn't touch it.
That's not a chatbot. That's an agent.
And here's the line Kyle used that I haven't been able to stop thinking about...
In 2025, we talked to AI and it told us how to do things.
In 2026, it just does the thing.
Claude Cowork is designed specifically for non-technical people. No code. No complex setup. You describe the task and it works off your computer to get it done.
If you've been watching "agentic AI" as a buzzword from a safe distance... that distance just got a lot smaller.
Notion launched custom agents that run 24/7
Notion dropped Custom Agents this week and it flew under the radar.
These are AI teammates you configure in plain language. They run on schedules or triggers across your connected tools like Slack, email, calendar, and Figma. No manual prompting.
No babysitting.
Here's the number that stopped me: Notion says it now uses more AI agents internally than it has employees.
Business and Enterprise users get access for the next two months. If you're already in Notion, this is worth testing right now.
A no-code app builder just proved the game has changed
Brazil's largest edtech company, Qconcursos, needed a premium product fast.
Two developers. Two weeks. $3 million in 48 hours after launch.
They used Lovable, an AI app builder where you describe what you want in plain English and it generates production-ready code. They built a premium education tier, an AI homework solver, and integrated a 4-million question database.
With two people.
The excuse of "we don't have the resources to build that" is getting harder and harder to defend.
Gemini on Android is now doing things, not just answering things
Google rolled out a beta update to Gemini on Android this week.
You can now tell it to handle multi-step tasks across apps. Order food. Call a rideshare. Manage grocery delivery based on your past orders. You state what you want, and Gemini figures out which apps to open and takes the actions.
Currently in beta on Pixel 10 and Samsung Galaxy S26 in the US and Korea. But the direction is clear: AI assistants are becoming AI executors.
Tool of the Week
Claude Cowork
I'd be doing you a disservice if I highlighted anything else this week. Let's expand...
Claude Cowork is Anthropic's agentic desktop platform built specifically for non-technical users. Think of it as Claude's less intimidating sibling that actually gets on with the work.
Here's what makes it different from just using Claude in the browser...
It works off your actual computer. It can access your files, log into your apps, navigate dashboards, and take actions on your behalf. You don't supervise every step. You give it a task and come back when it's done.
Real use cases that are working right now...
- Collecting invoices from email inboxes and service dashboards (yes, like Kyle's bookkeeping story above)
- Organizing files and sending them to the right people
- Handling repetitive admin tasks that follow a predictable pattern
This is the tool that closes the gap between "AI helps me think" and "AI handles the work."
Worth noting: Anthropic also just expanded Cowork with enterprise plugins connecting Claude directly to Google Workspace, DocuSign, WordPress, and more. Admins can build private plugin marketplaces for their teams.
If you haven't tried Cowork yet, this is the week to start.
Quick Hits Worth Your Time
→ Block (Jack Dorsey's company) cut over 4,000 jobs - 40% of its workforce - replacing them with AI. Their stock jumped 20% the same week. Worth paying attention to what that signals for task-based roles.
→ Someone accidentally gained access to 7,000 robot vacuums across 24 countries and could see through their cameras. He was just trying to control his hoover with a game controller, using Claude Code. A good reminder that powerful tools cut in all directions.
→ Anthropic raised $30 billion in February, the second-largest tech funding round ever. Annual revenue is now $14 billion, with 80% from enterprise clients. The Pentagon story makes a lot more sense in that context.
→ Google acquired ProducerAI and integrated it with their music generation model Lyria 3. You can now generate full songs and custom instruments from a text prompt. Licensed AI music tools are getting very real, very fast.
→ Mercury 2 from Inception Labs runs at over 1,000 tokens per second. That's dramatically faster than most models available today. Speed matters more than people realize when you're running automated workflows.
Prompt of the Week
This one is for anyone who wants to start handing tasks off to AI but doesn't know where to begin...
Act as an operations specialist. Help me identify which tasks in my workday are best suited for AI automation right now.
Essential Details:
- My role: [YOUR JOB TITLE OR BUSINESS TYPE]
- Tools I currently use: [LIST YOUR APPS]
- Tasks I do most often: [LIST 5-10 REPEATED TASKS]
- Tasks I dread most: [BE HONEST]
- Time I spend on admin vs. real work: [ROUGH ESTIMATE]
- AI tools I have access to: [ChatGPT / Claude / Gemini]
Analyze my situation and give me:
1. Top 3 tasks I should automate first (and why)
2. The specific AI tool or approach for each one
3. A realistic time estimate for setting each one up
4. What I should NOT automate yet (and why)
5. One thing I can try in the next 30 minutes to get started
Why this works: Most people know AI can help them. They just don't know where to start. This prompt forces specificity and gives you a prioritized action list instead of a vague "use AI more" suggestion. That last question is the most important one. Momentum beats perfection every single time.
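If you plan to reuse this prompt regularly, you can keep it as a template and fill in the blanks with a few lines of code. Here's a minimal Python sketch; the field names and example values are my own placeholders, not tied to any particular AI tool:

```python
# A reusable copy of the automation-audit prompt, with named placeholders
# standing in for the bracketed fields above.
TEMPLATE = """Act as an operations specialist. Help me identify which tasks \
in my workday are best suited for AI automation right now.

Essential Details:
- My role: {role}
- Tools I currently use: {tools}
- Tasks I do most often: {frequent_tasks}
- Tasks I dread most: {dreaded_tasks}
- Time I spend on admin vs. real work: {admin_split}
- AI tools I have access to: {ai_tools}

Analyze my situation and give me:
1. Top 3 tasks I should automate first (and why)
2. The specific AI tool or approach for each one
3. A realistic time estimate for setting each one up
4. What I should NOT automate yet (and why)
5. One thing I can try in the next 30 minutes to get started"""


def build_prompt(**details: str) -> str:
    """Return the full prompt with every placeholder filled in."""
    return TEMPLATE.format(**details)


# Example values - swap in your own.
prompt = build_prompt(
    role="freelance marketing consultant",
    tools="Gmail, Notion, Google Sheets",
    frequent_tasks="invoicing, weekly reports, scheduling calls",
    dreaded_tasks="chasing late invoices",
    admin_split="roughly 40% admin, 60% client work",
    ai_tools="ChatGPT / Claude",
)
print(prompt)
```

Paste the printed result into whichever assistant you use. Keeping the template in one place means you can rerun the audit every few months as your toolset changes.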
My Take
I've been sitting with the Anthropic story all week.
Because here's the thing. When I teach AI to beginners, one of the first things I talk about is trust. Trusting the tool enough to try it. Trusting your own judgment enough to know when something doesn't look right.
This week reminded me that trust works in multiple directions.
We trust these tools. We build our workflows around them. We teach other people to use them. And the companies behind them are making decisions that we don't always get to see clearly.
Anthropic changing their safety policy under government pressure is not a small thing. It's easy to scroll past because it feels like it's happening somewhere far away, in boardrooms and policy documents that don't affect your daily work.
But it's not far away. The values baked into these companies shape what the tools do, who has access to them, and what they're used for.
I'm not saying stop using Claude. I use it every single day and I'm not planning to stop.
What I am saying is this: stay curious. Stay critical. Don't just ask "how do I use this?" Ask "who built this, and what decisions did they make to get here?"
That kind of thinking is exactly what separates someone who uses AI confidently from someone who just goes along for the ride.
Keep learning. Keep asking questions. That's the whole point of this newsletter.
See you next week.
Jackie