OpenAI Doubles Power With 4.1
Plus, 📊 Google Sheets Just Got an “=AI” Formula — Here’s How to Use It, Seaweed: ByteDance’s Efficient Video AI, and more!

Hello there! Ready to dive into another uplifting, mind-boggling, and value-filled Shift?
Day 1756 of ChatGPT Trolling me :)
Today we have:
🤖 GPT-4.1: OpenAI’s Latest Leap in AI Coding and Context Mastery
📊 Google Sheets Just Got an “=AI” Formula — Here’s How to Use It
🎥 Seaweed: ByteDance’s Efficient Video AI
🏆 Tools and Shifts you Cannot Miss
🤖 GPT-4.1: OpenAI’s Latest Leap in AI Coding and Context Mastery
OpenAI has officially launched GPT-4.1, a powerful new family of models including GPT-4.1, 4.1 Mini, and 4.1 Nano—designed to outperform GPT-4o in nearly every area. The release promises breakthroughs in large-context comprehension, real-world coding, and affordability for developers and enterprises alike.
The Shift:
Massive Context, Smaller Price - All GPT-4.1 models can handle up to 1 million tokens—roughly 750k words—up from GPT-4o’s 128k, enabling deeper context tracking in long tasks. GPT-4.1 is also 26% cheaper than GPT-4o, with the Nano variant being OpenAI’s most affordable and fastest model to date.
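The context and pricing claims above are easy to sanity-check with back-of-envelope arithmetic. The sketch below is illustrative only: it uses the common rough heuristic of ~4 characters per token (an assumption, not an official tokenizer) and a placeholder GPT-4o input price, not OpenAI's published rate — only the 26% discount and the 1M/128k window sizes come from the announcement.

```python
# Back-of-envelope helpers for the context-window and pricing claims.
# Assumptions (not official figures): ~4 characters per token, and a
# placeholder GPT-4o input price of $2.50 per million tokens.

GPT41_CONTEXT = 1_000_000   # tokens, per the announcement
GPT4O_CONTEXT = 128_000     # tokens

def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Crude token estimate from character count."""
    return int(len(text) / chars_per_token)

def fits_in_context(text: str, context: int = GPT41_CONTEXT) -> bool:
    """Does this text fit in the given context window (by rough estimate)?"""
    return estimate_tokens(text) <= context

def gpt41_cost(tokens: int, gpt4o_price_per_m: float = 2.50) -> float:
    """Apply the quoted 26% discount to a placeholder GPT-4o price."""
    return tokens / 1_000_000 * gpt4o_price_per_m * (1 - 0.26)

doc = "word " * 700_000                      # roughly novel-series scale
print(fits_in_context(doc))                  # True  (within GPT-4.1's 1M window)
print(fits_in_context(doc, GPT4O_CONTEXT))   # False (far past GPT-4o's 128k)
```

With the placeholder price, a full 1M-token input would cost `gpt41_cost(1_000_000)` = 1.85 dollars versus 2.50 at the assumed GPT-4o rate; swap in real pricing from OpenAI's pricing page for actual budgeting.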
Best-in-Class for Real-World Coding Tasks - GPT-4.1 is optimized for software engineering, from front-end formatting to tool usage and bug testing. It scored 54.6% on SWE-bench Verified, outperforming previous OpenAI models but trailing Google’s Gemini 2.5 Pro and Anthropic’s Claude 3.7 on the same benchmark.
Coding Agents Are the Goal - OpenAI envisions these models as stepping stones toward autonomous software agents that can code apps end-to-end—including QA, documentation, and debugging.
Smarter, Not Perfect - While 4.1 excels at instruction following and coding structure, it becomes less reliable with ultra-long prompts—accuracy drops from 84% at 8k tokens to 50% at 1 million. It also tends to be more literal than GPT-4o, needing highly specific inputs to function best.
GPT-4.1 marks a major leap in scalable AI coding and context comprehension, giving developers more power at lower costs. As OpenAI pivots to “agentic software engineers,” these tools could reshape how we build software, bringing us closer to fully autonomous dev teams.
📊 Google Sheets Just Got an “=AI” Formula — Here’s How to Use It
Insights from Paul Couvert
Google Sheets has quietly introduced a game-changing feature: the =AI formula powered by Gemini. This new function lets you perform high-level data analysis and language processing directly within your spreadsheet — no add-ons, no scripting, just smart, natural language queries.
🔧 What You Can Do with =AI
1. Categorize Text Instantly
Forget manual lookups or logic chains — =AI can classify data by context.
Example: =AI("Is this a basketball or baseball team?", A2)
2. Run Sentiment Analysis
You can now detect the tone of text — something previously outside spreadsheet capabilities.
Example: =AI("Classify this sentence as positive or negative.", A2)
3. Summarize Long Inputs
Turn messy input into clean summaries for reports, updates, or dashboards.
Example: =AI("Summarize this in one short sentence.", A2)
🧠 Why This Matters
This moves Google Sheets from a data tool to a true AI assistant. You can clean, analyze, and understand data contextually — without ever leaving your spreadsheet.
If you work with words, customer feedback, survey responses, or content at scale, =AI is the formula you didn’t know you needed.
🎥 Seaweed: ByteDance’s Efficient Video AI
ByteDance has released Seaweed, a 7B-parameter video generation model that punches well above its weight—outperforming much larger models like Kling 1.6, Google Veo, and Wan 2.1. Despite using significantly less compute, it delivers high-quality results in text, image, and audio-driven generation.
The Shift:
Small Model, Big Impact - Seaweed uses the compute of just 1,000 H100 GPUs but ranks among the top in human evaluations—especially in image-to-video benchmarks. It can generate native 20-second clips (extendable to one minute) and has been fine-tuned for realistic human motion, lip syncing, and lifelike animation.
Multi-Modal Power & Control - The model supports text-to-video, image-to-video, and audio-driven synthesis, with conditioning options using reference images or frame anchors. It allows precise camera movements and storytelling over multiple scenes, offering creators frame-by-frame control over aesthetics, pacing, and transitions.
Built for Real-Time, Versatile Use - Seaweed runs in real time at 1280×720 resolution and 24fps, making it ideal for live rendering or interactive content. It also supports upscaling to 2K QHD, training on CGI for better physical realism, and synchronized audio generation for full-scene coherence.
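For a sense of scale, the real-time spec above implies the following raw throughput. This is simple arithmetic on the quoted numbers (1280×720, 24fps, native 20-second clips), not a statement about Seaweed's internals.

```python
# Illustrative arithmetic on the quoted Seaweed output specs.
WIDTH, HEIGHT, FPS = 1280, 720, 24
CLIP_SECONDS = 20

frames = FPS * CLIP_SECONDS                 # frames in one native clip
pixels_per_frame = WIDTH * HEIGHT           # 720p frame size in pixels
pixels_per_second = pixels_per_frame * FPS  # pixels synthesized per second

print(frames)             # 480 frames per 20-second clip
print(pixels_per_second)  # 22,118,400 pixels generated per second of video
```

Sustaining roughly 22 million pixels per second in real time is what makes the model practical for live rendering and interactive content rather than offline batch generation.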
China’s AI video labs are dominating innovation, and Seaweed proves that smaller, efficient models can still outperform heavyweights. It opens the door to scalable, real-time, high-fidelity video creation without massive infrastructure—bringing top-tier generative video within reach for creators everywhere.
🚀 AI Tools for the Shift
🚀 Motion Expert Agents – Instantly review creatives with AI workflows built by top-tier ad strategists. Get your early access now!
💬 Askful – Turn your product pages into conversion machines by instantly answering customer questions.
🔍 PeopleAlsoAsk – Discover real questions people ask and turn them into high-ranking, resonant content.
📥 CustomerIQ – Let AI handle your inbox, meeting notes, CRM updates, and email drafts.
🤖 Agionic – Add a smart, conversion-focused AI chat agent to your website in minutes, engage visitors, and drive sales with zero code.
⚡ Quick Shifts
🛍️ Apple is testing a new method to improve its AI without directly accessing user data. By comparing synthetic samples against user messages on-device, only selection signals—not actual content—are shared. The 95% specificity achieved in prior, related AI agent tasks highlights the potential precision of such privacy-focused systems.
🎈 Google has developed an AI model named DolphinGemma, trained on wild Atlantic spotted dolphin sounds, to analyze vocal patterns and predict sequences. The goal is to assist researchers in decoding dolphin communication and understanding the structure behind their acoustic signals.
🎫 Hugging Face has acquired Pollen Robotics, creator of the humanoid robot Reachy 2, to expand its open-source robotics work. Developers will be able to access and contribute to the robot’s code as part of Hugging Face’s growing robotics initiative.
🛒 Nvidia plans to manufacture AI chips in the U.S., commissioning over a million square feet of production space in Arizona and Texas. With mass production starting soon, the company aims to build $500 billion worth of AI infrastructure domestically within four years.
That’s all for today’s edition. See you tomorrow as we track down and bring you everything that matters in the daily AI Shift!
If you loved this edition, let us know how much:
How good and useful was today's edition?
Forward it to your pal to give them a daily dose of the Shift!