Would you wear ChatGPT on you?
Plus, 🎬 How to Make Pro Videos in One Chat with Medeo, LTX Adds Audio-to-Video Generation, and more!
Welcome back to The Shift. Let’s get straight to what matters in AI today…
Today we have:
🎧 OpenAI’s Wearable Rumors Meet Its 2026 Strategy
🎬 How to Make Pro Videos in One Chat with Medeo
🎵 LTX Adds Audio-to-Video Generation
🔨 Tools and Shifts you cannot miss
🎧 OpenAI’s Wearable Rumors Meet Its 2026 Strategy
OpenAI is hinting at a bigger shift than just “better chat.” On one hand, it’s building toward a new device with Jony Ive, possibly a screen-free companion. On the other, it’s laying out how it plans to scale a full intelligence business through compute, products, and monetization.

The Shift:
1. A Mystery Wearable Is Still Coming - OpenAI says its AI wearable with Jony Ive is on track for a 2026 reveal. Chris Lehane confirmed the timing but shared no details on what it does. That silence is driving speculation fast.
Leaks from China claim Foxconn may build AI-powered wireless earbuds, possibly code-named “Sweet Pea.” The rumored design sits behind the ear like a hearing aid and packs a powerful processor. None of this is confirmed.
2. OpenAI Wants Monetization to Feel Native - OpenAI says its business model scales with the value intelligence creates, not time spent. That includes subscriptions, team plans, usage-based APIs, and future commerce and ads near decision moments.
OpenAI ties adoption and revenue directly to compute capacity. Compute scaled from 0.2 GW in 2023 to ~1.9 GW in 2025, while revenue rose from $2B ARR to $20B+ in 2025. More compute equals more users and monetization.
If the wearable lands, it becomes the most direct way to turn ChatGPT into an always-on assistant, not just an app. The business plan is clear: expand compute, ship better products, and then monetize where it helps users act. 2026 is about making AI habitual.
TOGETHER WITH WISPR
Vibe code with your voice
Vibe code by voice. Wispr Flow lets you dictate prompts, PRDs, bug reproductions, and code review notes directly in Cursor, Warp, or your editor of choice. Speak your instructions and Flow auto-tags file names, preserves variable names and inline identifiers, and formats lists and steps for immediate pasting into GitHub, Jira, or Docs. That means less retyping, fewer copy-paste errors, and faster triage. For deeper context and examples, see our Vibe Coding article on wisprflow.ai. Try Wispr Flow for engineers.
🎬 How to Make Pro Videos in One Chat with Medeo
Medeo lets you create and edit full videos just by texting, so you can go from idea to export without touching complex timelines.

How it works
Chat your video into existence: Describe what you want, and Medeo generates the video for you, fast.
Edit with simple messages: Say things like “make this faster” or “change the music” and it updates instantly.
Lock character consistency: Keep the same character identity across new scenes, actions, and emotions without it drifting.
Export for every platform: One-click export in both 16:9 and 9:16 so it’s ready for YouTube and Reels.
Create once, publish everywhere. You can try it here.
🎵 LTX Adds Audio-to-Video Generation
Lightricks launched Audio-to-Video inside LTX, letting creators start with sound and generate visuals around it. This improves voice consistency, performance control, and timing. ElevenLabs is part of the launch, making speech generation easier inside the tool.

The Shift:
1. Audio-First Workflow - Upload an audio file, record inside LTX, or generate speech using built-in text-to-speech powered by ElevenLabs Scribe V2. Add an image and prompt or go prompt-only, then generate video.
2. Lip Sync, Motion, and Camera Timing - LTX syncs lip movement, motion, and camera pacing directly to the audio track. Actions and gestures follow the performance instead of guessing timing. This keeps dialogue consistent even when you regenerate visuals.
3. Beat Understanding and Multi-Character Scenes - LTX says it understands rhythm and beat, so timing aligns cleanly with music. It also supports multi-character scenes where dialogue drives reactions. LTX calls audio-first creation the “third paradigm” of AI video.
This flips the usual workflow by letting brands build around voiceovers, hooks, and music first. It makes repeatable characters and series content easier to maintain. Lock the audio early, then iterate visuals until the pacing feels right.
TOGETHER WITH FORWARD FUTURE AI
Facts. Without Hyperbole. In One Daily Tech Briefing
Get the AI & tech news that actually matters and stay ahead of updates with one clear, five-minute newsletter.
Forward Future is read by builders, operators, and leaders from NVIDIA, Microsoft, and Salesforce who want signal over noise and context over headlines.
And you get it all for free, every day.
🔨 AI Tools for the Shift
🎥 Askruit – Use AI to run first-round interviews automatically with structured video interviews that can save up to 75% of hiring time.
🧠Sorai Academy – Go beyond passive learning with an interactive AI coach that helps you build real manager judgment and master the Manager Mindset through practice.
🎬 TasteRay – Get personalized movie recommendations based on your emotions, mood, and lifestyle, so every pick feels weirdly accurate.
🎶 AIMusixer – Create original music for free with male or female vocals, with support for MP3 songs and MP4 videos for easy sharing.
🧩 Kolva – Tell the AI what you need, and it handles tasks, meeting transcripts, document search, and workflow learning, with no subscriptions and pay-only-when-you-use pricing.
🚀 Quick Shifts
🎬 Netflix is going full AI across discovery and ads. It’s using AI to improve subtitle localization, launching AI tools to match members with more relevant titles, and expanding ad tech that lets brands blend Netflix IP directly into campaigns.
📱 Samsung is bringing a new conversational Bixby to phones, combining on-device AI with Perplexity-powered web search. It’s now in One UI 8.5 beta, expected to launch with Galaxy S26 next month.
🤯 Anthropic CEO Dario Amodei publicly slammed Nvidia chip exports to China at Davos, calling it “crazy” and comparing it to selling nukes to North Korea, despite Nvidia being a key partner and investor in Anthropic.
🧒 OpenAI is adding “age prediction” to ChatGPT, using signals like stated age, account history, and usage patterns to spot minors and auto-apply stricter filters for sex and violence, with Persona selfie verification for adults misflagged.
🤝 Humans&, a 3-month-old “human-centric” AI startup, raised a massive $480M seed at a $4.48B valuation, backed by Nvidia, Bezos, and top VCs, aiming to build memory-rich, multi-agent collaboration software.
🧩 Prompt of the Day
How to Generate High-Click CTAs Using One Prompt - Turn generic buttons into 5 sharper CTA variations that drive more clicks in ads, emails, and landing pages.
Paste the prompt: Drop this into ChatGPT, then fill in your product and the action you want people to take right now.
Prompt to paste
Create 5 CTA variations for [Insert product]. Include a mix of direct action CTAs (Buy Now), low-friction CTAs (Learn More), and commitment-light CTAs (Subscribe). Keep them short, specific, and aligned with the product’s main benefit.
That’s all for today’s edition. See you tomorrow as we track down everything that matters in the daily AI Shift!
If you loved this edition, let us know how much:
Forward it to your pal to give them a daily dose of the shift so they can 👇