Prompt to Project in seconds
Plus, 🎤 Meet EVI 3, The Most Expressive Voice AI Ever Built, FLUX.1 Kontext Brings Fast, Accurate AI Image Editing to the Real World, and more!

Hello there! Ready to dive into another uplifting, mind-boggling, and value-filled Shift?
Some Black Mirror shit :)
Today we have:
🧪 Perplexity Labs Turns Prompts into Projects
🎤 Meet EVI 3, The Most Expressive Voice AI Ever Built
🖼️ FLUX.1 Kontext Brings Fast, Accurate AI Image Editing to the Real World
🚀 Tools and Shifts You Cannot Miss:
🧪 Perplexity Labs Turns Prompts into Projects
Perplexity has officially launched Labs, a new feature that helps users generate full-scale deliverables like spreadsheets, dashboards, and even mini web apps from a single prompt. Available to Pro subscribers on Web, iOS, and Android (Mac and Windows soon), Labs is designed for tasks that require deeper reasoning, longer runtimes, and multiple toolchains.
The Shift:
1. AI that writes, runs, and builds - Labs can write and execute code to transform data, apply formulas, and generate charts or spreadsheets. It doesn't just give you static content; it creates functional outputs like dashboards or formatted reports.
2. Auto-organized project assets - Every file Labs generates (CSVs, charts, documents, and code snippets) is stored in an "Assets" tab, letting you preview or download all results in one place without searching through threads or chats.
3. Mini apps inside the project - Labs can also deploy simple interactive web apps in an "App" tab within the same project, including dashboards, slideshows, data tools, or basic websites, allowing users to test or demo functionality without leaving the platform.
4. Clear workflow split vs Research mode - Use Labs when you want execution, something built, visualized, or structured. Use Research (formerly Deep Research) when you need a deep but quick answer.
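To make the "AI that writes, runs, and builds" idea concrete, here is a hypothetical illustration of the kind of small script a Labs-style tool might generate behind the scenes to turn raw rows into a chart-ready summary. This is not Perplexity's actual generated code; the data and function are invented for the sketch.

```python
# Hypothetical sketch of code an AI "Labs"-style tool might generate
# to transform raw data into a dashboard-ready summary.
from collections import defaultdict

def summarize_revenue(rows):
    """Aggregate revenue per region, like a generated spreadsheet formula."""
    totals = defaultdict(float)
    for row in rows:
        totals[row["region"]] += row["revenue"]
    # Sort descending so a bar chart or dashboard reads naturally.
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

rows = [
    {"region": "EMEA", "revenue": 1200.0},
    {"region": "APAC", "revenue": 800.0},
    {"region": "EMEA", "revenue": 300.0},
]
print(summarize_revenue(rows))  # [('EMEA', 1500.0), ('APAC', 800.0)]
```

The point of Labs is that you never see or write this step yourself: the prompt goes in, and the executed result (chart, sheet, or dashboard) comes out.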
Perplexity Labs represents the next evolution of AI productivity: from chat-based answers to real, interactive builds. While others promise summarization, this system delivers execution: functional dashboards, apps, and assets without switching tools. It's the clearest sign yet that AI isn't just helping you think; it's starting to work alongside you.
🎤 Meet EVI 3, The Most Expressive Voice AI Ever Built
EVI 3 by Hume isn't just another voice model; it's a fully emotional, streaming speech-language model that understands you, thinks with you, and talks like a real human. From generating voices on the fly to matching your tone mid-conversation, it turns voice into the most natural way to interact with AI.
This is the first step toward voice becoming your main interface with intelligence.
🚀 How to Use EVI 3
Try it live: Head to Hume's website and test the live demo, or download their iOS app for full conversations.
Speak, don't type: Just start talking; EVI 3 transcribes your voice in real time and responds as you speak with natural tone and speed.
Prompt personalities: You can prompt EVI 3 with instructions like "speak like a pirate" or "sound proud", and it instantly adapts its voice.
Tap custom voices: Use one of 100K+ TTS voices from Hume's library; each comes with its own implied personality and expressive baseline.
No lag, just flow: With sub-300ms latency, EVI 3 matches real conversation speed, while syncing with search tools or reasoning engines mid-reply.
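The sub-300ms budget mentioned above is what makes voice feel conversational rather than walkie-talkie. A minimal sketch of checking a reply against that budget looks like this; `respond` is a local stand-in function invented for illustration, not Hume's API, which streams audio over a live connection.

```python
import time

LATENCY_BUDGET_S = 0.300  # the sub-300ms turn-taking target for real-time voice

def respond(transcript: str) -> str:
    """Stand-in for a streaming voice backend (illustrative only)."""
    return f"echo: {transcript}"

start = time.perf_counter()
reply = respond("hello there")
elapsed = time.perf_counter() - start

print(reply)
print(elapsed < LATENCY_BUDGET_S)  # a real-time system must keep this True
```

In a real streaming setup the model starts speaking before the full reply is computed, which is how tool calls and reasoning can happen mid-sentence without blowing the budget.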
💡 Capabilities That Redefine Voice AI
Emotion-rich responses: EVI 3 expresses 30+ styles from sultry whispers to bored monotones, all in real time.
True empathy: It scores higher than GPT-4 on empathy, expressiveness, interruption handling, and naturalness.
Voice-to-voice understanding: EVI 3 doesn't just hear what you say; it feels how you say it. It accurately identifies emotion in your tone even when the words are identical.
Zero-shot voice generation: No training, no uploads. Prompt a new voice and personality instantly.
"Thinking while speaking": EVI 3 can pull from external tools, APIs, or databases while you're mid-conversation, just like a real assistant would.
EVI 3 isn't just a better TTS. It's a speech-native intelligence that merges language, emotion, and personality into one stream. If AI is going to feel real, not robotic, this is what it looks (and sounds) like. And now, for the first time, you can talk to it.
🖼️ FLUX.1 Kontext Brings Fast, Accurate AI Image Editing to the Real World
Black Forest Labs has launched FLUX.1 Kontext, a multimodal AI model that blends image and text inputs for rapid, precise editing. It introduces two versions, [pro] for fast iteration and [max] for enhanced fidelity, plus a web-based Playground for testing.
The Shift:
1. Text and Image, Together - Users can prompt FLUX.1 Kontext with both visuals and words to generate or edit content in context. It supports multi-step edits while preserving character or scene identity. No need to start from scratch every time.
2. 8x Faster and More Accurate - Kontext delivers state-of-the-art results for local edits, visual style matching, and typography, maintaining fidelity across iterations and minimizing degradation even after multiple changes. Inference speeds beat leading models by up to 8x.
3. Two Models for Different Needs - Kontext [pro] is optimized for fast iterative workflows, while [max] boosts prompt-following and visual detail. A research-ready [dev] model is also available for beta testing. Each suits a different production context.
4. Playground for Live Testing - The FLUX Playground lets teams test models directly via a clean interface, no code required, making it ideal for validating creative concepts or demoing results to stakeholders.
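For teams moving past the Playground, a multimodal edit request like those described above generally pairs a base64-encoded input image with a text instruction. The payload shape below is hypothetical (field names and the model identifier are invented for illustration, not Black Forest Labs' documented API):

```python
import base64
import json

def build_edit_request(image_bytes: bytes, instruction: str,
                       model: str = "kontext-pro"):
    """Assemble a hypothetical image+text edit payload.

    Field names here are illustrative; check the provider's API
    reference for the real request schema.
    """
    return {
        "model": model,
        "prompt": instruction,  # the text half of the multimodal prompt
        "input_image": base64.b64encode(image_bytes).decode("ascii"),
    }

payload = build_edit_request(b"\x89PNG...", "replace the sky with sunset tones")
print(json.dumps(payload)[:60])
```

The key idea is that the image rides along with the prompt in one request, which is what lets the model edit in context instead of regenerating from scratch.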
FLUX.1 Kontext offers creative teams a serious upgrade over traditional AI tools, especially for projects requiring speed, accuracy, and visual consistency. As multimodal models evolve, real-time iteration with preserved identity could become standard for commercial media.
🚀 AI Tools for the Shift
🔍 Exanimo.ai - Get your brand recommended by ChatGPT, Claude, and other AI models. Make your product AI-visible and AI-relevant.
🎁 BestBuyClues - AI-curated gift ideas based on personality, interests, or occasion. Never get stuck picking the perfect present.
🔎 aurumtau - Smart search engine for both humans and AI agents. Find better answers, faster.
📝 WriteDoc.ai - Create beautiful, polished documents with AI. From reports to guides, your writing gets an instant upgrade.
📢 Buzzwize - One-click content that sounds just like you. AI studies your post history to generate brand-authentic social content.
⚡ Quick Shifts
✉️ Gmail now auto-generates AI summaries for complex threads on mobile Workspace accounts, placing them above emails, no prompt needed, with updates as replies come in; rollout may take up to two weeks.
🔋 By 2025, AI systems could account for 49% of global data center electricity use, potentially surpassing Bitcoin, as researchers call for greater transparency and energy reporting across the rapidly expanding sector.
🧠 DeepSeek released a distilled version of its R1 AI model, DeepSeek-R1-0528-Qwen3-8B, which runs on a single GPU and outperforms similar-sized models like Gemini 2.5 Flash on math benchmarks.
📈 Meta AI now reaches 1B monthly users across its apps, doubling since late 2024. Meta plans to deepen personalization, voice, and entertainment while exploring future monetization via paid recommendations or subscriptions.
That's all for today's edition. See you tomorrow as we track down and get you all that matters in the daily AI Shift!
If you loved this edition, let us know how much:
How good and useful was today's edition?
Forward it to your pal to give them a daily dose of the shift so they can 🚀