Prompt to Project in seconds

Plus, šŸŽ¤ Meet EVI 3, The Most Expressive Voice AI Ever Built, FLUX.1 Kontext Brings Fast, Accurate AI Image Editing to the Real World, and more!


Hello there! Ready to dive into another mind-boggling, value-packed Shift?

Some Black Mirror shit :)

Today we have:

🧠 Perplexity Labs Turns Prompts into Projects

šŸŽ¤ Meet EVI 3, The Most Expressive Voice AI Ever Built

šŸ–¼ļø FLUX.1 Kontext Brings Fast, Accurate AI Image Editing to the Real World

šŸ† Tools and Shifts You Cannot Miss:

🧠 Perplexity Labs Turns Prompts into Projects

Perplexity has officially launched Labs, a new feature that helps users generate full-scale deliverables like spreadsheets, dashboards, and even mini web apps from a single prompt. Available to Pro subscribers on Web, iOS, and Android (Mac and Windows soon), Labs is designed for tasks that require deeper reasoning, longer runtimes, and multiple toolchains. 

The Shift:

1. AI that writes, runs, and builds - Labs can write and execute code to transform data, apply formulas, and generate charts or spreadsheets. It doesn't just give you static content; it creates functional outputs like dashboards or formatted reports.

2. Auto-organized project assets - Every file Labs generates (CSVs, charts, documents, and code snippets) is stored in an "Assets" tab, letting you preview or download all results in one place without searching through threads or chats.

3. Mini apps inside the project - Labs can also deploy simple interactive web apps in an "App" tab within the same project, including dashboards, slideshows, data tools, or basic websites, letting users test or demo functionality without leaving the platform.

4. Clear workflow split vs Research mode - Use Labs when you want execution: something built, visualized, or structured. Use Research (formerly Deep Research) when you need a thorough answer fast.

Perplexity Labs represents the next evolution of AI productivity: from chat-based answers to real, interactive builds. While others promise summarization, this system delivers execution: functional dashboards, apps, and assets without switching tools. It's the clearest sign yet that AI isn't just helping you think; it's starting to work alongside you.


šŸŽ¤ Meet EVI 3, The Most Expressive Voice AI Ever Built

EVI 3 by Hume isn't just another voice model; it's a fully emotional, streaming speech-language model that understands you, thinks with you, and talks like a real human. From generating voices on the fly to matching your tone mid-conversation, it makes voice the most natural way to interact with AI.

This is the first step toward voice becoming your main interface with intelligence.

šŸ›  How to Use EVI 3

  1. Try it live: Head to Hume's website and test the live demo, or download the iOS app for full conversations.

  2. Speak, don't type: Just start talking; EVI 3 transcribes your voice in real time and responds as you speak with natural tone and pacing.

  3. Prompt personalities: Give EVI 3 instructions like "speak like a pirate" or "sound proud" and it instantly adapts its voice.

  4. Tap custom voices: Use one of 100K+ TTS voices from Hume's library, each with its own implied personality and expressive baseline.

  5. No lag, just flow: With sub-300ms latency, EVI 3 matches real conversational speed while syncing with search tools or reasoning engines mid-reply.
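If you would rather script the same flow, Hume also exposes EVI to developers. The sketch below assembles a session-configuration payload for a voice agent; every field name here (`system_prompt`, `voice`, `latency_budget_ms`) is an illustrative assumption, not Hume's documented schema, so check their API reference before wiring it up.

```python
import json


def build_session_config(persona_prompt: str,
                         voice_id: str = "default",
                         latency_budget_ms: int = 300) -> dict:
    """Assemble a session-settings payload for a voice agent.

    NOTE: these field names are illustrative assumptions, not
    Hume's documented EVI schema -- consult their API reference.
    """
    return {
        "type": "session_settings",
        "system_prompt": persona_prompt,          # e.g. "speak like a pirate"
        "voice": {"id": voice_id},                # one of the library voices
        "latency_budget_ms": latency_budget_ms,   # sub-300ms conversational target
    }


# Example: a persona prompt plus a hypothetical voice id from the library.
config = build_session_config("Speak like a proud pirate.", voice_id="pirate-01")
print(json.dumps(config, indent=2))
```

The point of the sketch is the shape of the interaction: personality is just a prompt, the voice is a library reference, and latency is a budget you negotiate with the server rather than something you tune locally.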

šŸ’” Capabilities That Redefine Voice AI

  • Emotion-rich responses: EVI 3 expresses 30+ styles, from sultry whispers to bored monotones, all in real time.

  • True empathy: It scores higher than GPT-4 on empathy, expressiveness, interruption handling, and naturalness.

  • Voice-to-voice understanding: EVI 3 doesn't just hear what you say; it picks up how you say it, accurately identifying the emotion in your tone even when the words are identical.

  • Zero-shot voice generation: No training, no uploads. Prompt a new voice and personality instantly.

  • "Thinking while speaking": EVI 3 can pull from external tools, APIs, or databases while you're mid-conversation, just like a real assistant would.

EVI 3 isn't just a better TTS engine. It's a speech-native intelligence that merges language, emotion, and personality into one stream. If AI is going to feel real rather than robotic, this is what it looks (and sounds) like. And now, for the first time, you can talk to it.

šŸ–¼ļø FLUX.1 Kontext Brings Fast, Accurate AI Image Editing to the Real World

Black Forest Labs has launched FLUX.1 Kontext, a multimodal AI model that blends image and text inputs for rapid, precise editing. It introduces two versions, [pro] for fast iteration and [max] for enhanced fidelity, plus a web-based Playground for testing. 

The Shift:

1. Text and Image, Together - Users can prompt FLUX.1 Kontext with both visuals and words to generate or edit content in context. It supports multi-step edits while preserving character or scene identity. No need to start from scratch every time.

2. 8x Faster and More Accurate - Kontext delivers state-of-the-art results for local edits, visual style matching, and typography, maintaining fidelity across iterations and minimizing degradation even after multiple changes. Inference speeds beat leading models by up to 8x.

3. Two Models for Different Needs - Kontext [pro] is optimized for fast iterative workflows, while [max] boosts prompt-following and visual detail. A research-ready [dev] model is also available for beta testing. Each suits a different production context.

4. Playground for Live Testing - The FLUX Playground lets teams test models directly via a clean interface, no code required, making it ideal for validating creative concepts or demoing results to stakeholders. 
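For teams planning to move beyond the Playground, an edit with a model like Kontext boils down to pairing a text instruction with an input image and a model choice. The sketch below builds such a request body; the field names (`model`, `prompt`, `input_image`) are assumptions for illustration, not Black Forest Labs' documented API, so verify against their reference first.

```python
import base64
import json


def build_edit_request(prompt: str, image_bytes: bytes,
                       model: str = "flux-kontext-pro") -> dict:
    """Pair a text instruction with an input image for an edit call.

    NOTE: field names are illustrative assumptions, not BFL's
    documented API -- consult their reference before use.
    """
    return {
        "model": model,  # hypothetically "flux-kontext-pro" (fast) or "flux-kontext-max" (fidelity)
        "prompt": prompt,
        # Images are commonly shipped as base64 in JSON request bodies.
        "input_image": base64.b64encode(image_bytes).decode("ascii"),
    }


# Example: an identity-preserving edit instruction with placeholder bytes
# standing in for a real PNG.
req = build_edit_request("Make the sky sunset orange, keep the character identical.",
                         b"\x89PNG...")
print(json.dumps(req)[:80])
```

Swapping `model` between the pro and max variants is the whole workflow split described above: iterate fast on pro, then re-run the same prompt and image on max for final fidelity.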

FLUX.1 Kontext offers creative teams a serious upgrade over traditional AI tools, especially for projects requiring speed, accuracy, and visual consistency. As multimodal models evolve, real-time iteration with preserved identity could become standard for commercial media.


šŸ†AI Tools for the Shift

šŸš€ Exanimo.ai – Get your brand recommended by ChatGPT, Claude, and other AI models. Make your product AI-visible and AI-relevant.

šŸŽ BestBuyClues – AI-curated gift ideas based on personality, interests, or occasion. Never get stuck picking the perfect present.

šŸ” aurumtau – Smart search engine for both humans and AI agents. Find better answers, faster.

šŸ“„ WriteDoc.ai – Create beautiful, polished documents with AI. From reports to guides, your writing gets an instant upgrade.

šŸ“¢ Buzzwize – One-click content that sounds just like you. AI studies your post history to generate brand-authentic social content.


šŸš€ Quick Shifts

āœ‰ļøGmail now auto-generates AI summaries for complex threads on mobile Workspace accounts, placing them above emails, no prompt needed, with updates as replies come in; rollout may take up to two weeks.

šŸŒŽ By 2025, AI systems could account for 49% of global data center electricity use, potentially surpassing Bitcoin, as researchers call for greater transparency and energy reporting across the rapidly expanding sector.

🌊 DeepSeek released a distilled version of its R1 AI model, DeepSeek-R1-0528-Qwen3-8B, which runs on a single GPU and outperforms similar-sized models like Gemini 2.5 Flash on math benchmarks.

šŸ“ˆ Meta AI now reaches 1B monthly users across its apps, doubling since late 2024. Meta plans to deepen personalization, voice, and entertainment while exploring future monetization via paid recommendations or subscriptions.


That's all for today's edition. See you tomorrow as we track down everything that matters in the daily AI Shift!

If you loved this edition, let us know how much:

How good and useful was today's edition?


Forward it to your pal to give them a daily dose of the Shift.
