Intro
If you’ve been using Perplexity lately, you might’ve noticed things feel different. Not in a jarring way, but in the quiet way things shift when someone you trust suddenly gets better at understanding what you really need. This month, Perplexity shipped updates that turn it from a single smart tool into something more like having multiple expert brains working together at once. We’re talking about running three frontier models in parallel, a research mode that’s genuinely accurate enough for serious work, and a memory engine that actually remembers what matters. Whether you’re a developer, researcher, marketer, or just someone tired of AI that makes stuff up, there’s something here that’ll change how you work.
Model Council: Three Brains Are Better Than One
Starting this week, Max subscribers can run Model Council directly inside Perplexity Computer. Here’s what it does: you ask a question, and three frontier models (GPT-5.4, Claude Opus 4.6, and Gemini 3.1 Pro) work on it simultaneously. Perplexity then shows you where they agree, where they disagree, and what each one uniquely contributes.[3]
Why does this matter? When you’re making decisions that cost money or time, you want to know if the models are aligned or if they’re pulling in different directions. A developer weighing an architecture decision might see GPT-5.4 and Claude Opus suggesting different approaches, while Gemini points out a security angle nobody mentioned. Instead of copy-pasting the same question into three tabs like some kind of madman, you get the synthesis in one place.[3]
For marketers analysing campaign strategy, researchers vetting sources, and product managers stress-testing ideas, this feature surfaces blind spots instead of just handing you one polished answer that might be missing something crucial.
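To make the fan-out concrete, here’s a minimal sketch of the pattern: send the same question to several models in parallel, then group the answers to surface consensus and dissent. This is purely illustrative, not Perplexity’s implementation; `ask_model` is a stub standing in for real API calls, and the canned answers are invented for the example.

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for a real API call; in practice each model would be queried
# over its own provider's chat-completions endpoint.
def ask_model(model: str, question: str) -> str:
    canned = {
        "gpt": "Use a message queue",
        "claude": "Use a message queue",
        "gemini": "Use a message queue, but audit the auth layer",
    }
    return canned[model]

def council(question: str, models=("gpt", "claude", "gemini")) -> dict:
    # Fan the same question out to every model in parallel.
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        answers = dict(zip(models, pool.map(lambda m: ask_model(m, question), models)))
    # Group models by identical answers to separate agreement from dissent.
    groups: dict[str, list[str]] = {}
    for model, answer in answers.items():
        groups.setdefault(answer, []).append(model)
    consensus = max(groups, key=lambda a: len(groups[a]))
    dissent = {m: a for m, a in answers.items() if a != consensus}
    return {"answers": answers, "consensus": consensus, "dissent": dissent}

result = council("How should we scale order processing?")
```

A real synthesis step would compare answers semantically rather than by string equality, but the shape is the same: parallel fan-out, then a merge that keeps the disagreements visible instead of averaging them away.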
Deep Research Got Its Act Together
Deep Research isn’t new, but Perplexity rebuilt it to actually perform at the level of rigorous research tools.[4] It now runs on Opus 4.5 (Max and Pro users) and will automatically upgrade to newer reasoning models as they’re released. The upgrade pairs the best available models with Perplexity’s search engine and sandbox infrastructure.[4]
The benchmark results speak for themselves: Perplexity evaluated it on the Google DeepMind Deep Search QA and Scale AI Research Rubric benchmarks, and it outperforms other deep research tools on accuracy and reliability.[4] This is the mode you use when you need report-grade output, not quick answers. If you’re a researcher drafting a white paper, a policy analyst needing citations that hold up to scrutiny, or a consultant building a competitive analysis, Deep Research now gives you something you can actually publish.
The key difference: this isn’t just a longer answer. It’s built from the ground up for accuracy, not speed.
Memory That Actually Works
Perplexity’s memory engine got a serious upgrade in early February, and it rolled out fully by mid-March.[4] Here’s the practical bit: the new version recalls important information in 95% of cases, up from 77% before, while making half as many memories.[4]
Translation: instead of drowning in dozens of half-remembered details, Perplexity now creates fewer but sharper memories that actually matter to you. When you ask it to recall details from past conversations or recommend books based on your preferences, it’s not pulling random snippets; it’s pulling the right ones.[4]
For developers iterating on a product feature, analysts building on earlier research, or anyone doing ongoing work, this changes the game. Your AI assistant now works like a person who actually listened the first time instead of just taking notes.
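The “fewer but sharper memories” idea can be illustrated with a toy consolidation pass: rank candidate memories by salience, drop near-duplicates of stronger ones, and keep only the top few. Perplexity hasn’t published its memory engine’s internals, so everything below (the salience scores, the word-overlap duplicate check) is an invented stand-in for the real machinery.

```python
def consolidate(candidates: list[tuple[str, float]], keep: int = 3) -> list[str]:
    """Keep the `keep` highest-salience memories, dropping near-duplicates.

    `candidates` is a list of (text, salience) pairs. Two memories count as
    duplicates here if their word sets mostly overlap (Jaccard > 0.5) -- a
    crude proxy for the semantic similarity a real engine would use.
    """
    kept: list[tuple[str, float]] = []
    for text, score in sorted(candidates, key=lambda c: -c[1]):
        words = set(text.lower().split())
        if any(
            len(words & set(t.lower().split())) / len(words | set(t.lower().split())) > 0.5
            for t, _ in kept
        ):
            continue  # near-duplicate of an already-kept, stronger memory
        kept.append((text, score))
    return [t for t, _ in kept[:keep]]

notes = [
    ("prefers Python over Go", 0.9),
    ("user prefers Python over Go", 0.7),   # duplicate, lower salience
    ("works on a fintech product", 0.8),
    ("mentioned the weather once", 0.1),
]
memories = consolidate(notes, keep=2)
```

The point of the sketch: halving the number of stored memories isn’t about forgetting more, it’s about collapsing redundant notes into the single strongest version so recall has less noise to search through.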
GPT-5.4 and GPT-5.4 Thinking Now Available
OpenAI’s latest models landed in Perplexity on March 6.[3] All Pro and Max subscribers now have access to GPT-5.4 across web and mobile, with GPT-5.4 Thinking available for problems that need step-by-step reasoning.[3]
If you’re a developer debugging something gnarly, a marketer needing to work through campaign logic, or a researcher picking apart a complex problem, Thinking gives you the model’s working notes. It’s like watching someone solve a puzzle aloud instead of just getting handed the answer.
GPT-5.3-Codex: Your Dedicated Coding Subagent
Perplexity Computer now has GPT-5.3-Codex built in as a specialised coding agent.[3] When Computer hits a complex coding task, it can delegate to Codex, which can write production-quality code, debug using browser dev tools, and push straight to GitHub.[3]
This matters if you’re building something and you’re tired of explaining every line of code to a general-purpose model. Codex speaks your language. It can write thousands of lines, fix bugs properly, and integrate with your actual workflow instead of just sitting in a chat window.
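The delegation pattern itself is simple to sketch: an orchestrator classifies the incoming task and hands code work to the specialist, everything else to the generalist. Nothing here reflects Perplexity’s actual routing logic; the keyword classifier and both agent stubs are invented for illustration.

```python
def looks_like_code_task(task: str) -> bool:
    # Crude keyword classifier standing in for whatever real routing
    # signal an orchestrator would use (model-based, most likely).
    keywords = ("bug", "refactor", "implement", "stack trace", "compile")
    return any(k in task.lower() for k in keywords)

def general_agent(task: str) -> str:
    # Placeholder for the general-purpose model handling the task.
    return f"[general] {task}"

def coding_agent(task: str) -> str:
    # Placeholder for the dedicated coding subagent.
    return f"[codex] {task}"

def dispatch(task: str) -> str:
    # Delegate to the specialist when the task looks like code work.
    handler = coding_agent if looks_like_code_task(task) else general_agent
    return handler(task)
```

The design choice worth noticing is that the generalist stays in charge: the specialist is a tool it calls, so context and follow-up questions still flow through one conversation.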
Custom Skills: Automate Your Repeating Tasks
Custom Skills shipped on March 6 for Pro and Max users.[2] If you find yourself asking Perplexity the same thing over and over (analyse a weekly report, format customer feedback, extract data from a dashboard), you can now automate that workflow.
Instead of retyping the same prompt every Monday morning, you create a Skill once and trigger it whenever you need it. For marketing teams running weekly performance reviews, developers checking build logs, or analysts synthesising call transcripts, this saves actual time. It’s not flashy, but it’s the kind of feature that quietly makes your week less repetitive.
Voice Mode
Perplexity added Voice Mode on March 6, letting you speak instead of type.[2] If you’re commuting, cooking, or just thinking out loud, you can talk to Perplexity the way you’d talk to a person. Useful if you’re a developer rubber-ducking a problem, a researcher capturing quick notes, or anyone who thinks faster when they’re vocalising.
What This Actually Means for You
The through-line across all these updates is this: Perplexity stopped being a single clever tool and became an orchestration layer. It’s routing your work to the right model, running multiple approaches in parallel, remembering what actually matters, and automating the stuff you shouldn’t have to repeat.
If you’re using it for quick facts, you might not notice much. But if you’re doing real work (building something, researching something, analysing something), these updates change how much friction there is between you and an answer you can actually trust.
Next Steps
If you’re not subscribed yet, it’s worth trying Pro or Max to access Model Council, Custom Skills, and the upgraded Deep Research. If you’re already in, spend a few minutes setting up a Custom Skill for something you do weekly. Then try Model Council on a decision that actually matters to you and see what three brains catch that one would miss.
Head to Perplexity and poke around. Give them feedback on what works and what doesn’t. The features people use shape what gets built next, and right now there’s real momentum on making AI research and reasoning actually reliable.