Introduction
Google’s just dropped a heap of updates to Gemini this month, and honestly, they’re worth paying attention to. Whether you’re building something with the API, trying to shift more products online, or just wanting your AI to understand you a bit better, there’s something here for you. January’s been a busy month for the Gemini team, and I reckon these changes are going to ripple through how people work and shop online.
Personal Intelligence: Gemini Finally Understands Your Life
Right, so imagine if your AI assistant actually knew about your Gmail, your photos, your YouTube history and could connect the dots without you having to explain everything from scratch. That’s what Personal Intelligence does. It launched earlier this month, and it’s a proper game changer.
Here’s the thing: instead of Gemini just drawing from the public internet like every other chatbot, it can now tap into your personal stuff (with your permission, of course). You’re no longer that “human router” jumping between five different apps trying to piece together information about your mate’s birthday plans or that invoice from three months ago.
Who benefits here:
- Content creators can ask Gemini to pull together their recent posts and suggest what to write about next, without manually scrolling through YouTube analytics
- Marketers can get Gemini to summarise email threads with clients and auto-generate campaign briefs from past conversations
- Project managers can ask it to surface relevant tasks from Gmail and Google Calendar, then create timelines automatically
- Researchers can ask Gemini to connect photos, emails, and search history to build a proper knowledge base on a topic they’re investigating
Google Trends Gets a Proper AI Makeover
Google Trends has been around for ages, right? But on January 18th, they integrated Gemini straight into the Trends Explorer page, and it’s the biggest update in the tool’s 20-year history. I know that sounds like marketing speak, but stick with me.
Instead of manually typing in search terms and squinting at graphs, you can now click a “Suggest Search Terms” button and describe what you’re after in plain English. Gemini does the heavy lifting and populates related terms automatically. The interface is cleaner too, with colours and icons for each term, making it easier to spot patterns.
What’s actually useful about this:
- Analysts can discover emerging trends without spending hours on manual research. Type “sustainable fashion in Gen Z” and Gemini suggests 8+ related terms to track
- Marketing teams can compare campaign performance across related keywords instantly, instead of building comparison charts manually
- Product managers can see which search queries are rising fastest in their category, helping them spot opportunities before competitors do
- Content strategists can identify content gaps by seeing which related searches are trending but underserved
The rollout’s happening gradually through January, starting with English-speaking countries, then spreading globally by early February.
Agentic Commerce: Letting AI Actually Buy Stuff For You
This one’s mental. On January 11th, Google announced that Gemini can now handle actual shopping tasks on your behalf. Not just “here are some product suggestions” but actual discovery, comparison, and purchase.
They’ve launched something called Business Agent too, which lets customers chat directly with brands through Google Search itself. So instead of heading to a retailer’s website, you can ask questions in Google Search and get instant answers in the brand’s voice. Shopify stores and other retailers started testing this on January 12th.
Where this gets practical:
- E-commerce teams need to update how they structure product data. It’s no longer just about keywords; it’s about answering common questions and listing compatible accessories and substitutes (there’s a rough sketch of what that might look like after this list)
- Shoppers can ask Gemini to find a winter coat that fits their budget and style preferences, and it handles the research and checkout without them opening another tab
- Customer support can now happen through search instead of forcing people to navigate websites or call centres
- Retailers tracking sales can see whether conversational commerce is actually moving the needle compared to traditional search ads
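To make that product data point a bit more concrete, here’s a rough sketch of the kind of structured record an e-commerce team might publish. The schema.org Product vocabulary is a real standard, but Google hasn’t said exactly what format Business Agent reads, so treat the extra fields (common questions, accessories, substitutes) and all the product details below as illustrative assumptions rather than a spec.

```python
# Rough sketch of a structured product record for conversational commerce.
# schema.org's Product/Offer types are real; the extra fields and the exact
# format an agent would consume are assumptions for illustration only.
import json

def build_product_record(name, price, currency, faqs, accessories, substitutes):
    """Assemble a schema.org-flavoured product record, plus the FAQ,
    accessory, and substitute details the article says now matter."""
    return {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": name,
        "offers": {"@type": "Offer", "price": price, "priceCurrency": currency},
        # Non-standard, illustrative fields an agent could answer from directly.
        "commonQuestions": faqs,
        "compatibleAccessories": accessories,
        "substitutes": substitutes,
    }

record = build_product_record(
    name="Alpine Shell Winter Coat",
    price="189.00",
    currency="GBP",
    faqs=[{"q": "Is it waterproof?", "a": "Yes, taped seams and a 10,000 mm rating."}],
    accessories=["Alpine Shell Hood Liner"],
    substitutes=["Alpine Lite Rain Jacket"],
)
print(json.dumps(record, indent=2))
```

The exact fields matter less than the idea: your product data needs to answer the questions a shopper would otherwise ask a human.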
API Improvements for Developers
If you’re building with Gemini, there have been a few handy updates:
- Cloud Storage buckets and database URLs now work as data sources for the API, which means developers can feed Gemini data directly from their infrastructure without wrestling with file uploads (see the sketch below)
- Veo (Google’s video generation tool) now supports 4K output and better portrait video handling across all resolutions
- Model lifecycle features launched so you know which models are being phased out and when to migrate your code
These are solid quality-of-life updates for developers, not flashy features but the kind that save hours when you’re building production systems.
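To give a rough idea of what that first bullet could look like in practice, here’s a minimal sketch that points Gemini at a file sitting in a Cloud Storage bucket. It assumes the google-genai Python SDK running against Vertex AI; the project id, bucket path, and model name are placeholders, so check the release notes for the exact data-source options on offer.

```python
# Minimal sketch: reference data where it already lives (a gs:// URI) instead
# of uploading files. Assumes the google-genai SDK on Vertex AI; project id,
# bucket path, and model name below are placeholders.
from google import genai
from google.genai import types

client = genai.Client(vertexai=True, project="your-project-id", location="us-central1")

response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents=[
        # Point Gemini at the bucket object rather than uploading a copy.
        types.Part.from_uri(file_uri="gs://your-bucket/reports/q4-summary.pdf",
                            mime_type="application/pdf"),
        "Summarise the key findings in three bullet points.",
    ],
)
print(response.text)
```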
Thinking Mode Gets Faster
The Gemini 2.0 Flash Thinking model is still in preview but it’s getting regular updates. Basically, it shows you how the AI actually reasons through a problem instead of just spitting out answers. For researchers, analysts, and anyone who needs to understand the “why” behind AI responses, this is useful stuff.
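If you want to poke at it from the API, here’s a quick sketch, again assuming the google-genai Python SDK. The model id is my guess at the preview name and may well change, and how the intermediate reasoning is surfaced depends on the preview build, so inspect the response parts rather than relying on a single text field.

```python
# Quick sketch of calling the Flash Thinking preview model. The model id is an
# assumption (preview names change); some builds return intermediate reasoning
# as separate parts before the final answer, so print each part.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemini-2.0-flash-thinking-exp",
    contents="A train leaves at 14:10 and arrives at 16:45. How long is the trip?",
)

for part in response.candidates[0].content.parts:
    print(part.text)
```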
Call to Action
If any of this sounds interesting, the best way to get your head around it is to jump into Gemini and have a play. Head over to gemini.google.com and spend a few minutes with Personal Intelligence or Trends Explorer. If you’re building on the API, check out the full release notes because there’s probably something in there that’ll make your work easier. And if you’ve got feedback or ideas on what you’d like to see next, Google’s always keen to hear it. These updates are still rolling out, so you might not see everything straight away, but it’s worth checking back regularly as January wraps up.