Google is making its AI research assistant, Deep Research, available to everyone for free with some limitations. The company announced several Gemini updates on March 13, 2025, including improved reasoning capabilities, personalization features, and expanded app connectivity.
Deep Research, previously exclusive to Gemini Advanced subscribers, is now open to free users “a few times a month.” This AI tool searches and synthesizes information from across the web, creating detailed reports on complex topics in just minutes.
“Gemini users can try Deep Research a few times a month at no cost, and Gemini Advanced users get expanded access to Deep Research,” Google stated in its announcement. The feature is now available in more than 45 languages and can be accessed by selecting Deep Research in the new prompt bar or model drop-down.
Better Research Through Enhanced Models
Google has upgraded Deep Research to run on its Gemini 2.0 Flash Thinking Experimental model, an improvement from the previous 1.5 Pro model. This reasoning model breaks down prompts into steps, showing its thought process while browsing the web.
“This enhances Gemini’s capabilities across all research stages — from planning and searching to reasoning, analyzing and reporting — creating higher-quality, multi-page reports that are more detailed and insightful,” Google explained.
For Gemini Advanced subscribers paying $20 monthly, the model now supports a 1 million token context window, allowing users to analyze larger amounts of information. The upgraded model also includes file upload capabilities and faster processing speeds.
New Personalization Features
Google introduced a new experimental feature called “Personalization,” also powered by Gemini 2.0 Flash Thinking Experimental. With user permission, Gemini can now connect with Google apps and services, starting with Search, to deliver more tailored responses.
For example, when asking for restaurant recommendations, Gemini can reference your recent food-related searches. Similarly, when seeking travel advice, it can respond based on destinations you’ve previously searched.
To enable this feature, users need to select “Personalization (experimental)” from the model drop-down menu. Google plans to expand this feature to include Google Photos, YouTube, and other apps in the “coming months.”
More Apps and Custom AI Assistants
Google is connecting more apps to Gemini, including Calendar, Notes, Tasks, and soon Photos. This allows users to make complex requests involving multiple apps, such as: “Look up an easy cookie recipe on YouTube, add the ingredients to my shopping list and find me grocery stores that are still open nearby.”
The upcoming Google Photos integration will let users ask Gemini to create travel itineraries based on vacation photos or check when their driver’s license expires by looking at the document image.
Additionally, Google’s Gems feature, which lets users create customized AI assistants for specific topics, is now available to all users at no cost. Users can start with premade Gems or create their own custom versions for tasks like translation, meal planning, or math coaching.
These updates are available now at gemini.google.com.
Frequently Asked Questions
What is Gemini Deep Research, and how does it work?
Gemini Deep Research is an AI research assistant that searches and synthesizes information from across the web. It works by creating a research plan, browsing multiple websites, and compiling the information into a detailed report. The process mimics how humans conduct research by making new searches based on initial findings until it gathers enough information to answer your question.
How often can free users use Deep Research?
Free Gemini users can try Deep Research “a few times a month” at no cost. Google hasn’t specified the exact number of uses. Gemini Advanced subscribers (who pay $20 monthly) get expanded access to the feature.
What is Gemini 2.0 Flash Thinking Experimental?
Gemini 2.0 Flash Thinking Experimental is a reasoning model that breaks down prompts into a series of steps to strengthen its reasoning capabilities. It shows its thought process while browsing the web, making it more transparent. The model offers better efficiency and speed compared to previous versions and now supports features like file upload.
How does the new personalization feature work?
The personalization feature connects Gemini with your Google apps and services, starting with Search history. It uses this information to provide more tailored responses. For example, it can reference your recent food-related searches when recommending restaurants or consider destinations you’ve previously searched when giving travel advice. You can enable this by selecting “Personalization (experimental)” from the model drop-down menu.
What are Gems, and how do I create one?
Gems are custom versions of Gemini tailored for specific tasks. You can use premade Gems (like Brainstormer, Career guide, Coding partner) or create your own custom Gems for tasks like translation, meal planning, or math coaching. While you can access Gems on mobile, creating new ones is only available on the web through the “Gems manager.” There, you can write instructions, upload reference files, and assign a name to your custom Gem.
Which Google apps can Gemini connect to?
Gemini can now connect with several Google apps including Calendar, Notes, Tasks, Gmail, Drive, Messages, and YouTube. Google Photos integration is coming in the next few weeks. This connectivity allows you to make complex requests involving multiple apps, such as looking up recipes on YouTube, adding ingredients to a shopping list, and finding nearby grocery stores that are open, all in a single prompt.