Summary
- Google adds Nano Banana image editing to Lens/Search — useful but feels like bloat in a search tool.
- NotebookLM uses Nano Banana to power Video Overviews, with new artistic styles, contextual images, plus a ‘Brief’ format.
- Nano Banana heads to Google Photos soon — image editing fits better there than in Search.
Many of Google’s AI features are genuinely useful, but they often cross the line into bloatware, as is the case with the vast majority of the AI features Google has tried to add to Search. Now Search is getting an image editing feature, and we really didn’t need it here.
Google has announced that it’s integrating its new Nano Banana image model into Google Search, NotebookLM, and, in the near future, Google Photos. The Google Search integration comes as part of Google Lens, the tool used to search with images from your phone. Users on both Android and iOS will find a new “Create” mode in the Lens interface. By taking a new photo or selecting an existing one from your gallery, you’ll be able to use Nano Banana to instantly modify and transform your images. So not only can you search with an image, but you can also use that image as a canvas for AI-powered creative editing directly within the Google app.
I’m not sure this was necessary. Sure, AI image editing can be neat, but within Google Search it feels a little pointless. Google Lens is for searching, after all, and if you want to edit an image, you’re going to reach for other apps, not Google Search. It’s a bit like putting an ice cream stand in the middle of a hardware store. It’s fine, and some people will stop by for a scoop… But why?
The other Nano Banana integrations that just got announced make a tad more sense. For NotebookLM, the Nano Banana integration operates as a powerful “under the hood” enhancement for the Video Overviews feature. This update introduces several new functionalities aimed at making research and note-taking more visually intuitive. The model now provides six new artistic styles for generating overviews, including watercolor and anime, allowing for more stylized and engaging summaries. It will also power the generation of contextual images based on a user’s source materials. This feature aims to visually summarize complex information, making it easier for users to grasp key concepts from their documents, videos, or other sources. NotebookLM will also gain a new format called “Brief,” designed to provide quick, concise visual insights when a full, detailed overview is not required.
Nano Banana will also arrive in Google Photos in the coming weeks. Specific details of this integration haven’t been released yet, but if I had to guess, it would be an image editing feature much like the one just added to Google Lens. Here, though, it would make a lot more sense: Google Photos already has a suite of basic but useful editing features, so this is the natural next step if Google is going to add generative AI to the app.
All of these features should be rolling out soon if you don’t already have them.
Source: Google