NotebookLM arrives inside Gemini starting today

NotebookLM is now inside Gemini, marking a change in how Google handles personal research in its AI tools. Starting today, users can access existing notebooks directly from the app instead of switching between different products.
This builds on last year’s update that let notebooks be added as sources. With this change, saved materials sit alongside conversations and prompts, usable in real time rather than stored away.
Past conversations can be pulled into notebooks and reused, strengthening the link between research and chat. Gemini starts to feel like a system that keeps context across tasks.
The rollout begins on the web for Google AI Ultra, Pro, and Plus subscribers, with mobile support and wider access expected soon. Google has not given a timeline for free users.
How Gemini uses notebooks
The biggest change is how Gemini handles saved material. Instead of serving as static references, notebooks now act as live context during conversations. Once selected, their content automatically grounds responses, reducing the need to re-enter the same material.
That builds on what NotebookLM already does well: grounding answers in user-provided content. That capability now lives in the same interface, keeping responses tied to documents or research sets without extra steps.
Google is also expanding how sources behave. Existing conversations can be folded into notebooks, turning past interactions into reusable inputs. Research and discussion now reinforce each other over time.
Why this changes the AI workflow
This move positions Gemini as a full workspace rather than a simple chatbot. Combining NotebookLM’s grounding with Gemini’s core chat reduces the friction between storing information and using it.
It also reflects the broader shift toward memory and continuity in AI tools. Instead of starting from scratch each time, the system can draw from a growing pool of material, changing how long-running projects are managed.

There is a trade-off, however. Response quality still depends on how well a notebook is organized, so messy or outdated sources may limit how useful the feature is.
What to watch next
The rollout is still limited, focused on paid subscribers on the web. Mobile support and wider availability are expected, but the timing is not yet clear.
If Google deepens this integration, Gemini could become a central hub for research-heavy workflows. That could increase pressure on competitors to match persistent context and document-aware responses.
For now, this update shows a clear direction. Gemini is evolving into a tool for ongoing work, not just quick responses, with its next phase tied to wider releases and feature parity.