It’s been a while since my last update—and for good reason. I recently returned from a family trip to London, England, and have been focusing on recovery after my second Total Hip Arthroplasty (THA). But yesterday, April 19, 2025, something reignited my curiosity and motivation in a big way.
While browsing YouTube, I came across a video by Tiago Forte introducing Notebook LM, Google’s latest AI-powered tool. Here’s the link. The feature that truly caught my attention? Multimodal input support.
I’ve had disappointing experiences trying to feed multiple types of documents and formats to AI models like ChatGPT or Gemini. Context gets lost. Formats get ignored. Answers? Almost never accurate.
But Notebook LM? This one felt promising—and I had the perfect test cases ready.
✈️ Testing It Out: Planning Our Japan Trip
We’re heading to Japan next month as a family, and I had already gathered a wealth of info across 11 different source types.
I asked Notebook LM to help me plan everything, from a visit to the historic Kodokan (the mecca of Judo) to exploring electronics markets and deciding what gear to bring back home.
The result? Incredibly well-structured, clear, and surprisingly aware of context across sources.
🧠 Putting It to a Harder Test: A Multi-Tab Google Doc
Next, I uploaded a multi-tab Google Doc, several pages long, filled with notes across unrelated sections and tabs. I asked questions that required cross-referencing info from different tabs, something ChatGPT and Gemini 2.0 have failed at miserably before.
Notebook LM answered accurately every single time.
It didn’t just summarize. It synthesized. It understood what I meant and where to look, even in scattered, non-linear info.
🎧 Audio Overviews & Real-Time Q&A
One unexpected feature I found impressive was the audio overview mode, where Notebook LM generates a podcast with two AI speakers who go over your sources and discuss them together. Even more unique: you can interrupt the speakers with live questions, and they will adjust and answer you mid-podcast.
While I’m primarily a visual learner, the potential for this is enormous, especially for accessibility, mobility, and multitasking. (It could be an unbelievable learning tool for auditory learners or the visually impaired.)
🚀 Final Thoughts
Notebook LM has opened up entirely new possibilities for how I manage knowledge. I’m already thinking about how I can apply it to:
- Course notes
- Archiving AI experiments
- Travel itineraries
- Research projects
- Structured learning
It’s still early days, but I have to say: Google has truly broken new ground here.
I’ll continue testing and documenting new use cases as I go, especially as I integrate it with my ongoing AI studies.