Saturday, May 17, 2025

🎓 Certified—Just in Time: ADA Course Complete Before Japan

🕐 Saturday, May 17, 2025 – 13:58

This afternoon, I downloaded my certificates of completion for both parts of Vanderbilt University's ChatGPT Advanced Data Analysis course on Coursera. I had only planned to wrap up the AI Prompt Engineering track before our upcoming family trip to Japan, but in the end I managed to finish both parts of the ADA course as well, just in time.

It’s a satisfying way to close this chapter, and a fitting moment to pause. I’ll likely step away from writing for a couple of weeks while I’m abroad, but this milestone feels worth marking before I go.


Modules 4 & 5: Reinforcing What Experience Had Already Taught

Module 4 introduced a concise model: Extract – Transform – Analyze – Create. It’s a mental shorthand for how ADA workflows are structured, and it helped me reflect on how I’ve been intuitively operating for some time. The content on human vs. AI planning also reinforced something I’ve come to respect—knowing when to rely on the model and when to step in myself.
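
To make the four-stage shorthand concrete, here is a minimal Python sketch of how one of those passes tends to look inside ADA for me; the file name and column names are placeholders, not a real dataset:

```python
import pandas as pd

# Extract: load the uploaded spreadsheet (hypothetical file name)
df = pd.read_excel("sales_raw.xlsx")

# Transform: coerce dates and drop rows that are obviously unusable
df["date"] = pd.to_datetime(df["date"], errors="coerce")
df = df.dropna(subset=["date", "amount"])

# Analyze: a simple monthly total
monthly = df.groupby(df["date"].dt.to_period("M"))["amount"].sum()

# Create: write an artifact that can be downloaded from the chat
monthly.to_csv("monthly_summary.csv")
print(monthly.tail())
```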

Module 5 was the most personally resonant. It tackled:

  • Fixing and recovering from errors

  • Ensuring consistency and reliability

  • Planning complex outputs with outlines

  • Techniques for handling larger documents
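
That last bullet maps directly onto a habit I had already developed: split a long document into pieces that fit comfortably in context, process each piece, then merge the partial results. A minimal sketch of the idea, with an arbitrary chunk size and a placeholder file name:

```python
# Minimal chunking sketch: split a long text file, process each piece,
# then stitch the partial results together. Sizes and names are placeholders.
CHUNK_CHARS = 8000

def read_chunks(path, size=CHUNK_CHARS):
    with open(path, encoding="utf-8") as fh:
        text = fh.read()
    return [text[i:i + size] for i in range(0, len(text), size)]

def process_chunk(chunk):
    # Stand-in for the per-chunk step; inside ADA this is usually a prompt
    # applied to the chunk rather than plain Python.
    return chunk[:200]

chunks = read_chunks("large_report.txt")
partials = [process_chunk(c) for c in chunks]
combined = "\n\n".join(partials)
print(f"{len(chunks)} chunks -> {len(combined)} characters of merged output")
```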

What stood out is that these were exactly the issues I’d been battling recently—especially when ChatGPT’s memory and custom instructions interfered with ADA’s behavior. It felt uncanny how the course echoed the problems I described in this earlier post, where I had to debug the tool itself just to keep moving.

At the time, I was frustrated and cursing nonstop. But now, with some hindsight...


“Every Obstacle Is for the Better”

There’s a saying in my country:

“Every obstacle is for the better.”

It fits perfectly here. What felt like a setback—being blocked by bugs, broken memory handling, and missing context—pushed me to find workarounds. Those workarounds, it turns out, were often precisely the techniques this course now teaches.

So no, I didn’t walk away with a mountain of new knowledge. But I did walk away with something more powerful:

  • A mental reorganization of what I’ve already learned

  • A sharper understanding of where I stand

  • Validation that my instinctive solutions weren’t just hacks—they were correct


What’s Next

For now, I’m packing up and shifting focus to Japan and family. The blog will go quiet for a couple of weeks, but I’ll return recharged and ready to dig into the next topic—whatever that turns out to be.

Until then, I’m satisfied. These last modules brought a bit of closure, a bit of clarity, and a sense that I'm no longer just learning—I’m consolidating.

Wednesday, May 14, 2025

🧠 Modules 2 & 3 Complete: ADA Applications, Perspectives, and Problem Fit

I just completed Modules 2 and 3 of Vanderbilt University's ChatGPT Advanced Data Analysis course on Coursera—both with full marks.

These modules focused on:

  • Real-world ADA use cases: working with small documents, structured data, and media

  • Automation with .zip files: using compressed files to scale repetitive tasks

  • Turning conversations into tools: reframing interactions into executable workflows

  • Evaluating problem fit: understanding what types of tasks ADA handles best

While I can’t say I encountered anything radically new or unique, that’s not a complaint. Having worked closely with ChatGPT and ADA for nearly two years, many of these techniques were already familiar. What’s helpful—and genuinely interesting—is seeing alternative angles on problems I’ve already tackled. Sometimes a change in framing opens the door to new efficiencies.

One takeaway that’s been growing clearer for me is this:

The most valuable knowledge isn’t about what ADA can do, but about understanding how LLMs work—and what they can’t do.

Knowing the limits of language models is what allows meaningful, efficient, and realistic problem design. If you understand how these systems generalize, interpret, and infer, then you stop trying to force-fit them into roles they’re not suited for—and start applying them where they shine.

Among the few new things I did learn:

  • ADA can accept and work with .zip files, enabling batch workflows. That’s a game changer I simply hadn’t run into before (a small batching sketch follows this list).

  • The image manipulation capabilities within ADA are more advanced than I expected—definitely worth exploring more.
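
To put the .zip idea to work, the preparation happens locally: bundle a folder of files into one archive, upload it once, and let ADA loop over the contents. A rough sketch, assuming a folder of CSVs; all paths here are placeholders:

```python
import zipfile
from pathlib import Path

# Bundle every CSV in a local folder into one archive for a single upload.
src = Path("reports")
with zipfile.ZipFile("reports_batch.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    for csv_path in sorted(src.glob("*.csv")):
        zf.write(csv_path, arcname=csv_path.name)

# Inside the ADA sandbox, the same standard library unpacks and iterates the batch.
with zipfile.ZipFile("reports_batch.zip") as zf:
    names = zf.namelist()
print(f"Archived {len(names)} files: {names[:3]}")
```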

So while these modules weren’t revelatory, they were solid. I’m continuing with a sense of curiosity, and looking forward to seeing whether Modules 4 and 5 dive into more complex or unexpected use cases.

Sunday, May 11, 2025

🧠 I Just Finished Module 01: ADA Introductions and a Two-Year Milestone

I just finished Module 01 of ChatGPT Advanced Data Analysis (formerly Code Interpreter), part of the Coursera course offered by Vanderbilt University.

This opening module introduced:

  • The evolution from Code Interpreter to Advanced Data Analysis

  • Building tools with ADA, including data exploration and basic visualization (a typical first-pass sketch follows this list)

  • Acting as a coding assistant or intern through ChatGPT prompts

  • A surprisingly useful feature I hadn’t used before: ADA can directly process .zip files
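
The data-exploration piece is the one I reach for most often: a quick shape, dtypes, and describe pass, plus one throwaway plot, before deciding what to ask next. Something like this, with the file name as a stand-in:

```python
import pandas as pd
import matplotlib.pyplot as plt

# First-pass exploration of an uploaded table (file name is a stand-in).
df = pd.read_csv("uploaded_data.csv")

print(df.shape)
print(df.dtypes)
print(df.describe(include="all").T)

# Basic visualization: distribution of the first numeric column, if any.
numeric_cols = df.select_dtypes("number").columns
if len(numeric_cols):
    df[numeric_cols[0]].hist(bins=30)
    plt.title(f"Distribution of {numeric_cols[0]}")
    plt.savefig("first_look.png")
```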

I completed all the assignments with full marks. While I didn’t encounter many new concepts, I found this module valuable for a different reason—it validated knowledge I’ve gained through two years of intensive daily interaction with ChatGPT.

In fact, I realized that my ChatGPT Plus subscription turns two this month—I started on May 19, 2023. Since then, I’ve used ChatGPT several times daily, experimenting with data, writing, automation, and more. Naturally, much of what was introduced in Module 01 felt familiar.

But that familiarity doesn’t make the course redundant. If anything, I wish I’d had access to this kind of guided structure from the beginning. It’s a reassuring milestone to see my self-taught insights mirrored in formal instruction.

I’m looking forward to the next modules, where I expect the material to go deeper and offer fresh angles, even for a seasoned user like me.

Thursday, May 8, 2025

🔧 Cracked: The Hidden Custom Instructions That Crippled ChatGPT's File Handling

This post is a technical and personal milestone. It marks the end of a months-long struggle with a crippling and completely silent failure inside ChatGPT: its inability to read even simple files like .xls, .pdf, or .txt — a problem that nearly broke my workflow and my confidence. What I discovered may be silently affecting many other users without their knowledge.

Background: Ready for the Next Step

After completing the "Prompt Engineering" course on Coursera (documented here), I was eager to begin the next: "ChatGPT Advanced Data Analysis". But the moment I watched the introductory video, I realized I couldn't go further. Not unless I finally solved a persistent issue that had haunted my entire experience with GPT.

The Symptom: GPT Couldn’t Read My Files

From day one, ChatGPT refused to correctly parse even the most basic .xls, .pdf, or .txt files. Where other users — including my wife using the free version — had no problem uploading and analyzing structured documents, I was stuck. I tried every workaround:

  • Converting files to .csv and stripping whitespace (the pre-cleaning step sketched after this list)

  • Breaking documents into raw text and pasting them directly into the prompt

  • Manually reformatting data just to get a glimpse of usable output
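
The first of those workarounds turned into a ritual: before every upload, convert the spreadsheet to plain .csv and scrub stray whitespace so the file had the best possible chance of being parsed. Roughly like this, with placeholder file names:

```python
import pandas as pd

# Pre-cleaning ritual: spreadsheet -> plain CSV with whitespace stripped.
# File names are placeholders; reading legacy .xls requires the xlrd package.
df = pd.read_excel("problem_file.xls")

# Normalize column names and trim whitespace in every text column.
df.columns = [str(c).strip() for c in df.columns]
for col in df.select_dtypes("object"):
    df[col] = df[col].astype(str).str.strip()

df.to_csv("problem_file_clean.csv", index=False)
```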

Even then, results were inconsistent and limited. I reached out to OpenAI forums, read documentation, searched Reddit, wrote bug reports, and scoured the community. I found no answers. Nobody had my problem — or if they did, they didn’t know it.

Breaking Point: Tuesday’s Epiphany

On Tuesday, I decided I would solve this once and for all. I suspected my Custom Instructions and Memory settings might be the culprit. I finally asked ChatGPT directly:

Is it possible my Custom Instructions are interfering with file parsing?

This triggered the revelation.

The Root Cause: My Own Instructions Were Sabotaging Me

With the model’s help, I discovered that:

  • One of my memory items instructed: "Only show steps if asked". This suppressed parsing and previews.

  • Another told GPT "Do not use Python or external tools", which essentially blocked ADA.

  • A custom instruction asked for step-by-step answers delivered "two steps at a time", which made the model hesitant to fully process large files or show complex outputs.

None of these were ever flagged as problematic.

The Fix: Full Reset, Total Success

I deleted all memory entries.
I rewrote my Custom Instructions from scratch to allow full use of Python, ADA, and internal tools.
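
The new wording is nothing sophisticated. Paraphrased rather than quoted verbatim, it boils down to: "When I upload a file, you may freely use Python, Advanced Data Analysis, and any built-in tools to open, inspect, and process it, and you may show intermediate steps."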

Then I uploaded the largest .xlsx file I’ve ever fed ChatGPT.

It parsed it.
It processed it.
It ran Python logic on it.
It used LLM inference on it.

Flawlessly.

I was in tears. For the first time, ChatGPT performed as advertised. A problem that had crippled my work for months was solved in an hour — once I finally learned what to look for.

A Systemic Problem: No Warnings, No Flags

What’s unacceptable is this:

  • ChatGPT allowed these conflicting instructions.

  • It never warned me they might interfere with functionality.

  • The behavior failed silently, with no indication of why.

A particularly ironic case: my instruction to show only "two steps at a time", written to make long explanations manageable, turns out to be a likely cause of the failure. Yet it is an entirely reasonable preference. Many users hate scrolling through 15 steps, especially when step 2 contains an error; breaking explanations into chunks is logical and avoids generating output nobody will read.

Final Thoughts: This Needs to Be Fixed

I will write a full bug report to OpenAI. But for now, this post is here for every other user who might be unknowingly hamstrung by their own Custom Instructions.

If your GPT isn’t working as well as someone else’s — especially for file analysis — do not assume it’s a hardware issue or model limitation. It might be a quiet conflict buried inside your settings.

Check your memory. Check your instructions. Break the code.

Saturday, May 3, 2025

🏁 Finished! AI Prompt Engineering Certification & Final Project Reflections

Today I officially completed Module 6, wrapping up the final requirements for the Prompt Engineering for ChatGPT course by Vanderbilt University on Coursera. All content—videos, readings, and assessments—has been marked complete, and I submitted the final graded assignment with a 100% score.

For the final project, I submitted a prompt-based application that builds on many patterns covered in the course. The result was something I had already started developing independently: a custom ChatGPT instance titled 🧙‍♂️ Dungeon Master Eternal — a text-based RPG engine that simulates persistent fantasy adventures.

The instance supports solo or group play, saves character progress, and includes structured commands for stats, inventory, mounts, abilities, and more. It weaves complex narratives, tracks XP and quests, and even supports image generation with DALL·E for visual immersion.

I’ve already spent several enjoyable hours testing it in action. The gameplay has been engaging and smooth, and I'm currently advancing a character named Lutherion, a Paladin now at Level 2, after completing several quests. The experience of playing the game I designed—especially with features like dynamic XP tracking, divine abilities, and character-driven exploration—has been unexpectedly rewarding.

While much of the course felt introductory, it gave me space to re-evaluate existing practices and sharpen how I design my prompts structurally. I consider it a solid entry-level program for anyone serious about applied prompt engineering.


🔜 What’s Next?
I’m considering taking the next course by Dr. Jules White, titled “ChatGPT Advanced Data Analysis”, to further build on this foundation—most likely starting soon.

Friday, May 2, 2025

📘 Progress Update: Modules 3, 4 & 5 Completed in AI Prompt Engineering Course

Over the past few days, I completed Modules 3, 4, and 5 of the “Prompt Engineering for ChatGPT” course offered by Vanderbilt University on Coursera. All related readings, videos, and graded assignments have been finished—each with a score of 100%.

The content covered various prompt design patterns, including few-shot examples, chain-of-thought prompting, and structured approaches like persona or template-based prompts. While the course is well-organized and clearly explained by Dr. Jules White, most of the material remains introductory.

The few-shot example section stood out slightly, though I believe its practical adoption is limited. Personally, I rely more on custom GPT instances that iterate and refine prompts interactively—something not deeply explored in this course.
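
For readers who haven’t met the pattern, few-shot prompting simply means showing the model two or three worked examples and letting it continue in the same format. A toy illustration (the reviews are invented):

```
Classify the sentiment of each review as positive or negative.

Review: "The battery lasts all day and the screen is gorgeous."
Sentiment: positive

Review: "It stopped working after two weeks and support never replied."
Sentiment: negative

Review: "Setup took five minutes and everything just worked."
Sentiment:
```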

Overall, the course continues to serve as a structured recap of basic prompt engineering techniques. I intend to complete the final module soon, maintaining my original plan to finish the course before our family trip to Japan later this month.

🧠 Transfer of Consciousness: Moving Long-Running AI Projects Between Chats

When you work with conversational AI over time, you quickly discover an odd limitation: Your project stays in the old chat. The capabiliti...