Today I completed Module 2 of the Prompt Engineering for ChatGPT course by Vanderbilt University on Coursera.
While the material wasn’t entirely new to me, it was still a valuable experience. The structure helped me revisit essential ideas, clarify terminology, and reframe familiar practices in a more systematic way.
This module covered:
- What prompts are and how they work
- Prompt patterns, especially the “Persona Pattern”
- Introducing new information to the LLM
- Prompt length limitations and reusability
- Root prompts and few-shot prompting
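To make two of these ideas concrete, here is a minimal sketch of my own (a toy example, not taken from the course materials) that combines the Persona Pattern with few-shot prompting. It only assembles the prompt text; actually sending it to an LLM is left out.

```python
def build_prompt(persona, examples, query):
    """Assemble a persona instruction plus few-shot examples into one prompt."""
    # Persona Pattern: tell the model what role to play.
    lines = [f"Act as {persona}. Answer every question in that role."]
    # Few-shot prompting: show the model the question/answer format by example.
    for question, answer in examples:
        lines.append(f"Q: {question}")
        lines.append(f"A: {answer}")
    # Finish with the real query, leaving the final answer for the model.
    lines.append(f"Q: {query}")
    lines.append("A:")
    return "\n".join(lines)

prompt = build_prompt(
    persona="a patient math tutor",
    examples=[("What is 2 + 2?", "4"), ("What is 3 * 3?", "9")],
    query="What is 5 * 6?",
)
print(prompt)
```

The few-shot examples double as an informal output spec: the model sees the `Q:`/`A:` pattern twice before being asked to continue it.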
All exercises were graded by AI, and I received 100% scores, which I take as a sign that I’m on the right track. It’s early in the course, so I expect more depth in the upcoming modules.
Even though I didn’t encounter anything dramatically new, this has been a useful chance to consolidate what I already know and prepare for more advanced topics ahead.
Still aiming to finish the certification before our upcoming family trip to Japan 🇯🇵 in May — staying on track so far.