HomeProjectsCertificationsBlogAboutContact

ChatGPT Prompt Engineering for Developers

ChatGPT Prompt Engineering for Developers

Most “prompt engineering” content online is written for the ChatGPT web UI — clever one-liners and copy-paste templates. That’s not what I needed. I’m calling LLMs from Laravel and Python in production: Apply-Kit generates resumes, OpenClaw orchestrates agents, TradeMatrix runs sentiment analysis on news feeds. Prompts there aren’t one-offs. They’re code paths that have to be reliable across thousands of inputs.

This course is taught by Isa Fulford from OpenAI, who built the ChatGPT Retrieval plugin and contributes to the OpenAI Cookbook. It covers prompting from the developer angle — API calls, structured output, and the use cases that show up in real products.

What the Course Covers

Eight working lessons with notebooks for each module:

  1. Guidelines — the two foundational principles: write clear and specific instructions, and give the model time to think (chain-of-thought, intermediate steps).
  2. Iterative Prompt Development — treating prompts like code. Start rough, evaluate output, refine. Same loop you’d use for any other engineering problem.
  3. Summarizing — extracting summaries with constraints (word count, focus area, target audience). Handling multiple documents in one pass.
  4. Inferring — sentiment analysis, topic extraction, entity recognition. Using the LLM as a zero-shot classifier instead of training a custom model.
  5. Transforming — translation, tone adjustment, format conversion (JSON ↔ HTML, spell-check, grammar correction).
  6. Expanding — generating longer text from short inputs. Personalized email replies, controlled creativity via the temperature parameter.
  7. Chatbot — building a multi-turn assistant using the chat completions API with system, user, and assistant message roles.

Skills Gained

Two principles that actually matter Be specific, and give the model room to reason before answering. Most “the model is dumb” complaints are violations of one of these. Specificity ≠ short prompts — longer, more structured prompts almost always perform better than terse ones.

Delimiters and structured input Wrapping user input in triple backticks, XML tags, or explicit markers prevents prompt injection and makes intent unambiguous. This is now muscle memory in every prompt I write.

Structured output (JSON) Asking the model to return JSON with a defined schema, then parsing it server-side. Removes the regex-and-hope approach to extracting data from completions. Critical for anything downstream of the LLM call.

Chain-of-thought as an architecture choice Forcing the model to list steps before giving a final answer measurably improves accuracy on reasoning tasks. The trade-off is tokens and latency, so it’s a deliberate decision per use case, not a default.

Iterative prompt development as a workflow Write a prompt, run it on a small set of inputs, find the failure mode, refine. The same TDD loop I’d use for Laravel feature tests. Eyeballing one output and shipping is how prompts break in production.

Four practical patterns Summarizing, inferring, transforming, expanding — these cover most of what LLMs get used for in real apps. Naming them as patterns makes it easier to pick the right approach without reinventing it each time.

System / user / assistant roles The chat completions API expects messages with explicit roles. The system message defines persona and constraints; user and assistant messages carry the conversation. This is the foundation for every chatbot pattern that came after.

 


Course link: ChatGPT Prompt Engineering for Developers — DeepLearning.AI