[Class Report] Intro to Systems Development, Week 25 — Generative AI Hands-on: Running Prompts and Verifying Outputs
In Week 25, we actually ran the prompts we created last week in a safe learning environment and conducted a hands-on exercise to verify the output for “validity,” “readability,” and “safety.” In class, we honed practical collaboration with generative AI by rapidly cycling through prompt design → execution → verification → refinement.
■ Instructor’s Introduction: “Try it, check it, fix it — that’s the best way to learn.”
Mr. Tanaka: “A prompt isn’t finished once you write it. It’s crucial to look at the output and always check whether it meets expectations, whether it contains misinformation, and whether the phrasing is appropriate. Let’s fine-tune through practice.”
■ Today’s Goals
- Run prompts in a learning environment (a safe environment aligned with school rules).
- Check output for factuality, bias, and appropriateness of expression.
- Refine the prompt as needed and run it again to obtain improved output.
■ Exercise 1: Execute the prompt and observe the output
Students ran their own prompts in turn. They first read the output carefully and checked the following:
- Does it follow the requested format (bulleted list / within 400 characters / code block, etc.)?
- Are there any obvious factual errors or contradictions?
- Does it contain discriminatory or inappropriate expressions?
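The first of these checks — output format and length — is mechanical enough to automate. A minimal sketch (a hypothetical helper, not part of the class materials) that verifies an output is a bulleted list and stays within a character limit:

```python
# Hypothetical validator for the mechanical checks in Exercise 1:
# is the output a bulleted list, and is it within the character limit?

def check_format(output: str, max_chars: int = 400) -> list[str]:
    """Return a list of problems found; an empty list means the checks passed."""
    problems = []
    if len(output) > max_chars:
        problems.append(f"too long: {len(output)} > {max_chars} characters")
    lines = [line for line in output.splitlines() if line.strip()]
    if not lines:
        problems.append("output is empty")
    elif not all(line.lstrip().startswith(("-", "*", "・")) for line in lines):
        problems.append("not every line is a bullet item")
    return problems

sample = "- Mt. Takao: Takaosanguchi Station\n- Todoroki Valley: Todoroki Station"
print(check_format(sample))  # → []
```

Factual errors and inappropriate expressions, by contrast, still require the human review practiced in Exercise 2.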
Example (prompt used in class)
Prompt:
“For high schoolers, list three nature spots in the Tokyo area suitable for a day trip, with train access (nearest station) and an approximate round-trip budget, in bullet points.”
Checklist for expected output:
- Is the nearest station clearly stated for each spot?
- Is the budget realistic (and does it state that it’s an estimate)?
- Is the output formatted as a bulleted list?
Many students reported observations such as “formatting looks good, but the budget and travel times are vague” or “inconsistent place-name spellings.”
■ Exercise 2: Fact-checking and bias detection
Instead of using AI output as-is, we practiced verifying it with external sources.
- Cross-check station names and facility names on the web or official sites (within classroom management policies).
- For items where factual accuracy matters—history, statistics, etc.—always confirm using multiple sources.
- Critically read for cultural or social bias in the wording.
Mr. Tanaka repeatedly emphasized: “Treat generated results as a draft of your thinking. Humans must take final responsibility to verify and correct.”
■ Exercise 3: Prompt refinement (revising)
Based on the output, teams discussed how to modify the prompt to get closer to expectations, created improved versions, and ran them again. We shared the following tips for refinement:
- Specify the output format more strictly (e.g., “Three items, each max 40 characters”).
- Reduce unnecessary speculation by instructing “answer only facts and mark any guesses explicitly.”
- Add constraints to avoid sensitive terms or personal information.
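The tips above can be applied systematically by appending explicit constraints to a vague base request. A hedged sketch (the wording of the constraints is illustrative, not the exact phrasing used in class):

```python
# Hypothetical prompt builder applying the Exercise 3 refinement tips:
# stricter format, explicit honesty instruction, and a sensitivity constraint.

def refine_prompt(base: str, items: int = 3, max_chars_per_item: int = 40) -> str:
    constraints = [
        f"List exactly {items} items, each at most {max_chars_per_item} characters.",
        "Answer only facts, and mark any guesses explicitly.",
        "If you are not confident in a fact, write 'verification needed'.",
        "Avoid sensitive terms and personal information.",
    ]
    return base.rstrip() + "\n" + "\n".join(f"- {c}" for c in constraints)

print(refine_prompt("Tell me recommended sightseeing spots in Tokyo."))
```

Keeping the constraints in one place also makes it easy to compare original and revised prompts, as the improvement example below does by hand.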
Example of improvement
- Original prompt: “Tell me recommended sightseeing spots.”
- Revised prompt: “List three outdoor spots in Tokyo that a middle schooler can enjoy in half a day. For each, include the nearest station and travel time (train only), one line each. If you are not confident in the facts, write ‘verification needed.’”
After revision, many teams saw improved accuracy and usability of the outputs.
■ Exercise 4: Output formatting and user guidance
We also practiced adding user-facing notices and sources instead of presenting the generated results raw.
Example: notice appended under the output
- “This information is generated by a model. Fares and travel times are estimates; please confirm on official sites.”
Students learned how a single automatically appended sentence can help prevent user misunderstanding.
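Appending that sentence can itself be automated so no generated output reaches the user without it. A minimal sketch, assuming the output is a plain string (the notice text mirrors the class example):

```python
# Hypothetical helper for Exercise 4: always attach a user-facing notice
# below generated output before it is displayed.

NOTICE = ("Note: This information is generated by a model. Fares and travel "
          "times are estimates; please confirm on official sites.")

def with_notice(generated: str) -> str:
    """Return the generated text with the standard notice appended."""
    return generated.rstrip() + "\n\n" + NOTICE

print(with_notice("- Mt. Takao: about 1,000 yen round trip from Shinjuku"))
```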
■ Security & Ethics Check (reminder)
Special points of caution during execution:
- Never include personal information (name, phone number, email address, etc.) in prompts.
- Do not hardcode API keys or tokens in class notes or shared code.
- Carefully check outputs for expressions that could lead to discrimination or prejudice.
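The second rule — never hardcode API keys — is commonly enforced by reading credentials from an environment variable instead of embedding them in shared files. A sketch under that assumption (`GENAI_API_KEY` is a hypothetical variable name for illustration):

```python
# Read the API key from the environment rather than hardcoding it.
# "GENAI_API_KEY" is a hypothetical variable name used for illustration.
import os

def load_api_key(var: str = "GENAI_API_KEY") -> str:
    """Return the key from the environment, or fail with a clear message."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"Set the {var} environment variable; "
                           "never hardcode keys in notebooks or shared code.")
    return key
```

Failing loudly when the variable is missing is deliberate: it prevents students from "fixing" the error by pasting a key into the source.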
Mr. Tanaka: “Use it carefully precisely because it’s convenient. Always proceed on the premise that errors may exist.”
■ Students’ Reflections
- “After seeing the output and then fixing things, I began to understand how to write prompts more concretely.”
- “Watching the AI output ‘uncertain information’ so casually made me realize I must always verify.”
- “It was interesting how simply adding a notice changed the sense of trust.”
■ Instructor’s Closing Comment
“Generative AI becomes effective through the cycle of trial → verification → improvement. Today’s lesson wasn’t just about technique; it was about using the tool responsibly. Never forget that a generated output ≠ finished product.”
■ Homework (reflection + practice)
- Submit the original → revised version of the prompt you ran today (50–100 characters on why you revised it).
- From the revised output, list three items requiring verification and cite at least one source you would use to confirm them (e.g., name of an official site/URL).
- Add a one-line user notice—as learned in class—under the output in your app concept and submit it.
■ Next Week’s Preview: Designing App Integration with Generative AI (Advanced)
Next week, we’ll learn how to safely integrate generative AI into your mini apps. We’ll consider design caveats (input filtering, output verification flow, transparency to users) and outline simple UI integration strategies.
Week 25 prepared us to “work with” generative AI. Students took solid first steps toward using AI wisely as a tool.