[Class Report] Systems Development (Year 3), Week 47
— Generative AI Implementation Hands-on: Calling Safely, Displaying Safely —
In Week 47, based on the design we learned last time, we did a hands-on implementation that actually calls a generative AI API.
The theme was clear.
Not just “it can call the API,” but “it’s safe to use.”
This was the first step toward controlled, responsible AI integration, one that does not stop at mere convenience.
■ Teacher’s introduction: “AI doesn’t end at output”
Mr. Tanaka: “Today’s goal is not to ‘show the text the AI returns as-is.’
It’s to ‘verify it, format it, and make it something you can take responsibility for.’”
The teacher wrote these three stages on the board:
- Prepare the input (prompt design)
- Verify the output (format/content checks)
- Control the display (UI considerations and fallback)
■ Today’s goals
- Call a generative AI API from code
- Assemble prompts in the Service layer
- Restrict the output format
- Implement fallback behavior for unexpected output
■ Practice ①: Safe prompt design
We started with controlling the input before sending it to the AI.
Improvements we implemented
- Limit user input length
- Simple screening for risky words
- Explicitly specify output format
- Add constraints such as “Do not add facts”
Example (concept)
You are an assistant AI for a library system.
Summarize the following text in 100 characters or fewer.
Do not add any facts.
Output must be a single sentence only.
Student A: “When the instructions are specific, the output wobbles less!”
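The input-side controls above can be sketched in code. This is a minimal illustration, not the class's actual implementation; the limit, the screen list, and the function name are assumptions.

```python
# Sketch of Practice ①: limit input length, screen risky words,
# and embed the input in a fixed prompt template.
MAX_INPUT_CHARS = 500                       # assumed input limit
RISKY_WORDS = {"password", "credit card"}   # illustrative screen list

PROMPT_TEMPLATE = (
    "You are an assistant AI for a library system.\n"
    "Summarize the following text in 100 characters or fewer.\n"
    "Do not add any facts.\n"
    "Output must be a single sentence only.\n\n"
    "Text: {text}"
)

def build_prompt(user_text: str) -> str:
    """Validate user input, then place it into the fixed template."""
    if len(user_text) > MAX_INPUT_CHARS:
        raise ValueError("Input is too long")
    lowered = user_text.lower()
    if any(word in lowered for word in RISKY_WORDS):
        raise ValueError("Input contains a disallowed word")
    return PROMPT_TEMPLATE.format(text=user_text)
```

Keeping the template fixed in code means the user can only influence the `{text}` slot, which is exactly why specific instructions make the output "wobble less."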
■ Practice ②: Implement AI calls in the Service layer
Following the structure we designed last time, we created an AI Client class and called it from the Service layer.
UI
↓
SummaryService
↓
AiClient
↓
Generative AI API
Sample code (simplified)

def summarize(self, text):
    # Build the prompt, call the AI, and validate before returning
    prompt = self.build_prompt(text)
    result = self.ai_client.generate(prompt)
    return self.validate_output(result)
Key points:
- The UI does not directly “know” about the AI
- AI calls always go through the Service layer
- Output validation happens in the Service layer
Student B: “You can treat AI as ‘just another external API.’”
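The layering can be sketched end to end. The AI client is stubbed so the structure runs without a real API; the class and method names here are assumptions, not the class's actual code.

```python
# Sketch of Practice ②: the UI only talks to the Service layer,
# and only AiClient knows about the external API.
class AiClient:
    """Wraps the external generative AI API (stubbed here)."""
    def generate(self, prompt: str) -> str:
        # A real implementation would make an HTTP call to the API.
        return "A short summary of the input text."

class SummaryService:
    """The only layer that knows about the AI; the UI calls this."""
    def __init__(self, ai_client: AiClient):
        self.ai_client = ai_client

    def build_prompt(self, text: str) -> str:
        return f"Summarize in one sentence. Do not add facts.\n\n{text}"

    def validate_output(self, result: str) -> str:
        # Output checks live in the Service layer, not the UI.
        if not result or len(result) > 120:
            raise ValueError("Invalid AI output")
        return result

    def summarize(self, text: str) -> str:
        prompt = self.build_prompt(text)
        result = self.ai_client.generate(prompt)
        return self.validate_output(result)

# UI-side usage: the UI never touches AiClient directly.
service = SummaryService(AiClient())
print(service.summarize("Some long article text"))
```

Because the UI depends only on `SummaryService`, the AI provider can be swapped or stubbed out for testing, which is what "just another external API" means in practice.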
■ Practice ③: Add output validation
This was today’s most important part.
Checks we performed
- Does it exceed the character limit?
- Is it non-empty?
- Does it contain forbidden words?
- Is it a single sentence?
if len(result) > 120:
    raise ValueError("Output is too long")
If validation fails, we:
- Replace it with a fixed message
- Display “Unable to generate a summary right now”
- Log the details for troubleshooting
Student C: “Designing to not trust AI output is what matters.”
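The four checks above can be combined into one validation function. This is a sketch; the length limit and forbidden-word list are illustrative assumptions, and the single-sentence check is a simple heuristic.

```python
# Sketch of Practice ③: validate AI output before it reaches the UI.
MAX_OUTPUT_CHARS = 120
FORBIDDEN_WORDS = {"guarantee", "always"}   # illustrative screen list

def validate_output(result: str) -> str:
    """Raise ValueError unless the output passes every check."""
    if not result.strip():
        raise ValueError("Output is empty")
    if len(result) > MAX_OUTPUT_CHARS:
        raise ValueError("Output is too long")
    lowered = result.lower()
    if any(word in lowered for word in FORBIDDEN_WORDS):
        raise ValueError("Output contains a forbidden word")
    # Heuristic single-sentence check: after trimming trailing periods,
    # any remaining period means there was more than one sentence.
    if result.strip().rstrip(".").count(".") > 0:
        raise ValueError("Output must be a single sentence")
    return result
```

If validation raises, the caller replaces the output with the fixed message and logs the details, so the raw AI text never reaches the screen.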
■ Practice ④: Fallback design
We implemented alternative behavior in case AI fails.
Examples
- Return a fixed template
- Show the beginning of the original text
- Disable the AI feature and revert to normal mode
try:
    summary = service.summarize(text)
except Exception:
    summary = "Unable to generate a summary right now."
Student D: “It feels safer when it’s designed to work fine even without it.”
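A fallback can also degrade gracefully instead of only showing a fixed message. This sketch tries the AI summary, logs the failure for troubleshooting, and falls back to the start of the original text; the function name and the 80-character cutoff are assumptions.

```python
# Sketch of Practice ④: tiered fallback with logging.
import logging

logger = logging.getLogger("summary")

def summarize_with_fallback(service, text: str) -> str:
    """Return the AI summary, or a safe excerpt of the original on failure."""
    try:
        return service.summarize(text)
    except Exception:
        # Log details for troubleshooting, but show the user something safe.
        logger.exception("AI summary failed; falling back to excerpt")
        return text[:80] + "..." if len(text) > 80 else text
```

Because the fallback path needs no AI at all, the feature "works fine even without it," which is the safety property Student D describes.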
■ Practice ⑤: UI display considerations
Finally, we added UI elements to clearly indicate that the content was AI-generated.
Examples:
- “Auto-generated by AI”
- Display it as small, secondary text
- Note the possibility of errors
Teacher: “Transparency builds trust.”
■ Class-wide insights
- Use generative AI as a “helpful assist feature”
- The Service layer carries significant responsibility
- The most important rule is: don’t display output as-is
- Logging design becomes even more important
■ Teacher’s closing remark
“Generative AI integration is less about ‘technical skill’ and more about ‘design skill.’
Using AI is easy.
But building a design that stays safe over time is the real capability.”
■ Homework (for next week)
- Create a safety design diagram (input → AI → validation → display) for today’s AI use case
- Add three more output-validation rules
- Write a user notification message for when the AI returns incorrect content
■ Next week preview: Evaluation and improvement cycles for generative AI
Next week, we’ll evaluate outputs using logs and run an improvement cycle (prompt tuning and stronger validation).
Week 47 was a practical first step toward integrating generative AI in a “controlled” way.
Students are steadily learning how to contain AI within design, rather than being pushed around by it.
