[Class Report] Introduction to System Development — Week 27: Hands-On with Embedding Generative AI — Turning Design into Code
■ Teacher’s Introduction: “The Joy and Responsibility of Making Designs Work”
Prof. Tanaka: “A design diagram is a promise of thought. Today’s goal is to verify that promise in code. Safety measures may seem tedious, but they are essential work for the people who will use your system.”
The class was conducted in a controlled school environment (no external public keys, logging enabled, API call quotas).
■ Today’s Flow (Shortened)
- Input sanitization
- Model call wrapper (timeout + exception handling)
- Output validation (format checks, prohibited word filter, fact-check flag)
- Cache & fallback logic
- Display in UI with automatic disclaimers
- Testing and peer review within groups
■ Example Implementation (Pseudocode Overview from Class)
Note: The code shown in class was for learning purposes only. Real deployments require stronger validation and auditing. API keys should be handled via environment variables or secret managers, never hard-coded.
import os
import time
from typing import Tuple

# Helper functions used below (mask_email_phone, contains_prohibited,
# matches_expected_format, needs_fact_flag, make_prompt, safe_model_call,
# local_fallback_response) were provided by the class library and are
# assumed to be defined elsewhere.

# Simple in-memory cache (for class demo only; not thread-safe)
CACHE = {}

def cache_get(key):
    entry = CACHE.get(key)
    if entry and time.time() - entry["ts"] < entry["ttl"]:
        return entry["value"]
    return None

def cache_set(key, value, ttl=300):
    CACHE[key] = {"value": value, "ts": time.time(), "ttl": ttl}

def sanitize_input(text: str) -> str:
    # 1) Mask personal info (example only)
    text = mask_email_phone(text)
    # 2) Block prohibited terms
    if contains_prohibited(text):
        raise ValueError("Input contains inappropriate terms.")
    # 3) Enforce length limit
    return text[:1000]

def call_model_api(prompt: str, timeout=5) -> str:
    api_key = os.getenv("SCHOOL_API_KEY")
    if not api_key:
        raise RuntimeError("API key not set.")
    # Pass the key through; never hard-code it
    return safe_model_call(prompt, api_key=api_key, timeout=timeout)

def validate_output(output: str) -> Tuple[str, str]:
    if not matches_expected_format(output):
        return ("retry", "Output format mismatch. Retrying...")
    if contains_prohibited(output):
        return ("fallback", "Inappropriate expression detected. Showing fallback.")
    if needs_fact_flag(output):
        # Return the output itself so the caller can append a disclaimer
        return ("confirm", output)
    return ("ok", output)

def get_ai_response(user_input: str) -> str:
    try:
        clean = sanitize_input(user_input)
    except ValueError as e:
        return f"Input error: {e}"

    key = f"ai:{clean}"
    cached = cache_get(key)
    if cached:
        return cached

    try:
        resp = call_model_api(make_prompt(clean), timeout=5)
    except Exception:
        return local_fallback_response()

    status, body = validate_output(resp)
    if status == "ok":
        cache_set(key, body, ttl=300)
        return body
    elif status == "retry":
        # One retry with a stricter formatting instruction
        try:
            resp2 = call_model_api(make_prompt(clean) + "\nPlease output in bullet points.", timeout=5)
            s2, b2 = validate_output(resp2)
            if s2 == "ok":
                cache_set(key, b2, ttl=300)
                return b2
        except Exception:
            pass
        return local_fallback_response()
    elif status == "fallback":
        return local_fallback_response()
    elif status == "confirm":
        return f"{body}\n※ Important facts included — please verify."
    # Defensive default so the function never returns None
    return local_fallback_response()
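The TTL cache used above follows a common pattern that can be tried in isolation. Here is a minimal, self-contained sketch (a standalone copy of the two helpers, with an artificially short TTL for demonstration):

```python
import time

CACHE = {}

def cache_get(key):
    # Return the cached value only while its TTL has not expired
    entry = CACHE.get(key)
    if entry and time.time() - entry["ts"] < entry["ttl"]:
        return entry["value"]
    return None

def cache_set(key, value, ttl=300):
    CACHE[key] = {"value": value, "ts": time.time(), "ttl": ttl}

cache_set("ai:hello", "Hello!", ttl=0.2)
print(cache_get("ai:hello"))   # fresh entry is returned
time.sleep(0.3)
print(cache_get("ai:hello"))   # expired entry yields None
```

Note that expired entries are never removed from the dictionary; a real cache would also evict stale entries to bound memory use.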
■ Students’ Takeaways
- “At first I thought AI would answer everything, but without a validation pipeline you can’t trust it.”
- “Caching makes it faster and reduces API calls, which helps with cost control.”
- “Fallback templates are smart — they prevent bad user experiences.”
- “Clear error messages make testing and QA easier.”
■ Teacher’s Observations & Advice
Prof. Tanaka emphasized:
- “Be strict with input checks. Unexpected data can be catastrophic.”
- “Automatic flags (‘needs confirmation’) help both users and operators stay safe.”
- “Keep logs, but mask personal data and retain only briefly. Encode operational rules into design.”
- “Write unit and integration tests so unexpected behavior is caught automatically.”
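The advice on masking personal data before logging can be sketched with simple regular expressions. The patterns below are illustrative only, not the class library's actual `mask_email_phone`; real deployments need far broader coverage:

```python
import re

# Illustrative patterns only; real systems need broader, audited coverage
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\b\d{2,4}-\d{2,4}-\d{3,4}\b")

def mask_for_log(text: str) -> str:
    # Replace emails and phone numbers with fixed placeholders
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

print(mask_for_log("Contact taro@example.com or 090-1234-5678"))
# → Contact [EMAIL] or [PHONE]
```

Masking at write time, rather than when logs are read, ensures personal data never reaches disk in the first place.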
■ Wrap-Up
“The joy of seeing your design come alive must be balanced with anticipating possible failures and building safeguards. These patterns you learned today are not just for generative AI — they are the foundation of safe system design.”
■ Homework (Practice + Reflection)
- Submit a flow diagram of your implementation (input → sanitize → API → validate → display).
- List 3 sanitization rules used, and explain the rationale for each (30–80 words).
- Identify 2 possible failure cases in your implementation, and show how to handle them (1–2 lines each).
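As a starting point for the failure-case item, each failure mode can be pinned down with a unit test. The sketch below uses a simplified stand-in for `validate_output`, since the class helper functions are not reproduced here:

```python
def validate_output(output: str, prohibited=("badword",)) -> tuple:
    # Simplified stand-in: "format check" = non-empty,
    # "prohibited word filter" = substring match
    if not output.strip():
        return ("retry", "Output format mismatch. Retrying...")
    if any(word in output for word in prohibited):
        return ("fallback", "Inappropriate expression detected.")
    return ("ok", output)

# Each failure case maps to an expected status
assert validate_output("")[0] == "retry"
assert validate_output("contains badword here")[0] == "fallback"
assert validate_output("normal answer")[0] == "ok"
print("all validate_output cases pass")
```

Writing one assertion per failure case makes the handling strategy explicit and keeps regressions visible.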
■ Next Week’s Preview: Operational Testing & Log Analysis
Next week: run operational tests under simulated load, then analyze logs for the frequency of wrong responses and common input patterns, beginning the design-improvement cycle.
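As a taste of the log analysis ahead, counting response statuses and frequent inputs can be sketched with `collections.Counter`. The tab-separated log format below is hypothetical, chosen only for illustration:

```python
from collections import Counter

# Hypothetical log lines: "timestamp<TAB>status<TAB>user_input"
log_lines = [
    "2024-10-01T10:00\tok\thow do I reset my password",
    "2024-10-01T10:05\tfallback\thow do I reset my password",
    "2024-10-01T10:07\tretry\twhat are the opening hours",
]

status_counts = Counter(line.split("\t")[1] for line in log_lines)
input_counts = Counter(line.split("\t")[2] for line in log_lines)

print(status_counts.most_common())   # frequency of each validation status
print(input_counts.most_common(1))   # most common input pattern
```

Frequent inputs that end in "fallback" or "retry" are natural candidates for prompt or validation-rule improvements.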
In Week 27, students translated design into code, embedding safety checks in practice. They began to internalize the balance of convenience and responsibility in real-world implementations.