
[Class Report] System Development (Year 3) — Week 46 ~An Introduction to Designing Generative AI Integrations: How to “Control Convenience Safely”~

In Week 46, we finally began an intro to designing systems that integrate generative AI.
Building on last week’s foundations—“multi-API integration and asynchronous design”—
this week focused on understanding generative AI’s unique traits and risks, and how to control them.

The theme was:

“Generative AI isn’t a ‘smart component’—it’s something whose behavior must be managed.”


■ Teacher’s introduction: “Generative AI is more unpredictable than APIs”

Mr. Tanaka: “Generative AI is useful, but:

  • You won’t get the same answer every time
  • It can say incorrect things in a very convincing way
  • Depending on how you use it, it can become dangerous

Today, we’ll think less about ‘how to call it’ and more about ‘how to manage it.’”

To explain how it differs from APIs, he emphasized:

  • Typical API: fixed input → fixed output format
  • Generative AI: text input → probabilistic, varied outputs

■ Today’s goals

  1. Understand generative AI’s characteristics (strengths and weaknesses)
  2. Understand why you shouldn’t call generative AI directly from the UI
  3. Be able to design a basic architecture for generative AI integration
  4. Clearly define “ways you must not use it”

■ Exercise 1: Learn the “common problems” generative AI tends to cause

Using real examples, the teacher introduced typical failure patterns.

Common problems

  • Asserting something false with confidence
  • Filling in extra details that were never requested
  • Producing inappropriate or vague expressions
  • Output length/format varies every time
  • Unexpected inputs can lead to unexpected outputs

Student A: “Because it’s more ‘human-like’ than an API, it’s dangerous if you trust it too much.”


■ Exercise 2: Clarify the use cases for generative AI

Next, we did a workshop to clarify what we want to use generative AI for.

Acceptable uses (examples)

  • Summarizing text
  • Rephrasing templates
  • Brainstorming draft ideas
  • Helping explain user input

Uses that easily become problematic (use with caution)

  • Decisions with only one correct answer
  • Definitive statements about money, law, medicine, etc.
  • Answers that users are likely to believe as-is

Student B: “So the default is using it as ‘support,’ not for ‘decisions.’”


■ Exercise 3: Basic architecture for integrating generative AI

Leveraging what we learned in Years 2 and 3, we organized the basic structure for integrating AI:

UI
 ↓
Service (decision-making & control)
 ↓
AI Client
 ↓
Generative AI API

Key design points

  • Don’t call the AI directly from the UI
  • In the Service layer:
    • Validate inputs
    • Format/shape prompts
    • Validate outputs
    • Decide fallback behavior
  • AI is not “the one who answers,” but “the one who generates raw material”

Student C: “It looks like normal API integration, but with way more checks!”
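The flow above can be sketched as a single Service-layer function. This is a minimal illustration, not a real implementation: the `ai_client` interface, the limits, and the fallback message are all hypothetical assumptions made for the example.

```python
# Sketch of the Service layer described above (all names are hypothetical).
# The UI never talks to the AI client directly; the service validates input,
# shapes the prompt, validates the output, and falls back when needed.

MAX_INPUT_CHARS = 2000
FALLBACK_MESSAGE = "A summary could not be generated. Please try again later."

def summarize(user_text: str, ai_client) -> str:
    # 1. Validate the input before spending an AI call on it
    if not user_text.strip() or len(user_text) > MAX_INPUT_CHARS:
        return FALLBACK_MESSAGE

    # 2. Shape the prompt: the AI receives controlled instructions, not raw UI text
    prompt = (
        "Summarize the following text in 100 characters or fewer. "
        "Do not add information that is not in the text.\n\n" + user_text
    )

    # 3. Call the AI client; treat any failure as a normal, expected case
    try:
        raw = ai_client.generate(prompt)
    except Exception:
        return FALLBACK_MESSAGE

    # 4. Validate the output: the AI produces raw material, not the final answer
    if not raw or len(raw) > 120:
        return FALLBACK_MESSAGE
    return raw.strip()
```

The point of the structure is that every path out of this function is something the UI can safely display, whether the AI succeeded or not.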


■ Exercise 4: Prompts are “design documents”

With generative AI, the prompt (instruction text) heavily influences results.

The teacher explained it like this:

Prompt = the “request document” you write when you assign work to a person

Elements of a good prompt

  • Specify a role
  • Specify output format
  • State what must not be done
  • Control length and tone

Example (conceptual)

You are an assistant AI for a library system.
Summarize the following text in 100 characters or fewer.
Do not add any information that is not factual.

Student D: “If you ask vaguely, you get vague answers.”
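One way to treat the prompt as a design document is to build it from named parts (role, task, prohibitions) rather than ad-hoc strings. The function below is an illustrative sketch; the helper name and parameters are assumptions, not part of any real API.

```python
# A minimal sketch of "prompt = design document": the role, the task,
# and the constraints are explicit, reviewable values in code.

def build_prompt(role: str, task: str, constraints: list[str], user_text: str) -> str:
    lines = [f"You are {role}.", task]
    # Each constraint becomes an explicit "must not" line in the prompt
    for c in constraints:
        lines.append(f"- {c}")
    lines.append("")  # blank line separating instructions from the input text
    lines.append(user_text)
    return "\n".join(lines)

prompt = build_prompt(
    role="an assistant AI for a library system",
    task="Summarize the following text in 100 characters or fewer.",
    constraints=["Do not add any information that is not factual."],
    user_text="(text to summarize)",
)
```

Because the pieces are parameters, a vague prompt now shows up as an empty or missing argument, which is much easier to catch in review than a vague sentence buried in a string.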


■ Exercise 5: Designing for failure when generative AI goes wrong

Even more than with typical APIs, the critical part is designing with failure assumed from the start.

Failures you should expect

  • No response
  • Nonsensical text
  • Too long / too short
  • Inappropriate expressions

Basic policy

  • Always validate AI results
  • Don’t display outputs as-is
  • If it’s not acceptable:
    • Show a fixed message
    • Fall back to behavior that doesn’t rely on AI

Student E: “‘Even if AI fails, the app still works’ is the key.”
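The policy above can be encoded as a small validation gate between the AI output and the screen. The specific checks here (length limits, banned phrases, the fixed fallback message) are illustrative assumptions; a real system would tune them per feature.

```python
# One way to encode "always validate AI results" and "don't display outputs as-is".
# Every value below is an example assumption, not a recommendation.

BANNED_PHRASES = ["as an AI", "I cannot answer"]  # hypothetical examples

def is_acceptable(output, min_len: int = 1, max_len: int = 200) -> bool:
    """Return True only if the AI output passes every check."""
    if output is None:
        return False  # no response at all
    text = output.strip()
    if not (min_len <= len(text) <= max_len):
        return False  # too long or too short
    if any(p.lower() in text.lower() for p in BANNED_PHRASES):
        return False  # meta or inappropriate expressions
    return True

def display_text(ai_output) -> str:
    # Gate every output through validation; fall back to a fixed message.
    if is_acceptable(ai_output):
        return ai_output.strip()
    return "The result could not be displayed."
```

Even if the AI returns nothing, nonsense, or something too long, `display_text` always yields something safe to show, which is exactly the “even if AI fails, the app still works” property.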


■ What the whole class realized

  • Generative AI is not “all-purpose”
  • Without control, both UX and safety get worse
  • Service-layer design is the most important part
  • It’s also important to clearly tell users that AI is being used

■ The teacher’s closing summary

“Generative AI is not a ‘thinking machine.’
It’s a tool that probabilistically generates text.

That’s why humans must design:

  • where to use it
  • how much to trust it
  • what to do when it fails

What we learned today is essentially
the ‘baseline mindset for engineers in the age of generative AI.’”


■ Homework (for next week)

  1. Come up with one use-case idea that uses generative AI
  2. For that use case, write:
    • which part the AI handles
    • which part people/programs decide
  3. Propose a fallback plan for when the AI output fails

■ Next week preview: calling the generative AI for real (a safe hands-on exercise)

Next week we’ll try implementing actual calls to a generative AI API.
However, the theme is “safe integration,” including:

  • output validation
  • display control
  • fallback behavior

Week 46 was an important session for gaining the perspective of “thinking before using” generative AI.
The students are steadily building the ability to control AI through design rather than getting carried away by convenience.
