
[Class Report] System Development (Year 3), Week 49 — Safety, Ethics, and Responsible Design for Generative AI: Thinking About the “Hidden Side” of Convenience —

Photo by Canva Studio on Pexels.com

In Week 49, building on the generative AI features we have implemented and improved so far, we held a class focused on safety, ethics, information management, and accountability.

The theme was:

Designing responsibility in how AI is used is just as important as the technical skill of using AI.

This session deepened our perspective on what it means to be an engineer, because technology alone does not tell the whole story.


■ Teacher’s Introduction: “Working” and “Being Right” Are Different

Mr. Tanaka: “It’s one thing for AI to work conveniently, and another for it to be safe, fair, and appropriate.
Today we’ll identify where the risks are and think about how to address them through design.”

On the board, he wrote four items:

  • Misinformation (Hallucination)
  • Bias
  • Personal Information (Privacy)
  • Accountability

■ Today’s Goals

  1. Be able to explain generative AI risks in concrete terms
  2. Identify risks hidden in our own system
  3. Distinguish technical countermeasures from operational countermeasures
  4. Make “who is responsible” explicit in the design

■ Practice ①: Risk Identification Workshop

In groups, we listed possible problems that could occur with the AI features we are currently implementing.

Examples

  • Summaries differ from the facts
  • Discriminatory or inappropriate wording appears
  • Inputs contain personal information
  • Users misunderstand the AI output
  • The output is too definitive / overly assertive

Student A: “So the risk isn’t only ‘being wrong’—it’s also ‘misleading people.’”


■ Practice ②: Organizing Technical Countermeasures

Next, we considered countermeasures we can implement in code for each risk.

Examples of technical countermeasures

  • Input length limits
  • Banned-word filters
  • Fixing the output format (e.g., JSON)
  • Not storing AI output as-is
  • Restricting access to logs
  • Displaying an “AI-generated” label

Teacher: “If something can be prevented technically, it should be prevented technically.”
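To make the items above concrete, here is a minimal Python sketch of three of the listed countermeasures: an input length limit, a banned-word filter, and an “AI-generated” label. The names and values (`MAX_INPUT_CHARS`, the `BANNED_WORDS` entries) are illustrative placeholders, not anything specified in the class.

```python
# Sketch of technical countermeasures: input limits, word filter, labeling.
# Values and word list are placeholders for illustration only.

MAX_INPUT_CHARS = 2000
BANNED_WORDS = {"badword1", "badword2"}  # placeholder entries


def sanitize_input(text: str) -> str:
    """Trim whitespace and reject input that exceeds the length limit."""
    text = text.strip()
    if len(text) > MAX_INPUT_CHARS:
        raise ValueError("input exceeds the allowed length")
    return text


def violates_filter(text: str) -> bool:
    """Naive substring check; a real system would need smarter matching."""
    lowered = text.lower()
    return any(word in lowered for word in BANNED_WORDS)


def label_output(ai_text: str) -> str:
    """Prepend a visible label so users don't mistake AI output for fact."""
    return f"[AI-generated] {ai_text}"
```

Each function maps to one checklist-style safeguard: length limits block abuse at the door, the filter screens wording, and the label addresses user misunderstanding rather than the model itself.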


■ Practice ③: Organizing Operational Countermeasures

Some parts can’t be prevented by code alone.

Examples of operational countermeasures

  • Clearly stating terms of use
  • Restricting permitted use cases
  • Monitoring by administrators
  • Regular reviews of output logs
  • Setting up a bug report form

Student B: “So we need design outside the code too.”


■ Practice ④: Clarifying Accountability

The teacher posed an important question:

“If AI outputs incorrect information, whose responsibility is it?”

After discussion, the shared conclusion was:

  • AI does not take responsibility
  • Final responsibility lies with the system provider
  • That’s why the following matter:
    • How information is presented
    • Positioning AI as a support tool
    • Designing the system so users don’t over-trust it

Student C: “We can’t just blame ‘the AI.’”


■ Practice ⑤: Creating a Safety Design Checklist

Finally, as a class, we created a shared Generative AI Safety Design Checklist.

Example checklist items

  • [ ] Input limits exist
  • [ ] Output validation exists
  • [ ] A fallback exists
  • [ ] It is clearly labeled as AI-generated
  • [ ] Personal information is not stored
  • [ ] Logs are managed securely
  • [ ] The intended purpose of use is clear
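Several of these items (output validation, a fallback, and the AI-generated label) can be combined at one point in the response path. A minimal Python sketch, assuming the model has been asked to reply in JSON with a `"summary"` key; that key name and the `FALLBACK_MESSAGE` text are illustrative assumptions, not details from the class:

```python
import json
from typing import Optional

FALLBACK_MESSAGE = "Sorry, a reliable answer could not be generated."


def validate_ai_output(raw: str) -> Optional[dict]:
    """Accept the model's reply only if it parses as JSON with the expected key."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if not isinstance(data, dict) or "summary" not in data:
        return None
    return data


def respond(raw_model_reply: str) -> str:
    """Output validation + fallback + labeling, as in the checklist."""
    data = validate_ai_output(raw_model_reply)
    if data is None:
        return FALLBACK_MESSAGE  # fallback path: never show unvalidated output
    return f"[AI-generated] {data['summary']}"  # labeled as AI-generated
```

The design choice here is that invalid output is replaced, never partially shown; the user either sees a validated, labeled answer or an honest fallback.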

■ Class-wide Takeaways

  • AI problems are not “technical only”
  • Design that prevents user misunderstanding is crucial
  • Transparency builds trust
  • Design responsibility lies on the engineering side

■ The Teacher’s Closing Message

“AI is a tool.
But because that tool affects society,
designers carry responsibility.

What you learned today is
the ethics of controlling convenience.

Without it,
even the most advanced technology becomes risky.”


■ Homework (for next week)

  1. Submit a risk countermeasure table (technical / operational) for your AI feature
  2. Propose two additional safety measures the system should add
  3. Draft an “AI Usage Policy” (200–400 Japanese characters)

■ Next Week Preview: Final Integrated Project Design

Next week, we will begin the final project design, integrating everything we’ve learned:
API integration, asynchronous design, generative AI, and safety design.


Week 49 was an important class about learning the responsibility of handling AI.
Students are beginning to develop a perspective not only as engineers, but as designers who engage with society.
