agency-agents Explained in Depth: What This AI Agent Collection Is and How to Use It to Call “Expert Teams” with Markdown
- agency-agents is an open-source project that distributes many "specialist AI agent definition" files as Markdown (in other words, an advanced form of a prompt collection).
- Its purpose is to reduce the need to explain roles to AI from scratch every time, and to make output quality, procedures, and decision criteria reproducible in a fixed form.
- It can be used with Claude Code sub-agents, agent features in Cursor and Copilot-style tools, Gemini CLI, and other environments through the idea of “placing definition files and calling them.”
- It is especially useful for preventing gaps in deep-dive perspectives, standardizing team practices, fixing review criteria, and running role division on the AI side (design → implementation → testing → documentation).
What exactly is agency-agents?
agency-agents (sometimes also called The Agency / Agency Agents) is a repository that organizes carefully designed AI agent personas by specialty—covering role, tone, thought process, deliverables, and evaluation criteria—as Markdown files. Its biggest feature is that it goes far beyond short role prompts like “You are an expert in XX” and instead structures the following elements from the start:
- The agent’s mission (what it is meant to achieve)
- Decision criteria (what it prioritizes and what it avoids)
- Work process (steps such as research → design → implementation → validation)
- Concrete deliverables (code, tests, checklists, specifications, review comments, etc.)
- Success criteria (what counts as a satisfactory result)
Official explanations also describe each agent as having "specialization," "personality/style," "focus on deliverables," and "production-oriented workflows." This makes it easier to see each definition not as a generic prompt but as a design with a strong procedural and quality framework.
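To make the structure concrete, here is a hypothetical, heavily abbreviated sketch of what such a definition file could look like. The frontmatter keys, section names, and wording below are illustrative only and are not copied from the agency-agents repository; check the actual files for the real structure.

```markdown
---
name: qa-engineer            # illustrative identifier, not from the repo
description: Designs and reviews test strategy for code changes
---
# QA Engineer

## Mission
Catch regressions before release; make risk visible, not zero.

## Decision criteria
Prioritize user-facing breakage and data loss over style issues.

## Process
1. Ask for missing context: environment, target browsers, data size, CI.
2. Derive test perspectives (unit, integration, E2E).
3. List edge cases and regression risks.

## Deliverables
A test plan, an edge-case list, and concrete review comments.

## Success criteria
Every acceptance criterion is covered by at least one automated test.
```

Each of the five bullet points above (mission, criteria, process, deliverables, success criteria) maps to one section, which is what distinguishes this from a one-line role prompt.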
In addition, community articles have introduced it as "providing 144 agent definitions in Markdown and supporting multiple tools," reflecting the growing practice of reusing the same "expert framework" across tools.
How is it different from a normal prompt?
agency-agents tends to be especially useful because it can solve several common real-world problems at once.
1) It reduces omissions (because perspective templates are built in)
For example, in frontend development, once something looks like it works, it is easy to think, "Done." But in real work, there are many perspectives to cover: Core Web Vitals, accessibility, state management, error handling, i18n, analytics, E2E, and so on. With a generic prompt, you rely on what the AI happens to remember in the moment, so omissions occur. agency-agents often builds the relevant checkpoints for each field in from the start, which anchors the baseline of the AI's reasoning.
2) It improves reproducibility (because differences in human instructions shrink)
It is hard for teams if only the people who are “good at prompting AI” get good results. By making agent definitions a shared asset, it becomes easier to align on a minimum quality framework, and review standards also become easier to unify.
3) It stabilizes the “format of deliverables” (which makes them easier to review)
Problems like “Just pasting code is not enough,” “I cannot tell the impact range,” or “There are no test considerations” become less common. That is because the agent side already defines what should be output as a deliverable.
How do you use it? The basic idea
The use of agency-agents is very simple. Conceptually, it is a two-step process:
- Place the agent definitions (Markdown) somewhere your AI tool can reference them
- Call the role during work and have it produce deliverables
Even when people say “install,” it is less about integrating a library and more about placing definition files. In fact, for Claude Code, it is widely shared in the community that you can copy them into a designated user directory and use them as sub-agents.
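As a sketch of what "placing definition files" means in practice for Claude Code: community reports point to a user-level agents directory under `~/.claude/`, but the exact path can differ by tool version, so treat both directory names below as assumptions to verify against your tool's documentation. The source checkout location is also a placeholder.

```shell
# Sketch: copy agent definition Markdown files into the directory
# where Claude Code looks for user-level sub-agents.
# Both paths are assumptions -- verify them for your tool version.
AGENTS_DIR="${HOME}/.claude/agents"   # reported user-level location
SRC_DIR="agency-agents/agents"        # placeholder checkout path

mkdir -p "$AGENTS_DIR"
# Copy every definition; -n avoids clobbering files you edited locally.
for f in "$SRC_DIR"/*.md; do
  [ -e "$f" ] && cp -n "$f" "$AGENTS_DIR/"
done
ls "$AGENTS_DIR"
```

After this, the roles typically become callable by name from the tool's sub-agent list; nothing is compiled or installed in the package-manager sense.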
Below is a “tool-by-tool” overview of common usage patterns. The details may change as tools evolve, so it is best to understand the concept first.
How to use it with Claude Code (calling it as a sub-agent)
Claude Code works well with sub-agent workflows (switching and running roles in parallel), and agency-agents fits very naturally into that model. A typical workflow looks like this:
- Load the agent definitions against your repository
- Call roles such as “Frontend Developer,” “Backend Developer,” “QA Engineer,” “Tech Writer,” and so on as needed
- Each role returns “its own deliverable,” and the human integrates them
Example: finishing one feature through “role division”
Suppose what you want to do is “add two-factor authentication to login.” In that case, the ideal is not that AI writes everything at once, but that quality is solidified step by step.
- Product / PM-style agent
- Acceptance criteria (including failure patterns)
- Screen transitions
- Security requirements (recovery, lockout)
- Backend Developer agent
- API design (endpoints, error codes, rate limiting)
- DB design (token lifetime, invalidation)
- Audit log policy
- Frontend Developer agent
- UI implementation (state, loading, error display)
- Accessibility (focus, screen reader behavior)
- Consistency with existing forms
- QA Engineer agent
- Test perspectives (unit, integration, E2E)
- Edge cases (timeouts, retries, device changes)
- Regression risks
- Tech Writer agent
- Setup instructions
- Troubleshooting
- Operational procedures
When you have them respond through these “expert frameworks,” the human side can focus more on integration and decision-making.
How to use it with Cursor / Copilot / Gemini CLI, etc. (using agent definitions as “instruction assets”)
In environments like Cursor, Copilot-style tools, or Gemini CLI, each tool has slightly different agent mechanisms, but agency-agents becomes useful in the following two ways.
Method A: Register it in the tool’s “agent feature” and call it by name
If the tool supports “custom agents” or “sub-agents,” placing the definition files in the designated location may let you call them from a list. Community articles also mention support for multiple tools.
Method B: Use the definition Markdown as a “reference document” and lock the role
Even if the tool’s registration feature is weak, you can still get value by pasting the definition Markdown into the prompt and fixing the role at the beginning of the task with something like “Act according to this definition.” Because long-context models are now common, this kind of “role-document workflow” has become practical.
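The "role-document workflow" of Method B can be sketched as plain file concatenation: prepend the definition Markdown to your task and feed the result to whatever CLI or chat window you use. Everything below (the file contents, file names, and the commented-out CLI command) is a placeholder, not real agency-agents content or a real tool invocation.

```shell
# Sketch of Method B: fix the role by prepending the definition
# Markdown to the task prompt. All names here are placeholders.
cat > qa-engineer.md <<'EOF'
# QA Engineer
Act as a QA engineer: ask for missing context, then list test
perspectives, edge cases, and regression risks.
EOF

cat > task.md <<'EOF'
Act according to the definition above.
Task: review the login form change for test coverage gaps.
EOF

# The combined file is the whole trick: role first, task second.
cat qa-engineer.md task.md > prompt.md
# Then feed prompt.md to your tool, e.g.:
#   your-ai-cli < prompt.md      # placeholder command, not a real CLI
```

Because the role travels as an ordinary file, the same `prompt.md` works across tools with different (or no) agent-registration features.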
What becomes more convenient? Practical benefits from a real-world perspective
Here is a more concrete explanation of how agency-agents can change actual work.
1) It reduces ambiguity in requirements (because the AI knows what it should ask)
Good agent definitions often include “confirm missing information” before starting work. For example, a QA agent may ask about the test environment, target browsers, data size, existing CI, and so on. That greatly reduces rework later.
2) It fixes review perspectives (because “what to look at” becomes consistent)
One painful aspect of team development is that reviews become person-dependent, so “different people point out different things.” When you use something like agency-agents as a base, perspectives such as “security,” “performance,” or “accessibility” are more likely to appear consistently at the same level of detail. As a result, the quality of PRs becomes more stable.
3) It clarifies “what humans should think about”
The more you delegate to agents, the clearer human work becomes. For example:
- Policy decisions (trade-offs, priorities)
- Risk acceptance (what to leave out)
- Product judgment (UX, legal, branding)
- Final approval (release decision)
The more the AI takes over “tasks,” the more clearly humans can see the “decisions” they must make.
4) Division of labor can run "asynchronously" (so even a one-person setup feels like a team)
Even for solo development, it is hard to switch in your head between “PM → design → implementation → QA → documentation.” agency-agents helps with those switches through “persona and procedure.” Even in a small setup, it becomes easier to work in a team-like way.
5) Onboarding becomes easier (because you can hand newcomers a “framework”)
It is difficult to explain “this is how our project thinks” verbally to new team members. If you combine agent definitions with your company rules (naming, exceptions, logging, monitoring, PR rules), the AI can act more effectively as a support system for newcomers.
A practical usage pattern: 3 steps that are less likely to fail
agency-agents is convenient, but if used poorly, it can just create “a pile of plausible documents.” The following order is recommended.
Step 1: Fix the objective and acceptance criteria first
Before handing things over to the agent, it is enough for you to decide only these:
- Objective: what you want to achieve
- Scope: what can be changed (files / modules / features)
- Acceptance criteria: minimum conditions for tests, compatibility, performance, UX
If this part is vague, the agent becomes "intelligently lost": it produces confident, polished output aimed at the wrong target.
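The three items above fit on a short "task brief" you hand to the agent before it starts. The file name, section labels, and every detail below are hypothetical, chosen to match the two-factor authentication example earlier in the article; they are not part of agency-agents.

```markdown
<!-- task-brief.md: hypothetical brief handed to the agent first -->
# Task brief: add TOTP-based two-factor auth to login

## Objective
Users can enable 2FA and are prompted for a code at sign-in.

## Scope
May change: auth module, login UI, user settings page.
Must not change: session storage format, public API contracts.

## Acceptance criteria
- All existing auth tests pass; new unit tests cover TOTP verification
- Lockout after 5 failed codes, with a documented recovery path
- No regression in login latency
```

Writing this once is usually faster than untangling a deliverable built on the wrong assumptions.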
Step 2: Use only 2–3 roles and iterate on something small
If you call 10 agent roles from the beginning, integration becomes difficult. A good starting set is:
- Implementation role (Frontend or Backend)
- QA role
- Tech Writer role (if needed)
With just these three, first complete one feature.
Step 3: Fix the output format
For example, have the agent always produce these:
- Summary of changes (3 lines)
- Impact scope (bullet list)
- Risks and countermeasures (bullet list)
- Added / updated tests (list)
- Rollback procedure (if necessary)
This alone makes review much easier.
Practical example: adapting agent definitions to your company’s own style
agency-agents is useful as-is, but its true power appears when you mix in your company’s own working style. For example, prepare a separate “additional rules” file like the one below, and make every agent read it as a prerequisite.
Example: company-wide common rules
- Exceptions: APIs must always return both an error code and a message
- Logs: personal information must be masked, and correlation IDs are mandatory
- Tests: all new features must add at least a unit test or an integration test
- Performance: no N+1 queries in list APIs, and query plans (EXPLAIN output) must be kept on record
- UI: keyboard operation and focus movement must be checked for forms
- Documentation: execution steps must be added to the README
When you combine these “common rules” with agency-agents’ “expert frameworks,” the AI gets closer to becoming an expert for your own company.
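Mechanically, "make every agent read the rules as a prerequisite" can be as simple as prepending one shared rules file to each definition before placing it. The directory layout and file contents below are placeholders for illustration, not the repository's actual structure.

```shell
# Sketch: prepend company-wide rules to every agent definition.
# All file and directory names are placeholders.
cat > company-rules.md <<'EOF'
# Company-wide rules (read before acting)
- APIs return both an error code and a message
- Logs mask personal information and carry a correlation ID
EOF

# Stand-in for the upstream definitions you checked out.
mkdir -p agents-upstream agents-local
printf '# Backend Developer\n' > agents-upstream/backend-developer.md

# Emit a local copy of each definition with the rules on top.
for f in agents-upstream/*.md; do
  cat company-rules.md "$f" > "agents-local/$(basename "$f")"
done
```

Keeping the rules in one file means updating them in one place and regenerating, rather than editing every definition by hand.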
Important cautions: useful, but not a silver bullet
Finally, here are a few cautions for using it safely.
1) Agent definitions do not guarantee correctness
Even if the definition is good, the model may still make incorrect assumptions or produce code that does not work. Always make sure the output converges through tests, linting, type checks, and reviews to satisfy the acceptance criteria.
2) Information handling and permissions
What you let an agent read is itself an information management issue. For confidential information, personal data, customer data, keys, or internal-only specifications, it is safer to decide in advance how to provide them and how logs should be handled.
3) Do not create “mountains of deliverables”
If the agent produces too much documentation, humans stop being able to read it. Keep deliverables to the minimum necessary for review and operation, and trim templates when needed.
Summary: agency-agents is a mechanism for using AI as a “team framework”
agency-agents is not an AI model itself, but a role asset for fixing how AI thinks, how it proceeds, and what it outputs as deliverables.
That is why the biggest benefits are not just automation, but improvements in the quality of development, such as:
- Fewer omissions and more consistent perspectives
- A connected flow from specification → implementation → validation → documentation
- Deliverables that are easier to review
- The ability to divide work like a team even when working alone
- A consistent framework that makes it easier for newcomers and non-engineers to proceed
If you feel that “AI’s output is shallow,” “instructions vary every time,” or “review is exhausting,” agency-agents is likely to fit very well. First, choose one task that occurs frequently in your environment (for example, a small feature addition, bug fix, or adding tests) and try running it with just 2–3 roles. Then gradually mix in your own company rules. That is when the experience starts to stabilize.