A lot of teams jump straight to building agents.
That is understandable. Agents sound powerful, strategic, and future-ready.
But in many day-to-day engineering and knowledge workflows, the faster path to value is often much simpler:
Define the skill.
Shape the prompt.
Set the instructions.
That combination can deliver a large share of the practical value people expect from an agent, while staying easier to test, improve, govern, and reuse.
This matters if your goal is not just to experiment with AI, but to make it reliably useful in real work.

In this issue, I want to share a practical way to think about it.
A simple mental model
Use three small files:
SKILL.md
This defines what the AI should be good at.
prompt.md
This defines what you want the AI to do in a specific task.
instructions.md
This defines how the AI should behave while doing the task.
If you get these three things right, you often move from generic AI output to work that is much more aligned to your domain, quality expectations, and operating style.
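For teams that call a model API directly, the three files map naturally onto a chat request: the skill and instructions form the stable system context, and the prompt carries the task. A minimal sketch in Python, assuming a generic chat-message format (role names and keys vary by provider) and that the three files sit in one directory:

```python
from pathlib import Path

def build_request(directory: str = ".") -> list[dict]:
    """Combine SKILL.md, instructions.md, and prompt.md into chat messages.

    The skill and instructions become reusable system context;
    prompt.md carries the task-specific input.
    """
    base = Path(directory)
    skill = (base / "SKILL.md").read_text(encoding="utf-8")
    instructions = (base / "instructions.md").read_text(encoding="utf-8")
    prompt = (base / "prompt.md").read_text(encoding="utf-8")
    return [
        {"role": "system", "content": f"{skill}\n\n{instructions}"},
        {"role": "user", "content": prompt},
    ]
```

Only prompt.md changes per task; the other two files are versioned and reused, which is what makes this setup easy to review and govern.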
Why this works
This approach helps because it creates:
faster experimentation
lower implementation effort
more consistent output
easier reuse across teams
simpler review and governance
a quicker path to business value
You do not need to build a full agent every time you want domain-aware AI behavior. In many cases, what you need first is structured context and clear expectations.
What each file should contain
1. SKILL.md
This file captures the capability you want the AI to demonstrate.
Think of it as the role plus the standards.
Examples:
business requirements analyst
software architect
test design reviewer
cybersecurity threat model assistant
deployment review assistant
A good SKILL.md usually includes:
the role the AI is playing
the tasks it is expected to perform
the quality bar
the domain context
the common mistakes it should avoid
2. prompt.md
This file is about the task at hand.
It should include:
the specific problem
the input context
the expected output
any constraints
what good looks like for this task
3. instructions.md
This file shapes the behavior.
It can include guidance such as:
ask clarifying questions when information is missing
highlight ambiguity explicitly
prefer simple designs before complex ones
separate facts from assumptions
provide structured output
call out risks and tradeoffs
This is often the difference between output that is merely fluent and output that is actually useful.
Example 1: Requirements Engineering
Here is a detailed example for requirements engineering.
SKILL.md
# Requirements Engineering Skill
You are a senior requirements engineer.
Your job is to transform business needs into clear, testable, implementation-ready requirements.
Focus on:
- functional requirements
- non-functional requirements
- assumptions
- constraints
- dependencies
- acceptance criteria
- ambiguity detection
Quality standard:
- write clearly and precisely
- avoid vague language
- identify missing information
- make requirements testable
- separate business goals from solution decisions where possible
Common mistakes to avoid:
- mixing requirements with implementation details too early
- accepting unclear terms such as "fast", "user-friendly", or "scalable" without clarification
- ignoring edge cases, error scenarios, or operational constraints
Output style:
- use headings
- use bullet lists
- include a section for open questions
- include a section for risks or ambiguities
prompt.md
# Task Prompt
We need to define requirements for a feature that allows enterprise users to upload compliance documents and receive an AI-generated summary.
Business goal:
Reduce the manual effort needed to review large compliance documents.
Known context:
- Users are internal compliance analysts
- Documents may be PDF or Word files
- Some documents may contain sensitive content
- Users want a summary, key obligations, and flagged risk areas
- The first release is for internal use only
Please produce:
1. functional requirements
2. non-functional requirements
3. assumptions
4. constraints
5. open questions
6. acceptance criteria for the first release
instructions.md
# Behavior Instructions
- Ask clarifying questions if important information is missing
- Point out vague business terms
- Distinguish between confirmed requirements and assumptions
- Prefer testable wording
- Include security and privacy considerations
- Include failure scenarios and edge cases
- Keep the language understandable for both business and engineering stakeholders
What this produces in practice
With this setup, the AI is much more likely to produce something useful such as:
a requirement that documents above a defined file size must be rejected with a clear user message
a non-functional requirement that summaries must be generated within an agreed response time target
an open question about whether documents may leave a regional boundary for processing
an acceptance criterion for handling unreadable or password-protected files
a privacy concern about storing uploaded documents and generated summaries
Without the three-file structure, many teams get a vague paragraph. With it, they are more likely to get a usable first draft.
Immediate lesson for readers
If you work in requirements engineering, do not start with “write requirements for this feature.”
Start with a reusable requirements skill file and a reusable instruction file. Then only change the task prompt per feature.
That gives you better consistency across projects almost immediately.
Example 2: Architecture
Now let us look at architecture.
SKILL.md
# Architecture Skill
You are a senior software architect.
Your job is to evaluate solution options and recommend pragmatic architecture choices aligned with business goals, operational constraints, and security needs.
Focus on:
- system context
- component boundaries
- integration patterns
- scalability
- resilience
- security
- maintainability
- operational complexity
- tradeoffs
Quality standard:
- prefer clear reasoning over buzzwords
- explain tradeoffs explicitly
- recommend simple solutions when they meet the need
- identify risks, assumptions, and dependencies
- consider observability and supportability
Common mistakes to avoid:
- proposing distributed complexity without clear benefit
- ignoring operational cost
- overlooking failure modes
- making recommendations without stating assumptions
Output style:
- use structured sections
- compare options
- give a recommendation with rationale
- include risks and mitigation ideas
prompt.md
# Task Prompt
Evaluate architecture options for a document analysis platform used by internal teams.
The platform must:
- accept document uploads
- extract text
- send content to an AI service for analysis
- store results for later review
- support auditability
- handle moderate growth over the next 18 months
Constraints:
- initial users are internal only
- the team is small
- time to first release matters
- sensitive documents require controlled access
- the platform must integrate with existing identity management
Please provide:
1. a simple system context
2. two or three architecture options
3. pros and cons of each
4. recommended option
5. security and operational considerations
6. risks and next decisions
instructions.md
# Behavior Instructions
- Prefer pragmatic options suitable for a small team
- Do not introduce microservices unless clearly justified
- State assumptions explicitly
- Consider security, auditability, and operational effort
- Include failure modes and support concerns
- Keep explanations understandable to engineering leads and product stakeholders
What this produces in practice
With this setup, the AI is much more likely to compare options such as:
a modular monolith with background jobs
a service-based split only around high-value boundaries
a cloud-native, event-driven approach if scale or decoupling truly requires it
It may also explain why a modular monolith is the better first step for a small team that needs speed, auditability, and controlled complexity.
That is much more useful than a generic answer that jumps immediately to microservices because the topic sounds modern.
Immediate lesson for readers
If you work in architecture, use AI to compare options and expose tradeoffs, not just to generate diagrams or fashionable patterns.
The skill file sets the quality bar.
The instruction file keeps the output grounded.
The prompt file gives the decision context.
That combination makes the conversation much more practical.
Example folder structure
my-cpp-project/
├── .github/
│   ├── instructions/
│   │   ├── cpp-conventions.instructions.md
│   │   ├── cpp-error-handling.instructions.md
│   │   ├── cpp-threading.instructions.md
│   │   ├── cmake-conventions.instructions.md
│   │   ├── test-conventions.instructions.md
│   │   └── architecture-decisions.instructions.md
│   ├── prompts/
│   │   ├── code-review.prompt.md
│   │   ├── create-unit-test.prompt.md
│   │   ├── create-adr.prompt.md
│   │   ├── check-requirements.prompt.md
│   │   └── pipeline-check.prompt.md
│   └── skills/
│       ├── code-review/
│       │   └── SKILL.md
│       ├── architecture-review/
│       │   └── SKILL.md
│       ├── test-review/
│       │   └── SKILL.md
│       ├── requirements-tracing/
│       │   └── SKILL.md
│       └── devops-review/
│           └── SKILL.md
├── docs/
├── src/
└── test/
When custom agents still make sense
None of this means custom agents are a bad idea.
They still make sense in cases such as:
regulated workflows with strict review, traceability, and policy enforcement
deep tool integration across ticketing systems, repositories, scanners, approvals, and deployment pipelines
proprietary planning logic that reflects internal operating models or domain-specific reasoning
multi-step automation where the system must take actions, not just generate outputs
environments where orchestration, memory, and controlled decision flow are essential
In those cases, an agent can provide real value.
But even then, the same principle still applies.
Before building the full agent, define the behavior clearly.
In other words, your skills, prompts, and instructions are not separate from agent design. They are often the foundation of good agent design.
A practical starting point for this week
If you want to apply this immediately, here is a simple exercise:
Pick one recurring task in your work.
Create a SKILL.md file for that task.
Create an instructions.md file that defines quality and behavior.
Create a prompt.md file for one live example.
Compare the output against your usual one-shot prompting approach.
Improve the files and save them for reuse.
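The steps above can be bootstrapped in seconds. A minimal sketch, assuming an `ai-files/` folder and a task named `code-review` — both placeholders to replace with your own layout and recurring task; the template text just mirrors the section headings suggested earlier:

```python
from pathlib import Path

# Placeholder starter content; replace with your own standards.
SKILL_TEMPLATE = """# {task} Skill

Role, tasks, quality bar, domain context, common mistakes to avoid.
"""
INSTRUCTIONS_TEMPLATE = """# Behavior Instructions

- Ask clarifying questions when information is missing
- Separate facts from assumptions
- Provide structured output
"""
PROMPT_TEMPLATE = """# Task Prompt

Problem, input context, expected output, constraints.
"""

def bootstrap(task: str, root: str = "ai-files") -> Path:
    """Create starter SKILL.md, instructions.md, and prompt.md for one task."""
    folder = Path(root) / task
    folder.mkdir(parents=True, exist_ok=True)
    (folder / "SKILL.md").write_text(
        SKILL_TEMPLATE.format(task=task), encoding="utf-8")
    (folder / "instructions.md").write_text(INSTRUCTIONS_TEMPLATE, encoding="utf-8")
    (folder / "prompt.md").write_text(PROMPT_TEMPLATE, encoding="utf-8")
    return folder
```

Run it once per recurring task, then iterate on the generated files as you compare outputs.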
This is one of the lowest-effort ways to make AI more consistent and more useful in software architecture, cybersecurity, and AI-related work.
Closing thought
Many teams think they need agent complexity to get business value from AI.
Often, what they really need first is clarity.
Clarity about what the AI should be good at.
Clarity about the task.
Clarity about how the output should be shaped.
That is why structured skills, prompts, and instructions are often the highest return place to start.
