AI Assisted Greenfield Software Development, Part 3: Generating the Process Instruction Files

by John Miller | December 30, 2025

This is the third post in the series on AI-assisted greenfield software development. It builds on AI-Assisted Greenfield Software Development, Part 1: Business Requirements and the posts that followed. If you haven't read those posts, you might consider starting there.

In Part 1 we defined the high-level business requirements, and in Part 2 we started building out the scaffolding supporting AI code generation. In Part 3, we'll continue to add guidance for AI code generation. Starting with project guidance and process definitions, we'll write prompts that generate project overview instructions, AI code generation guidance, and Git workflow instructions. We'll use these prompts to create the instruction files and then check the guidance for errors and redundancies.

To achieve quality in AI code generation, you need to constrain the AI model and focus it on what you intend to accomplish. Unconstrained, the AI has many options to choose from, and the chance that it chooses the wrong one is at its highest. This is where models are most non-deterministic and most prone to hallucinations and errors.

By specifying custom chat modes, prompts, and instructions, you set the expectations for the intended output and how it's produced. As long as the instructions are clear and unambiguous, the model will respect the constraints. This isn't about slowing down; it's about moving fast safely.

Provenance Is King (ai-assisted-output.instructions.md)

In Part 2 we added our AI Provenance Policy. We can no longer accept code if we don't know where it came from. We've defined a metadata schema that must accompany every AI-generated artifact. Whether it's a Python script, a documentation file, or a diagram, it needs a “birth certificate.” We also want to record the conversation with the AI that led to the generation of an artifact and a summary of the context used to produce it. The AI-assisted output instructions ensure that this provenance is recorded for standards compliance and as breadcrumbs for our AI generation effort. We'll see this in all generated artifacts from here on.
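
To make this concrete, here is a minimal sketch of what such a provenance record might look like, expressed in Python. The field names are illustrative assumptions, not the exact schema defined in ai-assisted-output.instructions.md:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    """Hypothetical 'birth certificate' attached to an AI-generated artifact."""
    artifact_path: str       # file the metadata describes
    model: str               # model that generated the artifact
    chat_id: str             # conversation that produced it
    prompt_file: str         # prompt used to drive generation
    context_summary: str     # short summary of the context supplied to the model
    generated_at: str        # ISO-8601 timestamp

    def as_frontmatter(self) -> str:
        """Render the record as simple key: value frontmatter."""
        lines = [f"{key}: {value}" for key, value in asdict(self).items()]
        return "---\n" + "\n".join(lines) + "\n---\n"

record = ProvenanceRecord(
    artifact_path="src/example_module.py",
    model="example-model-name",
    chat_id="chat-1234",
    prompt_file=".github/prompts/create-example.prompt.md",
    context_summary="Generated from the project overview and coding standards.",
    generated_at=datetime.now(timezone.utc).isoformat(),
)
print(record.as_frontmatter())
```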

The Development Life Cycle (ai-dev-process.instructions.md)

In addition to the AI output instructions, we're adding instructions that formalize the dance between human and machine. It's no longer just “generate and commit.” This file introduces a specific workflow:

  1. AI Code Generation: Must include provenance and pass existing tests.
  2. AI Code Review: The AI reviews its own work (or another agent's work) for security, performance, and style before a human sees it.
  3. Human Code Review: We focus on high-level architecture and intent, letting the AI handle the linting and boilerplate checks.

We also define Quality Gates (a sketch of how they might be checked follows the list). You can't merge unless you have:

  • Technical Gates (CI/CD passing)
  • AI Review (Automated pass)
  • Human Approval (Final sign-off)
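
As a rough illustration, the merge decision reduces to a conjunction of these gates. The names below are assumptions based on the list above, not fields from the actual instruction file:

```python
from dataclasses import dataclass

@dataclass
class MergeCandidate:
    """Hypothetical status of a change waiting to merge."""
    ci_passed: bool          # Technical gate: CI/CD pipeline green
    ai_review_passed: bool   # AI review gate: automated review approved
    human_approved: bool     # Human gate: final sign-off recorded

def can_merge(candidate: MergeCandidate) -> bool:
    """All three quality gates must pass before a merge is allowed."""
    return candidate.ci_passed and candidate.ai_review_passed and candidate.human_approved

if __name__ == "__main__":
    change = MergeCandidate(ci_passed=True, ai_review_passed=True, human_approved=False)
    print("merge allowed" if can_merge(change) else "merge blocked")
```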

This process can be enhanced to increase the ability of AI to produce quality code. For example:

  • Use a different model to review the code than the model that generated it.
  • Use three models to review the code and only pass the review if two of the three models agree on the quality of the code.
  • Use multiple models to generate code and have AI review the pros and cons of each implementation.

Models have biases, not least toward their own work. By using multiple models you mitigate these biases, and the different perspectives can add to the robustness of the implementations.
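
For example, the two-of-three review described above boils down to a vote count. Here is a minimal sketch with hypothetical model names:

```python
def majority_approves(votes: dict[str, bool], threshold: int = 2) -> bool:
    """Return True when at least `threshold` reviewers approve the change."""
    approvals = sum(1 for approved in votes.values() if approved)
    return approvals >= threshold

# Hypothetical review results from three independent models.
review_votes = {
    "model-a": True,
    "model-b": False,
    "model-c": True,
}
print("review passed" if majority_approves(review_votes) else "review failed")
```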

Here is a link to the ai-dev-process.instructions.md file.

The AI-assisted software development process further constrains what the AI can do when working with code in the project. It's important to note that these constraints only apply when AI is actively working. If you want these constraints to apply to human developers on your team, you'll want to add gates to the repository to enforce them across the board.

Git Workflow Integration (git-workflow.instructions.md)

Another process we should nail down is how AI-generated code is incorporated into the life cycle. While AI-generated code should follow the same process as developer-written code, there are some additional requirements we might want to impose.

In the Git Workflow instructions file, we've required a trunk-based development (TBD) model. TBD is the practice of merging small, frequent updates into a single core branch (often called main or trunk) rather than maintaining long-lived feature branches. By requiring developers and AI agents to merge code at least daily, we minimize the “drift” that leads to catastrophic merge conflicts. It forces a discipline of continuous integration, ensuring the codebase is always in a deployable state, and it relies heavily on automated testing to prevent breaking the build for everyone.

While we've chosen TBD for speed, Git Flow remains a popular alternative; its strict structure of develop, release, and feature branches is excellent for managed release cycles but often cumbersome for rapid iteration. GitHub Flow is another common contender, simplifying Git Flow by deploying directly from feature branches, but for our high-velocity AI workflow, the immediate integration of TBD is essential. If you prefer Git Flow or GitHub Flow, you can modify the prompt that generated the instruction file (the prompts are covered later in this post) and create a Git Flow- or GitHub Flow-specific instruction file.

Here are the AI-specific features of the workflow (a small validation sketch follows the list):

  • AI branches: We use a specific naming convention ai/<chat-id>-<description>. This makes it instantly obvious which branches are machine-generated.
  • Commit messages: Must reference the Chat ID and Model.
  • Orphan cleanup: AI branches are ephemeral. If they aren't merged in 7 days, they are deleted.
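
Both the branch naming convention and the seven-day orphan rule can be expressed as small checks. The regular expression and helper names below are assumptions, since the instruction file describes the policy rather than an implementation:

```python
import re
from datetime import datetime, timedelta, timezone

AI_BRANCH_PATTERN = re.compile(r"^ai/[A-Za-z0-9]+-[a-z0-9-]+$")
MAX_BRANCH_AGE = timedelta(days=7)

def is_valid_ai_branch(name: str) -> bool:
    """Check that a branch follows the ai/<chat-id>-<description> convention."""
    return bool(AI_BRANCH_PATTERN.match(name))

def is_orphaned(last_commit: datetime, now: datetime | None = None) -> bool:
    """An unmerged AI branch with no activity for seven days is a cleanup candidate."""
    now = now or datetime.now(timezone.utc)
    return now - last_commit > MAX_BRANCH_AGE

print(is_valid_ai_branch("ai/chat-1234-add-login-endpoint"))        # True
print(is_valid_ai_branch("feature/add-login-endpoint"))             # False
print(is_orphaned(datetime.now(timezone.utc) - timedelta(days=10))) # True
```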

Here is a link to the git-workflow.instructions.md file.

Like the AI development instructions, some of these requirements should be enforced through repository or pipeline configuration. This is called out in the Enforcement section of the instruction file.

The Big Picture (project-overview.instructions.md)

Finally, we created the project overview instructions to tie everything together. It acts as the entry point for any new agent (or human) joining the project, pointing them to the standards and critical files they need to respect.

Here is a link to the project-overview.instructions.md file.

The project overview instructions are a point-in-time snapshot of the project. As the project evolves, we'll update the project overview so that it always reflects the current state of the project.

Checking the Context

When adding or changing instruction files, it's important to verify that all of the instructions are complementary. The check-context.prompt.md prompt file examines the current context for conflicting instructions, factual inconsistencies, logical contradictions, priority conflicts, technical incompatibilities, terminology gaps, and redundancies.

It's a good idea to submit this prompt to a different model than the one used to generate the instruction files, to avoid the model favoring its own output.
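
One way to run this check against a second model is to assemble the instruction files and the check prompt into a single request you can paste into (or send to) that model. This is a minimal sketch; the directory and file locations are assumptions about the repository layout:

```python
from pathlib import Path

# Hypothetical locations; adjust to match your repository layout.
INSTRUCTIONS_DIR = Path(".github/instructions")
CHECK_PROMPT = Path(".github/prompts/check-context.prompt.md")

def build_check_request() -> str:
    """Concatenate the check-context prompt with every instruction file
    so the combined text can be submitted to a second, independent model."""
    parts = [CHECK_PROMPT.read_text(encoding="utf-8")]
    for path in sorted(INSTRUCTIONS_DIR.glob("*.instructions.md")):
        parts.append(f"\n\n<!-- {path.name} -->\n{path.read_text(encoding='utf-8')}")
    return "".join(parts)

if __name__ == "__main__":
    # Write the assembled request to a file you can review or submit elsewhere.
    Path("context-check-request.md").write_text(build_check_request(), encoding="utf-8")
```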

Why This Matters

By codifying these rules into .instructions.md files, we are doing something powerful: We are programming the process.

When we load these instructions into our AI context, the AI knows how to behave. It knows it needs to generate metadata. It knows it needs to create a log entry. It becomes a compliant member of the team, rather than a chaotic tool.

The Prompt Files

Looking at the metadata for each of these artifacts you can see that a prompt file was used to create each instruction file. The prompt files, in concert with the instruction-file-generation.instructions.md instructions, define the contents of the generated instruction files.

While the instructions could have been created directly, the benefit of using a prompt file is that it codifies the structure and contents of the generated instruction files. If the process or tooling changes, the prompt can be modified and the instruction files updated.

This helps to maintain the instruction files. For example, if the Git process changes from trunk-based to Git Flow or GitHub Flow, the prompt can be updated and the instructions regenerated.

More importantly, the process of creating the instruction files is documented and committed to the repo.

How were the prompt files created? With AI assistance, of course. Using a simple prompt like:

Create a prompt file to create a Project Overview instruction file containing the high-level context for the project. The instruction file should be optimized for AI agents and to minimize the token requirements.

When submitted, AI will create a prompt file that conforms to the prompt-file-generation.instructions.md instructions and meets the intent of creating the project overview instructions.

Here are links to the prompt files that created the instruction files: create-ai-dev-process.prompt.md, create-git-workflow-instructions.prompt.md, and create-project-overview.prompt.md

What's Next?

This concludes the work of establishing the core instructions we need and moves us closer to starting an implementation. In Part 4 we'll add instructions that define the architecture and its implementation, establishing a path the AI can follow when implementing the solution.

If you're interested in seeing the files, you can find them in the Academia GitHub repository. If you'd like to follow along, fork this repo and use the fork to build your own implementation.

Feedback Loop

Feedback is always welcome. Please direct it to john.miller@codemag.com

Disclaimer

AI contributed to the writing of this blog post, but humans reviewed it, refined it, enhanced it, and gave it soul.

Prompts:

  • Write a blog post describing the changes made to the Part-Three branch of the zeus.academia.3b repo
  • Write a paragraph explaining trunk-based development. What are other popular methods?