Interesting project. I am working on a similar solution. Eventually you will run into the following issues with harnesses, so I'm curious how your project handles these questions:
1) Can you define a process other than build -> review -> etc.? More importantly, can you define a more complex process? For example, "for each review finding, do X", or going from end-to-end test back to build.
2) In your setup, how does a sub-agent prove, undeniably, that its work is complete? Does the "lead" agent just look at the output? If so, the lead effectively becomes an implicit reviewer for all agents, so I don't follow why you would need a separate review step.
3) Can you have steps in between these agentic processes that do not involve agents?
For 1), yes: there is an "observe" step in the process where, once the project is deployed, it observes and reconciles what happens against what should happen based on the specs.
I believe more variants are bound to emerge as harnesses become more prevalent. We've only scratched the surface, so don't generalize over the process yet.
When the scheduler tells an agent to do something, the next step in the process only believes the work is done if the agent's status is marked 'posted'. Statuses like 'ready_to_post' or 'draft_verified_awaiting_review' are actually errors that the system needs to fix on the following attempt.
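A minimal sketch of that completion check, assuming a scheduler that inspects a status string. The status names ('posted', 'ready_to_post', 'draft_verified_awaiting_review') come from the comment above; the function and set names are hypothetical.

```python
# Only one status counts as done; everything else triggers a repair attempt.
DONE = {"posted"}

def is_complete(status: str) -> bool:
    """The next step only believes the work is done for a terminal
    'posted' status; intermediate statuses are treated as errors
    to fix on the following attempt, not as partial credit."""
    return status in DONE

for status in ("posted", "ready_to_post", "draft_verified_awaiting_review"):
    print(status, "->", "done" if is_complete(status) else "retry")
```

The point of the strict set (rather than, say, a prefix match) is that any new in-between status an agent invents is rejected by default.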
The trickiest part was handling runs that get stopped without anything actually breaking. You need statuses that say "this happened, and it isn't what we wanted", for example 'blocked_quota', 'blocked_no_credentials', or 'skipped_anti_bunching'. Without those, the main program will retry endlessly and spend all your money.
the typed handoff in ahk is the right primitive imo. the discipline on top: agents never write half-states, and every run terminates in a documented terminal status, success or otherwise.
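The terminal-status discipline above can be sketched as a typed handoff, assuming a scheduler with a bounded retry budget. The status names come from the thread; the enum, `should_retry`, and the retry cap are hypothetical illustration.

```python
from enum import Enum

class Terminal(Enum):
    """Every run ends in exactly one documented terminal status."""
    POSTED = "posted"                                  # success
    BLOCKED_QUOTA = "blocked_quota"                    # expected stop
    BLOCKED_NO_CREDENTIALS = "blocked_no_credentials"  # expected stop
    SKIPPED_ANTI_BUNCHING = "skipped_anti_bunching"    # deliberate skip
    FAILED = "failed"                                  # genuine error

# Only genuine failures are worth another attempt; blocked/skipped runs
# already said "this happened and it isn't what we wanted", so retrying
# them just burns money.
RETRYABLE = {Terminal.FAILED}

def should_retry(status: Terminal, attempts: int, max_attempts: int = 3) -> bool:
    """Retry genuine failures only, and only within a budget,
    so the main program can never loop forever on a blocked run."""
    return status in RETRYABLE and attempts < max_attempts
```

Making the status an enum (rather than a free-form string) is what rules out half-states: an agent cannot hand back something the type doesn't name.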
How does this differ from RooCode and similar agent orchestration tools?