
Meet the Workflows: Project Coordination

Peli de Halleux

Welcome to the final stop in our tour through the agents in Peli’s Agent Factory!

We’ve journeyed through 15 categories of workflows - from triage bots to code quality improvers, from security guards to creative poets, culminating in advanced analytics that use machine learning to understand agent behavior patterns. Each workflow handles its individual task admirably.

But here’s the ultimate challenge: how do you coordinate multiple agents working toward a shared goal? How do you break down a large initiative like “migrate all workflows to a new engine” into trackable sub-tasks that different agents can tackle? How do you monitor progress, alert on delays, and ensure the whole is greater than the sum of its parts? This final post explores planning, task-decomposition and project coordination workflows - the orchestration layer that proves AI agents can handle not just individual tasks, but entire structured projects requiring careful coordination and progress tracking.

These agents coordinate multi-agent plans and projects:

The Plan Command provides on-demand task decomposition: developers can comment /plan on any issue to get an AI-generated breakdown into actionable sub-issues that agents can work on.
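
To make the pattern concrete, here’s a minimal sketch of the sub-issue creation step against the standard GitHub REST API. The repo, parent issue number, and task list are hypothetical - in the real workflow the tasks come from the agent’s decomposition of the issue body, not a hard-coded list.

```python
import os
import requests

API = "https://api.github.com"
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

def create_sub_issues(owner: str, repo: str, parent: int, tasks: list[str]) -> None:
    """Create one sub-issue per task, then post a checklist on the parent."""
    links = []
    for task in tasks:
        r = requests.post(
            f"{API}/repos/{owner}/{repo}/issues",
            headers=HEADERS,
            json={"title": task, "body": f"Sub-task of #{parent}"},
        )
        r.raise_for_status()
        links.append(f"- [ ] #{r.json()['number']}")
    # The checklist turns the parent into a trackable epic issue.
    requests.post(
        f"{API}/repos/{owner}/{repo}/issues/{parent}/comments",
        headers=HEADERS,
        json={"body": "AI-generated plan:\n" + "\n".join(links)},
    ).raise_for_status()

# Hypothetical invocation - the task list would come from the agent.
create_sub_issues("octo-org", "demo", 42, ["Inventory existing workflows", "Port the triage bot"])
```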

The Workflow Health Manager acts as a project manager, monitoring progress across campaigns and alerting when things fall behind. The Discussion Task Miner takes a different approach - it continuously scans GitHub Discussions (where code quality observations often emerge) and extracts actionable improvement tasks, automatically creating issues so insights don’t get lost in conversation threads.
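
Here’s a rough sketch of the discussion-scanning half, assuming GitHub’s GraphQL API for Discussions. The keyword filter is a crude stand-in for the agent’s judgment about what counts as actionable; each hit would then be filed as an issue, as in the sketch above.

```python
import os
import requests

QUERY = """
query($owner: String!, $name: String!) {
  repository(owner: $owner, name: $name) {
    discussions(first: 25, orderBy: {field: UPDATED_AT, direction: DESC}) {
      nodes { title body url }
    }
  }
}
"""

def mine_discussions(owner: str, name: str) -> list[dict]:
    """Fetch recent discussions and keep the ones that sound actionable."""
    r = requests.post(
        "https://api.github.com/graphql",
        headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"},
        json={"query": QUERY, "variables": {"owner": owner, "name": name}},
    )
    r.raise_for_status()
    nodes = r.json()["data"]["repository"]["discussions"]["nodes"]
    # Crude keyword heuristic standing in for the agent's judgment.
    signals = ("we should", "todo", "refactor", "flaky", "tech debt")
    return [d for d in nodes if any(s in d["body"].lower() for s in signals)]
```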

We learned that individual agents are great at focused tasks, but orchestrating multiple agents toward a shared goal requires careful architecture. Project coordination isn’t just about breaking down work - it’s about discovering work (Task Miner), planning work (Plan Command), and tracking work (Workflow Health Manager).

These workflows implement patterns like epic issues, progress tracking, and deadline management. They prove that AI agents can handle not just individual tasks, but entire projects when given proper coordination infrastructure.
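
The epic-issue pattern is simple to illustrate: sub-tasks live as Markdown task-list checkboxes on a parent issue, and progress is just the ratio of checked boxes. A minimal sketch (the issue body here is made up):

```python
import re

def epic_progress(issue_body: str) -> tuple[int, int]:
    """Count completed vs. total task-list items ("- [x]" / "- [ ]") in an epic issue."""
    boxes = re.findall(r"^\s*[-*] \[([ xX])\]", issue_body, flags=re.MULTILINE)
    done = sum(1 for b in boxes if b.lower() == "x")
    return done, len(boxes)

body = """
- [x] #101 Port triage bot
- [ ] #102 Port CI doctor
- [ ] #103 Port changelog agent
"""
done, total = epic_progress(body)
print(f"{done}/{total} sub-tasks complete ({done / total:.0%})")  # 1/3 (33%)
```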


Throughout this 16-part journey, we’ve explored workflows ranging from simple triage bots to sophisticated multi-phase improvers, from security guards to creative poets, from individual task automation to organization-wide orchestration.

The key insight? AI agents are most powerful when they’re specialized, well-coordinated, and designed for their specific context. No single agent does everything - instead, we have an ecosystem where each agent excels at its particular job, and they work together through careful orchestration.

We’ve learned that observability is essential, that incremental progress beats heroic efforts, that security needs careful boundaries, and that even “fun” workflows can drive meaningful engagement. We’ve discovered that AI agents can maintain documentation, manage campaigns, analyze their own behavior, and continuously improve codebases - when given the right architecture and guardrails.

As you build your own agentic workflows, remember: start small, measure everything, iterate based on real usage, and don’t be afraid to experiment. The workflows we’ve shown you evolved through experimentation and real-world use. Yours will too.

This is part 16 (final) of a 16-part series exploring the workflows in Peli’s Agent Factory.

Meet the Workflows: Advanced Analytics & ML

Peli de Halleux

Time to get into data analytics at Peli’s Agent Factory!

In our previous post, we explored organization and cross-repo workflows that operate at enterprise scale - analyzing dozens of repositories together to find patterns and outliers that single-repo analysis would miss. We learned that perspective matters: what looks normal in isolation might signal drift at scale.

Beyond tracking basic metrics (run time, cost, success rate), we wanted deeper insights into how our agents actually behave and how developers interact with them. What patterns emerge from thousands of agent prompts? What makes some PR conversations more effective than others? How do usage patterns reveal improvement opportunities? This is where we brought out the big guns: machine learning, natural language processing, sentiment analysis, and clustering algorithms. Advanced analytics workflows don’t just count things - they understand them, finding patterns and insights that direct observation would never reveal.

These agents use sophisticated analysis techniques to extract insights:

The Prompt Clustering Analysis uses machine learning to categorize thousands of agent prompts, revealing patterns we never noticed (“oh, 40% of our prompts are about error handling”).
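
A minimal sketch of the idea, using TF-IDF vectors and k-means from scikit-learn - the actual pipeline may differ, and the prompts and cluster count here are illustrative:

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

prompts = [
    "Fix the error handling in the retry loop",
    "Add error handling around the API call",
    "Write a changelog entry for the release",
    "Summarize this PR for the changelog",
]

# Embed prompts as TF-IDF vectors, then group them into k clusters.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(prompts)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Inspect the top terms per cluster to label the categories.
terms = vectorizer.get_feature_names_out()
for i, center in enumerate(km.cluster_centers_):
    top = [terms[j] for j in center.argsort()[-3:][::-1]]
    print(f"cluster {i}: {top}")
```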

The Copilot PR NLP Analysis performs sentiment and linguistic analysis on PR conversations - it found that PRs with questions in the title get reviewed faster.
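
Sentiment scoring itself can be lightweight. Here’s a sketch using NLTK’s VADER analyzer (one option among many - not necessarily what the workflow uses) on a couple of invented PR comments:

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
sia = SentimentIntensityAnalyzer()

comments = [
    "Nice catch, this fixes the flaky test!",
    "This change breaks the build again, please revert.",
]
for c in comments:
    # compound is a normalized score in [-1, 1]; > 0 leans positive.
    score = sia.polarity_scores(c)["compound"]
    print(f"{score:+.2f}  {c}")
```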

The Session Insights workflow analyzes how developers interact with Copilot agents, identifying common patterns and failure modes. What we learned: meta-analysis is powerful - using AI to analyze AI systems reveals insights that direct observation misses.

These workflows helped us understand not just what our agents do, but how they behave and how users interact with them.

Next Up: Campaigns & Project Coordination Workflows

We’ve reached the final stop: coordinating multiple agents toward shared, complex goals across extended timelines.

Continue reading: Campaigns & Project Coordination Workflows →


This is part 15 of a 16-part series exploring the workflows in Peli’s Agent Factory.

Meet the Workflows: Organization & Cross-Repo

Peli de Halleux

Let’s zoom out at Peli’s Agent Factory!

In our previous post, we explored multi-phase improver workflows - our most ambitious agents that tackle big projects over multiple days, maintaining state and making incremental progress. These workflows proved that AI agents can handle complex, long-running initiatives when given the right architecture.

But all that sophisticated functionality has focused on a single repository. What happens when you zoom out to organization scale? What insights emerge when you analyze dozens or hundreds of repositories together? What looks perfectly normal in one repo might be a red flag across an organization. Organization and cross-repo workflows operate at enterprise scale, requiring careful permission management, thoughtful rate limiting, and different analytical lenses. Let’s explore workflows that see the forest, not just the trees.

These agents work at organization scale, across multiple repositories:

Scaling agents across an entire organization changes the game. The Org Health Report analyzes dozens of repositories at once, identifying patterns and outliers (“these three repos have no tests, these five haven’t been updated in months”). The Stale Repo Identifier helps with organizational hygiene - finding abandoned projects that should be archived or transferred.

We learned that cross-repo insights are different - what looks fine in one repository might be an outlier across the organization. These workflows require careful permission management (reading across repos needs organization-level tokens) and thoughtful rate limiting (you can hit API limits fast when analyzing 50+ repos). The Ubuntu Image Analyzer is wonderfully meta - it documents the very environment that runs our agents.
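
For a feel of the mechanics, here’s a hedged sketch of an org-wide scan against the GitHub REST API: it pages through every repository in an organization, backs off when the rate limit runs low, and flags repos with no recent pushes. The org name and staleness cutoff are placeholders.

```python
import os
import time
import requests

API = "https://api.github.com"
HEADERS = {
    # Reading across repos needs an org-scoped token.
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

def org_repos(org: str):
    """Yield every repo in the org, pausing when the rate limit runs low."""
    url = f"{API}/orgs/{org}/repos?per_page=100"
    while url:
        r = requests.get(url, headers=HEADERS)
        r.raise_for_status()
        if int(r.headers.get("X-RateLimit-Remaining", "1")) < 10:
            reset = int(r.headers["X-RateLimit-Reset"])
            time.sleep(max(0, reset - time.time()))  # wait for the window to reset
        yield from r.json()
        url = r.links.get("next", {}).get("url")  # follow Link-header pagination

CUTOFF = "2024-01-01T00:00:00Z"  # placeholder staleness threshold
stale = [r["full_name"] for r in org_repos("my-org")
         if r["pushed_at"] and r["pushed_at"] < CUTOFF]
print(f"{len(stale)} repos with no pushes since {CUTOFF}")
```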

Next Up: Advanced Analytics & ML Workflows

Cross-repo insights reveal patterns, but we wanted to go even deeper - using machine learning to understand agent behavior.

Continue reading: Advanced Analytics & ML Workflows →


This is part 14 of a 16-part series exploring the workflows in Peli’s Agent Factory.

Meet the Workflows: Multi-Phase Improvers

Peli de Halleux

Let’s continue our journey through Peli’s Agent Factory!

In our previous post, we explored infrastructure workflows - the meta-monitoring layer that validates MCP servers, checks tool configurations, and ensures the platform itself stays healthy. These workflows watch the watchers, providing visibility into the invisible plumbing.

Most workflows we’ve seen so far run once and complete: analyze this PR, triage that issue, test this deployment. They’re ephemeral - they execute, produce results, and disappear. But what about projects that are too big to tackle in a single run? What about initiatives that require research, setup, and incremental implementation? Traditional CI/CD is built for stateless execution, but we discovered something powerful: workflows that maintain state across days, working a little bit each day like a persistent team member who never takes breaks. Welcome to our most ambitious experiment - multi-phase improvers that prove AI agents can handle complex, long-running projects.

These are some of our most ambitious agents - they tackle big projects over multiple days:

This is where we got experimental with agent persistence and multi-day workflows. Traditional CI runs are ephemeral, but these workflows maintain state across days using repo-memory. The Daily Perf Improver runs in three phases - research (find bottlenecks), setup (create profiling infrastructure), implement (optimize). It’s like having a performance engineer who works a little bit each day.

The Daily Backlog Burner systematically tackles our issue backlog - one issue per day, methodically working through technical debt. We learned that incremental progress beats heroic sprints - these agents never get tired, never get distracted, and never need coffee breaks. The PR Fix workflow is our emergency responder - when CI fails, invoke /pr-fix and it investigates and attempts repairs.
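
The repo-memory pattern boils down to a small state file that each daily run reads, updates, and commits back. A minimal sketch - the file path and phase names mirror the description above, but the details are illustrative:

```python
import json
from pathlib import Path

STATE_FILE = Path(".agent-memory/perf-improver.json")  # committed back to the repo
PHASES = ["research", "setup", "implement"]

def load_state() -> dict:
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text())
    return {"phase": "research", "findings": []}

def advance(state: dict) -> dict:
    """Move to the next phase; each daily run does one phase's worth of work."""
    i = PHASES.index(state["phase"])
    state["phase"] = PHASES[min(i + 1, len(PHASES) - 1)]
    return state

state = load_state()
print(f"today's phase: {state['phase']}")
# ... do the phase's work, record findings in state["findings"] ...
STATE_FILE.parent.mkdir(parents=True, exist_ok=True)
STATE_FILE.write_text(json.dumps(advance(state), indent=2))
```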

These workflows prove that AI agents can handle complex, long-running projects when given the right architecture.

Next Up: Organization & Cross-Repo Workflows

Single-repository workflows are powerful, but what happens when you scale to an entire organization with dozens of repositories?

Continue reading: Organization & Cross-Repo Workflows →


This is part 13 of a 16-part series exploring the workflows in Peli’s Agent Factory.

Meet the Workflows: Tool & Infrastructure

Peli de Halleux

Welcome back to our journey through Peli’s Agent Factory!

In our previous post, we explored testing and validation workflows that continuously verify our systems function correctly - running smoke tests, checking documentation across devices, and catching regressions before users notice them. We learned that trust must be verified.

But here’s a question that kept us up at night: what if the infrastructure itself fails? What if MCP servers are misconfigured, tools become unavailable, or agents can’t access the capabilities they need? Testing the application is one thing; monitoring the platform that runs AI agents is another beast entirely. Tool and infrastructure workflows provide meta-monitoring - they watch the watchers, validate configurations, and ensure the invisible plumbing stays functional. Welcome to the layer where we monitor agents monitoring agents monitoring code. Yes, it gets very meta.

These agents monitor and analyze the agentic infrastructure itself:

Infrastructure for AI agents is different from traditional infrastructure - you need to validate that tools are available, properly configured, and actually working. The MCP Inspector checks Model Context Protocol server configurations because a misconfigured MCP server means an agent can’t access the tools it needs. The Agent Performance Analyzer is a meta-orchestrator that monitors all our other agents - looking for performance degradation, cost spikes, and quality issues. We learned that layered observability is crucial: you need monitoring at the infrastructure level (are servers up?), the tool level (can agents access what they need?), and the agent level (are they performing well?).
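
A configuration check like this can start simply: parse the config and verify each server’s launch command actually resolves. A first-pass sketch assuming the mcpServers JSON layout used by many MCP clients (the file name is hypothetical):

```python
import json
import shutil
from pathlib import Path

def inspect_mcp_config(path: str) -> list[str]:
    """Flag MCP server entries whose launch command isn't on PATH."""
    config = json.loads(Path(path).read_text())
    problems = []
    for name, server in config.get("mcpServers", {}).items():
        cmd = server.get("command")
        if not cmd:
            problems.append(f"{name}: no launch command configured")
        elif shutil.which(cmd) is None:
            problems.append(f"{name}: command '{cmd}' not found on PATH")
    return problems

for issue in inspect_mcp_config("mcp-config.json"):
    print("MCP inspector:", issue)
```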

These workflows provide visibility into the invisible.

Next Up: Multi-Phase Improver Workflows

Most workflows we’ve seen are stateless - they run, complete, and disappear. But what if agents could maintain memory across days?

Continue reading: Multi-Phase Improver Workflows →


This is part 12 of a 16-part series exploring the workflows in Peli’s Agent Factory.