
Meet the Workflows: Triage & Summarization

Peli de Halleux

Wonderful to see you again! 🎩 So glad you’ve returned to Peli’s Agent Factory!

We’re the GitHub Next team, and we’ve been on quite a journey. Over the past months, we’ve built and operated a collection of automated agentic workflows. These aren’t just demos or proofs of concept - they are real agents doing actual work in our githubnext/gh-aw repository and its companion githubnext/agentics collection.

Think of this as your guided tour through our agent factory. We’re showcasing the workflows that caught our attention, taught us something new, or just made our lives easier. Every workflow links to its source Markdown file, so you can peek under the hood and see exactly how it works.

To start the tour, let’s look at one of the simple workflows that handles incoming activity - issue triage.

Issue triage has become the “hello world” of automated agentic workflows: practical, immediately useful, relatively simple, and impactful. It’s used as the starter example in other agentic automation technologies like Claude Code in GitHub Actions.

The purpose of automated issue triage is straightforward: when a new issue is opened, the agent analyzes its content, does research in the codebase and other issues, responds with a comment, and applies appropriate labels based on predefined categories. This helps maintainers quickly understand the nature of incoming issues without manual review.

Our Issue Triage Agent focuses on labels: it automatically labels and categorizes new issues the moment they’re opened. Let’s take a look at the full workflow:

```markdown
---
timeout-minutes: 5
on:
  schedule: "0 14 * * 1-5"
  workflow_dispatch:
permissions:
  issues: read
tools:
  github:
    toolsets: [issues, labels]
safe-outputs:
  add-labels:
    allowed: [bug, feature, enhancement, documentation, question, help-wanted, good-first-issue]
  add-comment: {}
---

# Issue Triage Agent

List open issues in ${{ github.repository }} that have no labels. For each unlabeled issue, analyze the title and body, then add one of the allowed labels: `bug`, `feature`, `enhancement`, `documentation`, `question`, `help-wanted`, or `good-first-issue`.

Skip issues that:

- Already have any of these labels
- Have been assigned to any user (especially non-bot users)

After adding the label to an issue, mention the issue author in a comment explaining why the label was added.
```

Note how concise and readable this is - it’s almost like reading a to-do list for the agent. The workflow runs every weekday at 14:00 UTC, checks for unlabeled issues, and applies appropriate labels based on content analysis. It even leaves a friendly comment explaining the label choice.

In the frontmatter, we define permissions, tools, and safe outputs. This ensures the agent only has access to what it needs and can’t perform any unsafe actions. The natural language instructions in the body guide the agent’s behavior in a clear, human-readable way.
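As a sketch of how this could be customized - assuming the frontmatter accepts standard GitHub Actions event triggers, as the gh-aw documentation describes - the same agent could react to each issue the moment it is opened instead of sweeping on a weekday schedule:

```yaml
---
timeout-minutes: 5
on:
  issues:
    types: [opened]   # react to each new issue instead of a daily sweep
permissions:
  issues: read
tools:
  github:
    toolsets: [issues, labels]
safe-outputs:
  add-labels:
    allowed: [bug, feature, enhancement, documentation, question, help-wanted, good-first-issue]
  add-comment: {}
---
```

Everything else - the read-only permissions and the safe-outputs allow-list - stays the same; only the trigger changes.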

To continue the tour, let’s look briefly at two automated summarization workflows that help us stay on top of repository activity. These agents digest large amounts of information and present it in a concise, readable format.

First, the Weekly Issue Summary creates digestible summaries complete with charts and trends (because who has time to read everything?).

Next, the Daily Repo Chronicle narrates the day’s activity like a storyteller - seriously, it’s kind of delightful.

What surprised us most about this category?

First, the reduction of cognitive load. Having these agents handle triage and summarization freed up mental bandwidth for more important work. We no longer had to constantly monitor incoming issues or sift through activity logs - the agents did it for us, delivering only the essentials. This drastically reduced context switching and decision fatigue.

Second, the tone matters. When the Daily Repo Chronicle started writing summaries in a narrative, almost journalistic style, people actually wanted to read them. AI agents don’t have to be robotic - they can have personality while still being informative.

Third, customization is key. Triage conventions differ in every repository, as do team needs for activity summaries and the actions that follow from them. Tailoring these workflows to our specific context made them far more effective. Generic agents are okay, but customized ones are game-changers.

Finally, these workflows became part of our routine. The Daily Repo Chronicle was a morning regular, giving us a quick overview of what happened overnight while we sipped our coffee. For teams that move fast with agents, routines like these are essential.

Next Up: Code Quality & Refactoring Workflows


Now that we’ve explored how triage and summarization workflows help us stay on top of incoming activity, let’s turn to the agents that continuously improve code quality.

Continue reading: Code Quality & Refactoring Workflows →


This is part 1 of a 16-part series exploring the workflows in Peli’s Agent Factory.

Welcome to Peli's Agent Factory

Peli de Halleux

Good morning, starshine! 🎩✨ Welcome, welcome, WELCOME to Peli’s Agent Factory!

Imagine a software repository where AI agents work alongside your team - not replacing developers, but handling the repetitive, time-consuming tasks that slow down collaboration and forward progress.

Peli’s Agent Factory is our exploration of what happens when you take the design philosophy of “let’s create a new automated agentic workflow for that” as the answer to almost every opportunity that arises - when you max out on automation by building dozens of specialized AI agentic workflows and putting them to work in real repositories.

Software development is changing rapidly. This is our attempt to understand how automated agentic AI can make software teams more efficient, collaborative, and more enjoyable.

So strike that, reverse it! Welcome to the factory - a place of pure imagination where AI agents work their magic. Come with me, and you’ll be in a world of pure automation! Let’s explore together!

Peli’s factory is a collection of automated agentic workflows we use in practice. Over the course of this research project, we built and operated over 100 automated agentic workflows within the githubnext/gh-aw repository and its companion githubnext/agentics collection. These were used mostly in the context of the githubnext/gh-aw project itself, but some have also been applied at scale in GitHub and Microsoft internal repositories, and in some external repositories. These weren’t hypothetical demos - they were working agents that:

  • Triage incoming issues
  • Diagnose CI failures
  • Maintain documentation
  • Improve test coverage
  • Monitor security compliance
  • Optimize workflow efficiency
  • Execute multi-day projects
  • Validate infrastructure
  • Even write poetry to boost team morale

Some workflows are “read-only analysts”. Others proactively propose changes through pull requests. Some are meta-agents that monitor and improve the health of all the other workflows.

We know we’re taking things to an extreme here. Most repositories won’t need dozens of agentic workflows. No one can read all these outputs (except, of course, another workflow). But by pushing the boundaries, we learned valuable lessons about what works, what doesn’t, and how to design safe, effective agentic workflows that teams can trust and use.

It’s basically a chocolate factory of agentic workflows - a whole candy shop of automation. And we’re learning so much from it all that we’d like to share it with you.

When we started exploring agentic workflows, we faced a fundamental question: What should repository-level automated agentic workflows actually do?

Rather than trying to build one “perfect” agent, we took a broad, heterogeneous approach:

  1. Embrace diversity - Create many specialized workflows as we identified opportunities
  2. Use them continuously - Run them in real development workflows
  3. Observe what works - Find which patterns work and which fail
  4. Share the knowledge - Catalog the structures that make agents safe and effective

The factory becomes both an experiment and a reference collection - a living library of patterns that others can study, adapt, and remix.

Here’s what we’ve built so far:

  • A comprehensive collection of workflows demonstrating diverse agent patterns
  • 12 core design patterns consolidating all observed behaviors
  • 9 operational patterns for GitHub-native agent orchestration
  • 128 workflows in the .github/workflows directory of the gh-aw repository
  • 17 curated workflows in the installable agentics collection
  • Multiple trigger types: schedules, slash commands, reactions, workflow events, issue labels

Each workflow is written in natural language using Markdown, then compiled into secure GitHub Actions that run with carefully scoped permissions. Everything is observable, auditable, and remixable.
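To make that concrete, here is a minimal sketch of such a file - the agent name and instruction are purely illustrative, and the frontmatter fields mirror the triage example from the Triage & Summarization post:

```markdown
---
on:
  workflow_dispatch:
permissions:
  issues: read
safe-outputs:
  add-comment: {}
---

# Issue Greeter Agent

Find the most recently opened issue in ${{ github.repository }} and post a short,
friendly comment thanking the author and summarizing what the issue asks for.
```

Compiling this Markdown yields a locked-down GitHub Actions workflow whose only write path is the add-comment safe output.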

In our first series, Meet the Workflows, we’ll take you on a 16-part tour of the most interesting agents in the factory. You’ll see how they operate, what problems they solve, and the unique personalities we’ve given them.

Each article is bite-sized. Start with Meet the Workflows to get an overview, then dive into the ones that catch your eye. If you’d like to skip ahead, here’s the full list of articles in the series:

  1. Triage & Summarization Workflows
  2. Code Quality & Refactoring Workflows
  3. Documentation & Content Workflows
  4. Issue & PR Management Workflows
  5. Quality & Hygiene Workflows
  6. Metrics & Analytics Workflows
  7. Operations & Release Workflows
  8. Security & Compliance Workflows
  9. Creative & Culture Workflows
  10. Interactive & ChatOps Workflows
  11. Testing & Validation Workflows
  12. Tool & Infrastructure Workflows
  13. Multi-Phase Improver Workflows
  14. Organization & Cross-Repo Workflows
  15. Advanced Analytics & ML Workflows
  16. Campaigns & Project Coordination Workflows

Running this many agents in production is… quite the experience. We’ve watched agents succeed spectacularly, fail in interesting ways, and surprise us constantly. Over the next few weeks, we’ll also be sharing what we’ve learned through a series of detailed articles. We’ll be looking at the design and operational patterns we’ve discovered, security lessons, and practical guides for building your own workflows.

To give a taste, some key lessons are emerging:

  • Repository-level automation is incredibly powerful - Agents embedded in the development workflow can have outsized impact
  • Diversity beats perfection - A collection of focused agents works better than one universal assistant
  • Guardrails enable innovation - Strict constraints actually make it easier to experiment safely
  • Meta-agents are valuable - Agents that watch other agents quickly prove their worth
  • Cost-quality tradeoffs are real - Longer analyses aren’t always better

We’ll dive deeper into these lessons in upcoming articles.

Want to start with automated agentic workflows on GitHub? See our Quick Start.

Peli’s Agent Factory is a research project by GitHub Next, Microsoft Research and collaborators, including Peli de Halleux, Don Syme, Mara Kiefer, Edward Aftandilian, Russell Horton, Jiaxiao Zhou.

This is part of GitHub Next’s exploration of Continuous AI - making AI-enriched automation as routine as CI/CD.