
AI Engines

GitHub Agentic Workflows support multiple AI engines (coding agents) to interpret and execute natural language instructions. Each engine has unique capabilities and configuration options.

GitHub Copilot is the default and recommended AI engine for most workflows. The GitHub Copilot CLI provides MCP server support and is designed for conversational AI workflows.

engine: copilot

engine:
  id: copilot
  version: latest # Optional: defaults to latest
  model: gpt-5 # Optional: defaults to claude-sonnet-4
  args: ["--add-dir", "/workspace"] # Optional: custom CLI arguments
Configuration options:

  • model: AI model (gpt-5 or claude-sonnet-4)
  • version: CLI version to install
  • args: Custom command-line arguments (supported by all engines)

Environment variables and secrets:

  • COPILOT_MODEL: Alternative way to set the model
  • COPILOT_CLI_TOKEN: GitHub Personal Access Token (PAT) with “Copilot Requests” permission
  • GH_AW_GITHUB_TOKEN: Required only when using GitHub Tools Remote Mode

Authenticating with a Personal Access Token (PAT)


To use the Copilot engine, you need a fine-grained Personal Access Token with the “Copilot Requests” permission enabled:

  1. Visit https://github.com/settings/personal-access-tokens/new
  2. Under “Permissions,” click “add permissions” and select “Copilot Requests”
  3. Generate your token
  4. Add the token to your repository secrets as COPILOT_CLI_TOKEN:
gh secret set COPILOT_CLI_TOKEN -a actions --body "<your-github-pat>"

For GitHub Tools Remote Mode, also configure:

gh secret set GH_AW_GITHUB_TOKEN -a actions --body "<your-github-pat>"

For more information about GitHub Copilot CLI authentication, see the official documentation.

The Copilot engine supports network access control through the network: configuration at the workflow level. When network permissions are configured, you can enable AWF (Agent Workflow Firewall) to enforce domain-based access controls. AWF is sourced from github.com/githubnext/gh-aw-firewall.

Enable network permissions and firewall in your workflow:

engine: copilot

network:
  firewall: true # Enable AWF enforcement
  allowed:
    - defaults # Basic infrastructure domains
    - python # Python ecosystem
    - "api.example.com" # Custom domain

When enabled, AWF wraps the Copilot CLI execution and enforces the configured domain allowlist, logging all network activity for audit purposes. This provides egress control and an additional layer of security for workflows that require strict limits on outbound network access.

Advanced Firewall Configuration:

Additional AWF settings can be configured through the network configuration:

network:
  allowed:
    - defaults
    - python
  firewall:
    version: "v1.0.0" # Optional: AWF version (defaults to latest)
    log-level: debug # Optional: debug, info (default), warn, error
    args: ["--custom-arg", "value"] # Optional: additional AWF arguments

Firewall Configuration Formats:

The firewall field supports multiple formats:

# Enable with defaults
network:
  firewall: true

# Enable with empty object (same as true)
network:
  firewall:

# Configure log level
network:
  firewall:
    log-level: info # Options: debug, info (default), warn, error

# Disable firewall (triggers warning if allowed domains are specified)
network:
  allowed: ["example.com"]
  firewall: "disable"

# Custom configuration with version and arguments
network:
  firewall:
    version: "v0.1.0"
    log-level: debug
    args: ["--verbose"]

See the Network Permissions documentation for details on configuring allowed domains and ecosystem identifiers.

Claude Code excels at reasoning, code analysis, and understanding complex contexts.

engine: claude

engine:
  id: claude
  version: beta
  model: claude-3-5-sonnet-20241022
  max-turns: 5
  args: ["--custom-flag", "value"] # Optional: custom CLI arguments
  env:
    AWS_REGION: us-west-2
    DEBUG_MODE: "true"

Set secrets using:

gh secret set ANTHROPIC_API_KEY -a actions --body "<your-anthropic-api-key>"
gh secret set GH_AW_GITHUB_TOKEN -a actions --body "<your-github-pat>"

The OpenAI Codex CLI provides MCP server support and is designed for code-focused tasks.

engine: codex

engine:
  id: codex
  model: gpt-4
  args: ["--custom-flag", "value"] # Optional: custom CLI arguments
  user-agent: custom-workflow-name # Optional: custom user agent for GitHub MCP
  env:
    CODEX_API_KEY: ${{ secrets.CODEX_API_KEY_CI }}
  config: |
    [custom_section]
    key1 = "value1"

    [server_settings]
    timeout = 60
Configuration options:

  • user-agent: Custom user agent string for the GitHub MCP server
  • config: Additional TOML configuration appended to the generated config.toml
  • args: Custom command-line arguments (supported by all engines)

Environment variables and secrets:

  • OPENAI_API_KEY: OpenAI API key

Set secrets using:

gh secret set OPENAI_API_KEY -a actions --body "<your-openai-api-key>"

Define custom GitHub Actions steps without AI interpretation for deterministic workflows.

engine: custom

engine:
  id: custom
  steps:
    - name: Install dependencies
      run: npm ci
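A custom engine can chain several deterministic steps in sequence. The sketch below assumes standard GitHub Actions step syntax; the step names and commands are illustrative:

engine:
  id: custom
  steps:
    - name: Install dependencies
      run: npm ci
    - name: Run tests
      run: npm test
    - name: Build
      run: npm run build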

All engines support custom environment variables through the env field:

engine:
  id: claude
  env:
    DEBUG_MODE: "true"
    AWS_REGION: us-west-2
    CUSTOM_API_ENDPOINT: https://api.example.com
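Environment values can also reference repository secrets using standard GitHub Actions expressions, as the Codex example above does. The secret name here is illustrative:

engine:
  id: claude
  env:
    CUSTOM_API_KEY: ${{ secrets.CUSTOM_API_KEY }}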

All engines support custom command-line arguments through the args field, which are injected before the prompt:

engine:
  id: copilot
  args: ["--add-dir", "/workspace", "--verbose"]

Arguments are added in order and placed before the --prompt flag. Common uses include adding directories (--add-dir), enabling verbose logging (--verbose, --debug), and passing engine-specific flags. Consult the specific engine’s CLI documentation for available flags.
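With the configuration above, the resulting invocation would look roughly like the line below; the binary name and generated prompt are illustrative, and only the argument ordering is the point:

copilot --add-dir /workspace --verbose --prompt "<generated prompt>"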

All engines support custom error pattern recognition for enhanced log validation:

engine:
  id: codex
  error_patterns:
    - pattern: "\\[(\\d{4}-\\d{2}-\\d{2}T\\d{2}:\\d{2}:\\d{2})\\]\\s+(ERROR):\\s+(.+)"
      level_group: 2
      message_group: 3
      description: "Custom error format with timestamp"
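As an illustration, the pattern above would match a log line like the following (the message text is made up), capturing “ERROR” as the level (group 2) and the trailing text as the message (group 3):

[2025-01-15T10:32:01] ERROR: Failed to reach MCP server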

Switch engines by changing the engine field in your frontmatter:

# Simple switch
engine: copilot

# With configuration
engine:
  id: copilot
  model: gpt-5 # Optional; defaults to claude-sonnet-4
  version: latest

Engine-specific features may not be available when switching engines.

  • Frontmatter - Complete configuration reference
  • Tools - Available tools and MCP servers
  • Security Guide - Security considerations for AI engines
  • MCPs - Model Context Protocol setup and configuration