Three Execution Contexts
Context 1: Claude Code Task Tool Agents (No API Key)
When you invoke agents using the Task tool, they run within Claude Code and use the same Claude instance.
You: "Run the low-hanging-fruit-optimizer agent"
↓ Claude Code: Launches agent via Task tool
↓ Agent: Uses Claude Code's Claude instance
↓ Result: No API key needed, no API costs!
Available Agents (All Free):
- low-hanging-fruit-optimizer
- repo-maintenance-crew
- project-status-tracker
- system-diagnostics-expert
- accessibility-ux-reviewer
- And ~15 others built into Claude Code
Cost: $0 (included in Claude Code subscription)
Context 2: Shared Library Orchestrators (API Keys Required)
When you use orchestrators from the shared library, they make direct API calls:
from shared.orchestration import BeltalowdaOrchestrator
orchestrator = BeltalowdaOrchestrator()
result = await orchestrator.execute_workflow(
    task="Research quantum computing"
)
# Makes API calls to xAI/OpenAI/Anthropic
# Requires API keys in environment
# You pay per token used
Why? These run as standalone Python code that directly calls LLM provider APIs.
Cost: Varies by provider and model (standard API pricing)
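Per-token pricing differs by provider and model, but the arithmetic is the same everywhere. A small helper makes the cost model concrete; the rates are parameters you supply from your provider's current price sheet, not real prices:

```python
def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  input_rate: float, output_rate: float) -> float:
    """Estimate a request's cost in dollars.

    input_rate / output_rate are dollars per million tokens, taken from
    your provider's price sheet (the values below are placeholders).
    """
    return (prompt_tokens * input_rate + completion_tokens * output_rate) / 1_000_000

# 10k prompt tokens at $3/M plus 2k completion tokens at $15/M
print(estimate_cost(10_000, 2_000, input_rate=3.00, output_rate=15.00))  # 0.06
```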
Context 3: Hybrid Provider (Auto-Detecting)
Create a hybrid provider that uses Claude Code when available (no cost) and falls back to direct API calls when running standalone:
from shared.llm_providers.claude_code_provider import ClaudeCodeProvider
provider = ClaudeCodeProvider()
print(f"Running in {provider.get_mode()} mode")
# "claude-code" or "api"
response = await provider.chat(messages)
# Uses Claude Code if available, otherwise API
Setup:
export CLAUDE_CODE=1 # When using Claude Code
# Or in .env file
echo "CLAUDE_CODE=1" >> ~/.env
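Under the hood, detection presumably reduces to an environment check. Here is a minimal sketch of the idea; the real ClaudeCodeProvider may inspect more signals than this one variable:

```python
import os

def detect_mode() -> str:
    """Return 'claude-code' when CLAUDE_CODE=1 is set, otherwise 'api'.

    Illustrative only: the real ClaudeCodeProvider may check more than
    this single environment variable.
    """
    return "claude-code" if os.environ.get("CLAUDE_CODE") == "1" else "api"

os.environ["CLAUDE_CODE"] = "1"
print(detect_mode())  # claude-code

del os.environ["CLAUDE_CODE"]
print(detect_mode())  # api
```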
Practical Scenarios
Scenario A: Working in Claude Code Terminal
Best Approach: Use Task tool agents directly
You: "Use system-diagnostics-expert to check service health"
↓ Claude Code launches agent
↓ Agent uses Claude Code (no API key needed)
✓ Advantages
- Zero API costs
- Leverages Claude Code's context
- Access to all Claude Code tools
- File system access
Scenario B: Standalone Service (Flask API)
Best Approach: Use shared library with API keys
from shared.orchestration import SwarmOrchestrator
from shared.config import Config
config = Config()
orchestrator = SwarmOrchestrator()
# Uses API keys from environment
result = await orchestrator.execute_workflow(task=user_query)
Setup:
export XAI_API_KEY=xai-...
export ANTHROPIC_API_KEY=sk-ant-...
export OPENAI_API_KEY=sk-...
✓ Advantages
- Runs independently
- Scales to multiple workers
- Production-ready
- Choose specific models
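Flask views are synchronous, so the async orchestrator call needs a bridge into the event loop. A self-contained sketch of that bridge using asyncio.run; StubOrchestrator is a hypothetical stand-in for SwarmOrchestrator so the pattern runs without API keys or the shared library:

```python
import asyncio

class StubOrchestrator:
    """Hypothetical stand-in for SwarmOrchestrator, so the sync-to-async
    bridge can be demonstrated on its own."""
    async def execute_workflow(self, task: str) -> dict:
        return {"task": task, "status": "done"}

def handle_request(user_query: str) -> dict:
    # What a synchronous Flask view would do: enter the event loop,
    # run the workflow to completion, return a JSON-serializable dict.
    orchestrator = StubOrchestrator()
    return asyncio.run(orchestrator.execute_workflow(task=user_query))

print(handle_request("Research quantum computing"))
# {'task': 'Research quantum computing', 'status': 'done'}
```

Note that asyncio.run starts a fresh event loop per request; for high-throughput services you would keep a long-lived loop or use an async-native framework instead.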
Scenario C: Development with Both
Best Approach: Hybrid provider with environment detection
import os
from shared.llm_providers.claude_code_provider import ClaudeCodeProvider

provider = ClaudeCodeProvider(
    api_key=os.getenv('ANTHROPIC_API_KEY')  # Fallback when outside Claude Code
)

if provider.get_mode() == 'claude-code':
    print("✓ Using Claude Code - no API costs!")
else:
    print("✓ Using Anthropic API - standard pricing")
✓ Advantages
- Same code, both contexts
- Automatic cost optimization
- Test locally free
- Deploy to production easily
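The fallback decision in Scenario C can be sketched in a few lines. HybridProvider below is illustrative; the real ClaudeCodeProvider exposes get_mode(), but its internals may differ:

```python
import os

class HybridProvider:
    """Illustrative sketch of hybrid mode selection: prefer a Claude Code
    session, fall back to an API key, otherwise fail loudly."""
    def __init__(self, api_key=None):
        self.api_key = api_key

    def get_mode(self) -> str:
        if os.environ.get("CLAUDE_CODE") == "1":
            return "claude-code"   # free: rides the Claude Code session
        if self.api_key:
            return "api"           # standard per-token pricing
        raise RuntimeError("Not in Claude Code and no API key configured")

os.environ.pop("CLAUDE_CODE", None)
print(HybridProvider(api_key="sk-ant-placeholder").get_mode())  # api
```

Failing loudly when neither path is available surfaces misconfiguration at startup rather than on the first model call.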
What About the Coze Agents?
The 81 Coze agents analyzed were standalone services requiring API keys. We're extracting their patterns into three forms:
Form 1: Claude Code Agents (No API Key)
Convert Coze patterns into Claude Code Task agents:
# Coze: AltFlow agent (required API key)
↓ Extract pattern ↓
# Claude Code: accessibility-ux-reviewer agent (no API key)
You invoke via: @agent-accessibility-ux-reviewer
Form 2: Shared Library Functions (API Key)
Convert Coze patterns into reusable functions:
# Coze: AltFlow agent
↓ Extract pattern ↓
# Shared: generate_alt_text() function
from shared.web.vision_service import generate_alt_text
result = await generate_alt_text(image_data) # Needs API key
Form 3: Orchestrators (API Key or Hybrid)
Convert Coze multi-agent patterns into orchestrators:
# Coze: Swarm agent (50+ agents, required keys)
↓ Extract pattern ↓
# Shared: SwarmOrchestrator
from shared.orchestration import SwarmOrchestrator
orchestrator = SwarmOrchestrator() # Needs API key
Setting Up Hybrid Mode
Step 1: Set Environment Variable
# When using Claude Code
export CLAUDE_CODE=1
# Or add to ~/.bashrc
echo 'export CLAUDE_CODE=1' >> ~/.bashrc
Step 2: Use in Code
from shared.llm_providers.claude_code_provider import ClaudeCodeProvider
# Automatically detects context
provider = ClaudeCodeProvider()
# Check mode
print(f"Mode: {provider.get_mode()}") # "claude-code" or "api"
# Get cost info
print(provider.get_cost_info())
# Use it (from inside an async function)
messages = [{"role": "user", "content": "Summarize this repo"}]
response = await provider.chat(messages)
Step 3: Integrate with Orchestrators
from shared.llm_providers.claude_code_provider import ClaudeCodeProvider
from shared.orchestration import BeltalowdaOrchestrator

class MyOrchestrator(BeltalowdaOrchestrator):
    def __init__(self, prefer_claude_code=True):
        super().__init__()
        self.prefer_claude_code = prefer_claude_code

    async def execute_subtask(self, subtask, context):
        provider = ClaudeCodeProvider()
        # Automatically uses Claude Code if available
        messages = [{"role": "user", "content": subtask}]
        response = await provider.chat(messages)
        return response
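To exercise this subclassing pattern end to end without the shared library installed, the same shape can be run with self-contained stand-ins; BaseOrchestrator and EchoProvider below are hypothetical stubs, not the real classes:

```python
import asyncio

# Hypothetical stand-ins for BeltalowdaOrchestrator and ClaudeCodeProvider,
# so the subclassing pattern runs without the shared library installed.
class BaseOrchestrator:
    def __init__(self):
        self.completed = []

class EchoProvider:
    async def chat(self, messages):
        # Echo the last user message instead of calling a real model
        return "echo: " + messages[-1]["content"]

class MyOrchestrator(BaseOrchestrator):
    async def execute_subtask(self, subtask, context):
        provider = EchoProvider()
        messages = [{"role": "user", "content": subtask}]
        response = await provider.chat(messages)
        self.completed.append(subtask)
        return response

print(asyncio.run(MyOrchestrator().execute_subtask("check service health", {})))
# echo: check service health
```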
Frequently Asked Questions
Q: Can I use Claude Code agents in my Flask app?
A: No, Claude Code agents only work within Claude Code sessions. For Flask apps, use shared library orchestrators with API keys.
Q: Do I need API keys if I only use Claude Code?
A: Not for Task tool agents! But if you use shared library orchestrators, yes.
Q: Which is cheaper: Claude Code or API keys?
A: Claude Code is free (included in subscription) for development. For production at scale, API keys may be more cost-effective depending on volume.
Q: Can I mix both approaches?
A: Yes! Use hybrid providers that detect context and choose accordingly. Same code works in both environments.
Q: What about the Coze agents in the JSON file?
A: We're extracting their patterns into Claude Code agents (free), shared library functions, and orchestrators (API keys), giving you flexibility to use whichever fits your context.