```shell
pip install ag2-persona
```
PersonaAgent’s power comes from separating persona definition from runtime configuration. Domain experts define agent behavior in Markdown files, while developers handle the runtime integration.
Key Benefits:
PersonaAgent’s Markdown format follows a clear separation of concerns:
This design ensures domain experts can write rich character stories while systems get the structured data they need for routing and validation.
Example Structure:
```markdown
---
# The SPEC - What the agent IS and MUST do
role: Senior Software Architect
goal: Review designs for scalability
constraints:
  - Must consider security implications
  - Focus on maintainable solutions
---

# The CHARACTER - Who the agent is

# Backstory
Twenty years building distributed systems at Netflix, Google...
Rich narrative with personality, experience, war stories...
```
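To make the SPEC/CHARACTER split concrete, here is a minimal, illustrative parser for such a file. This is *not* ag2-persona's actual loader (which may use a full YAML parser); it is a stdlib-only sketch that handles the simplified frontmatter shown above, to show how structured data and free-form narrative separate cleanly.

```python
def split_persona(text: str) -> tuple[dict, str]:
    """Return (spec, character) from a persona Markdown string.

    Illustrative only: parses a simplified YAML-frontmatter subset
    (scalar keys and '- item' lists), not full YAML.
    """
    _, front, body = text.split("---", 2)
    spec: dict = {}
    current_list = None
    for line in front.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and YAML comments
        if line.startswith("- ") and current_list is not None:
            current_list.append(line[2:])      # item under a list key
        elif line.endswith(":"):
            current_list = spec.setdefault(line[:-1], [])
        else:
            key, _, value = line.partition(":")
            spec[key.strip()] = value.strip()  # scalar key
            current_list = None
    return spec, body.strip()

persona_md = """---
role: Senior Software Architect
goal: Review designs for scalability
constraints:
- Must consider security implications
- Focus on maintainable solutions
---
# Backstory
Twenty years building distributed systems...
"""

spec, character = split_persona(persona_md)
print(spec["role"])               # → Senior Software Architect
print(character.splitlines()[0])  # → # Backstory
```

The spec dictionary is what a routing or validation layer would consume; the character string is passed through verbatim as narrative.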
The most powerful approach is to load expert personas from configuration files:

```python
from ag2_persona import PersonaBuilder, AsyncPersonaBuilder

# Sync version (blocks during I/O)
analyst = (
    PersonaBuilder.from_markdown("library/senior_data_engineer.md")
    .set_name("data_analyst")
    .llm_config({"model": "gpt-4", "temperature": 0.7})
    .build()
)

# Async version (non-blocking I/O)
async def create_analyst():
    return await (
        AsyncPersonaBuilder("data_analyst")
        .from_markdown("library/senior_data_engineer.md")
        .llm_config({"model": "gpt-4", "temperature": 0.7})
        .build()
    )

# Use like any AG2 agent
response = analyst.generate_reply(messages=[{"content": "Analyze this sales data"}])
```
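The practical difference between the two builders is what happens while the persona file is read from disk: the sync version blocks the calling thread, while the async version lets other coroutines keep running. A self-contained sketch of that pattern, using `asyncio.to_thread` and a stand-in loader (not the ag2-persona API):

```python
import asyncio
import pathlib
import tempfile

def load_persona_file(path: str) -> str:
    """Blocking file read, as a sync builder would perform."""
    return pathlib.Path(path).read_text()

async def load_persona_file_async(path: str) -> str:
    """Non-blocking variant: the read runs in a worker thread,
    so the event loop stays free while waiting on disk."""
    return await asyncio.to_thread(load_persona_file, path)

async def main() -> str:
    # Stand-in persona file for the demo
    with tempfile.NamedTemporaryFile("w", suffix=".md", delete=False) as f:
        f.write("role: Senior Data Engineer\n")
        path = f.name
    # Load several personas concurrently instead of one blocking read at a time
    texts = await asyncio.gather(
        *(load_persona_file_async(path) for _ in range(3))
    )
    return texts[0]

print(asyncio.run(main()))  # → role: Senior Data Engineer
```

In a service that assembles many agents per request, this is why the async builder exists: file reads overlap instead of serializing.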
```shell
# Basic installation (sync Markdown only)
pip install ag2-persona[markdown]

# Async support included
pip install ag2-persona[markdown-async]

# Development setup with async
pip install ag2-persona[all]
```
For simple, one-off agents, you can construct a PersonaAgent directly:

```python
from ag2_persona import PersonaAgent

# Direct construction for simple cases
expert = PersonaAgent(
    name="data_analyst",
    role="Data Analysis Expert",
    goal="Analyze the provided dataset and identify key insights",
    backstory="You specialize in statistical analysis and data visualization",
    llm_config={"model": "gpt-4", "temperature": 0.7},
)
```