Mentioned in this video
Key Python Libraries and Frameworks
Large Language Model Providers
Large Language Models
Project Files
Example Stock Tickers
CrewAI Tutorial: Multiple Agents Working Together in Python
Summary
The Orchestration of Digital Cognition: An Overview of CrewAI
In the ever-evolving landscape of digital archaeology, the examination of frameworks that enable the sophisticated orchestration of autonomous agents reveals profound insights into the future of complex problem-solving. CrewAI, a Python-based architecture, stands as a testament to this emergent paradigm, allowing for the meticulous assembly of multi-agent systems designed to tackle intricate tasks through a principle known as 'separation of concerns.' Unlike monolithic entities that might attempt to wield a multitude of tools, CrewAI posits a collaborative structure where specialized agents, much like the differentiated roles within an ancient society, each contribute their unique expertise to a singular, overarching objective. One might envision a research agent diligently unearthing data, a summarization agent distilling vast quantities of information into concise narratives, and an evaluative agent meticulously assessing potential risks. This distributed intelligence, where agents operate in concert, offers a compelling approach to processing complex workflows, echoing the efficient division of labor observed in advanced historical civilizations.
Foundational Studies: Prerequisites for Engaging with Agent Architectures
Before embarking upon the construction of such a digital edifice, a solid grounding in several foundational disciplines is imperative. The primary language of interaction within this framework is Python, necessitating a proficient understanding of its syntax, data structures, and object-oriented principles. Familiarity with command-line interfaces and package management is also essential, particularly for the installation and management of project dependencies. Concepts pertaining to large language models (LLMs) and their application programming interfaces (APIs) are fundamental, as these intelligent engines form the very cognitive core of each agent. Furthermore, a basic comprehension of .env file management for secure credential handling is advisable, mirroring the careful guardianship of sensitive scrolls in ancient libraries.
The Lexicon of Construction: Key Libraries and Tools
The construction of a CrewAI system relies upon a specific set of digital instruments and conceptual blueprints:
- CrewAI: This is the core Python framework for designing, managing, and orchestrating multi-agent systems. It provides the architectural scaffolding for defining agents, tasks, and the overarching crew, facilitating their collaborative execution.
- UV: A Rust-based Python package manager, UV offers a swift and efficient method for installing project dependencies. Its role is akin to a highly optimized ancient quarry, ensuring all necessary building blocks are readily available.
- YFinance: This Python library serves as a robust interface for accessing historical and real-time market data from Yahoo Finance. In our context, it functions as the primary 'scrying tool' for the collector agent, providing insights into financial markets.
- OpenAI API: As a provider of advanced large language models, OpenAI's API offers the cognitive capabilities that empower CrewAI agents with reasoning and generative faculties. Access is governed by an API key, serving as a token of access to these powerful oracular services.
- Ollama: This platform enables the local execution of large language models, offering an alternative to cloud-based services. Utilizing Ollama means that the 'oracle' can be housed within one's own digital sanctum, reducing reliance on external infrastructure.
- LiteLLM: A lightweight library that CrewAI relies on to interact with many different LLM APIs, including local ones. It acts as a universal translator, enabling CrewAI to communicate seamlessly with diverse cognitive engines.
- fastapi, apscheduler, email-validator, fastapi-sso: These dependencies are frequently required when integrating local LLM solutions like Ollama, particularly for establishing robust API endpoints and managing authentication mechanisms. They fortify the local infrastructure, much like the outer defenses of an ancient city.
The Architect's Handbook: A Code Walkthrough
The development of a CrewAI application follows a structured progression, beginning with the initiation of the project and culminating in the execution of the agent crew.
Initiating the Digital Edifice
The first step involves invoking the CrewAI command-line utility to establish the foundational project structure. This command generates a directory housing the essential configuration and execution files.
crewai create crew stock_briefing
Upon execution, the system prompts for the selection of an LLM provider and model, as well as the input of an API key if a cloud service such as OpenAI is chosen. This initial configuration populates the .env file with the necessary credentials.
Safeguarding the Oracle's Voice: Environment Configuration
The .env file serves as the repository for sensitive configuration parameters, primarily the LLM API key and the chosen model identifier. For OpenAI, the structure appears thus:
OPENAI_API_KEY='your_api_key_here'
OPENAI_MODEL_NAME='gpt-4o'
Should the desire arise to utilize a locally hosted model via Ollama, the configuration is adjusted to point to the local server endpoint and the specific model name:
OPENAI_API_KEY='ollama' # Or any placeholder as the key is not strictly used for Ollama
OPENAI_MODEL_NAME='ollama/qwen:0.5b'
API_BASE='http://localhost:11434'
This adaptability allows for seamless transitioning between remote and localized cognitive resources.
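As a rough illustration of how such a configuration might be read at runtime (standard library only; the variable names mirror the .env entries above, and the helper function is purely illustrative, not part of CrewAI):

```python
import os

def resolve_llm_config():
    """Read LLM settings from the environment, falling back to a hosted default."""
    return {
        "api_key": os.environ.get("OPENAI_API_KEY", ""),
        "model": os.environ.get("OPENAI_MODEL_NAME", "gpt-4o"),
        # API_BASE is only set when pointing at a local server such as Ollama.
        "api_base": os.environ.get("API_BASE"),
    }

# Simulate the Ollama configuration from the .env example above:
os.environ["OPENAI_API_KEY"] = "ollama"
os.environ["OPENAI_MODEL_NAME"] = "ollama/qwen:0.5b"
os.environ["API_BASE"] = "http://localhost:11434"
print(resolve_llm_config()["model"])  # ollama/qwen:0.5b
```

Because every setting funnels through the environment, switching between a hosted and a local model is a matter of editing the .env file, not the code.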
Crafting the Agent's Instruments: The Yahoo Finance Tool (src/tools/custom_tool.py)
Agents within CrewAI operate most effectively when equipped with specialized instruments, or 'tools,' that allow them to interact with external systems. Here, a get_stock_data tool is forged to interface with the Yahoo Finance API, retrieving current price information and recent news headlines.
import yfinance as yf
from crewai.tools import tool

@tool("get_stock_data")
def get_stock_data(ticker_symbol: str) -> str:
    """Gets current price and recent news for a given stock ticker symbol."""
    stock = yf.Ticker(ticker_symbol)
    info = stock.info
    current_price = info.get('currentPrice', 'N/A')
    previous_close = info.get('previousClose', 'N/A')
    price_info = "No price info available."
    if current_price != 'N/A' and previous_close != 'N/A':
        percentage_change = ((current_price - previous_close) / previous_close) * 100
        price_info = f"Price: {current_price:.2f} (Change: {percentage_change:+.2f}%)"
    news = stock.news[:8]  # Limit to 8 headlines
    headlines = []
    for news_item in news:
        title = news_item.get('title', '').strip()
        if title:
            headlines.append(f"- {title}")
    if not headlines:
        headlines.append("No headlines available.")
    news_text = "\n".join(headlines)
    return f"{price_info}\n\nRecent News:\n{news_text}"
CrewAI's tool decorator registers this function as an accessible instrument for agents. The docstring is paramount, as it provides the semantic context an agent requires to judiciously decide when and how to invoke this tool.
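To see why the docstring and signature matter, here is a framework-free sketch of the kind of metadata an agent's LLM can be shown for a tool (this is illustrative, not CrewAI's internal mechanism; the stub body stands in for the real yfinance logic):

```python
import inspect

def get_stock_data(ticker_symbol: str) -> str:
    """Gets current price and recent news for a given stock ticker symbol."""
    return f"data for {ticker_symbol}"  # stub body for illustration

def describe_tool(fn):
    """Build the kind of natural-language tool description an LLM would see."""
    sig = inspect.signature(fn)
    return f"{fn.__name__}{sig}: {inspect.getdoc(fn)}"

print(describe_tool(get_stock_data))
```

An empty or misleading docstring would leave the agent with nothing but the function name to reason about, which is why tool docstrings deserve the same care as prompts.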
Defining Roles and Narratives: Agent Configurations (src/config/agents.yaml)
Each agent is endowed with a distinct 'role,' 'goal,' and 'backstory,' which collectively define its persona and operational parameters. These are articulated within a YAML file, ensuring clarity and modularity.
collector:
  role: >
    Markets Data Collector
  goal: >
    Gather latest price movements and news headlines for {ticker}
  backstory: >
    You collect real-time market data and news headlines for stocks.

summarizer:
  role: >
    News Summarizer
  goal: >
    Condense news headlines into five key bullet points.
  backstory: >
    You extract the most important information from news and present it concisely.

risk_checker:
  role: >
    Risk Analyst
  goal: >
    Identify key risks and unknown unknowns from the news.
  backstory: >
    You flag potential risks like earnings, guidance changes, macro events, and litigation.

brief_writer:
  role: >
    Daily Brief Writer
  goal: >
    Create a concise daily brief for the respective ticker symbol.
  backstory: >
    You write clear, actionable daily briefs covering what changed, why, and what to watch.
Placeholders, denoted by curly brackets ({ticker}), allow for dynamic insertion of context during execution.
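The mechanics mirror Python's own str.format, which is roughly how such placeholders are filled when the crew is kicked off with its inputs:

```python
# A goal string as it appears in agents.yaml, with a {ticker} placeholder:
goal_template = "Gather latest price movements and news headlines for {ticker}"

# The inputs dictionary later passed to kickoff():
inputs = {"ticker": "AAPL"}

# CrewAI substitutes the inputs into the YAML strings in much the same way:
goal = goal_template.format(**inputs)
print(goal)  # Gather latest price movements and news headlines for AAPL
```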
Prescribing Actions: Task Definitions (src/config/tasks.yaml)
Tasks are the specific assignments given to agents, each with a detailed 'description,' an 'expected output,' and an explicit assignment to a particular 'agent.'
collect_task:
  description: >
    Use the get_stock_data tool to collect price and news for the respective
    ticker symbol. Call the tool once and return all the data you receive.
  expected_output: >
    Price data and news headlines returned from the tool.
  agent: collector

summarize_task:
  description: >
    Review the collected data and create exactly five bullet points of key news.
    If news is limited, focus on price movement and general market context.
  expected_output: >
    A list of five concise bullet points summarizing the news and market context.
  agent: summarizer

risk_task:
  description: >
    Based on the summary, identify three to five potential risks or opportunities
    related to the stock. Consider factors like company announcements, market
    trends, or broader economic indicators.
  expected_output: >
    A bulleted list of 3-5 identified risks/opportunities for the stock.
  agent: risk_checker

brief_task:
  description: >
    Write a 6-8 line daily brief for the respective ticker symbol, incorporating
    the summarized news and identified risks. The brief should be clear,
    actionable, and cover what changed, why, and what to watch.
  expected_output: >
    A concise 6-8 line daily brief for the given stock ticker, suitable for a busy executive.
  agent: brief_writer
Each task is meticulously defined to guide the agent toward the desired outcome, much like the instructions provided to a scribe for composing a decree.
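As a framework-free sketch of the idea (not CrewAI's actual internals), a declarative mapping like the YAML above can be turned into task objects; the SimpleTask dataclass here is a hypothetical stand-in, not crewai.Task:

```python
from dataclasses import dataclass

@dataclass
class SimpleTask:
    """Minimal stand-in for a task record; illustrative only."""
    description: str
    expected_output: str
    agent: str

# The parsed YAML is just nested dictionaries, one per task:
tasks_config = {
    "collect_task": {
        "description": "Use the get_stock_data tool to collect price and news.",
        "expected_output": "Price data and news headlines returned from the tool.",
        "agent": "collector",
    },
}

# Each config entry maps directly onto a task object's fields:
tasks = {name: SimpleTask(**cfg) for name, cfg in tasks_config.items()}
print(tasks["collect_task"].agent)  # collector
```

Keeping the declarative layer separate means tasks can be reworded or reassigned without touching any Python code.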
Assembling the Digital Guild: Crew Initialization (src/crew.py)
With agents and tasks delineated, the crew.py file serves as the assembly point where these entities are instantiated and formed into a cohesive Crew object.
from crewai import Agent, Crew, Task
from crewai.project import CrewBase, agent, crew, task
# Assuming custom_tool.py is in the same directory or properly imported
from .tools.custom_tool import get_stock_data

@CrewBase
class StockBriefingCrew:
    agents_config = 'config/agents.yaml'
    tasks_config = 'config/tasks.yaml'

    @agent
    def collector(self) -> Agent:
        return Agent(
            config=self.agents_config['collector'],
            tools=[get_stock_data],  # Assign the tool to the collector agent
            verbose=True
        )

    @agent
    def summarizer(self) -> Agent:
        return Agent(
            config=self.agents_config['summarizer'],
            verbose=True
        )

    @agent
    def risk_checker(self) -> Agent:
        return Agent(
            config=self.agents_config['risk_checker'],
            verbose=True
        )

    @agent
    def brief_writer(self) -> Agent:
        return Agent(
            config=self.agents_config['brief_writer'],
            verbose=True
        )

    @task
    def collect_task(self) -> Task:
        return Task(
            config=self.tasks_config['collect_task'],
            agent=self.collector()
        )

    @task
    def summarize_task(self) -> Task:
        return Task(
            config=self.tasks_config['summarize_task'],
            agent=self.summarizer()
        )

    @task
    def risk_task(self) -> Task:
        return Task(
            config=self.tasks_config['risk_task'],
            agent=self.risk_checker()
        )

    @task
    def brief_task(self) -> Task:
        return Task(
            config=self.tasks_config['brief_task'],
            agent=self.brief_writer()
        )

    @crew
    def crew(self) -> Crew:
        return Crew(
            agents=self.agents,
            tasks=self.tasks,
            verbose=True
        )
Note the assignment of get_stock_data to the collector agent, signifying that this agent alone wields this particular instrument.
Activating the Digital Bureaucracy: Execution (src/main.py)
The main.py file orchestrates the execution of the entire crew, initiating the flow of information and collaborative processing. It provides the initial input, in this case, a stock ticker symbol.
from src.crew import StockBriefingCrew

def run():
    inputs = {"ticker": "AAPL"}
    StockBriefingCrew().crew().kickoff(inputs=inputs)

if __name__ == '__main__':
    run()
Execution is initiated with crewai run, triggering the sequential and collaborative operations of the defined agents. The output of one agent's task serves as the input for the next, culminating in a comprehensive stock briefing.
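The sequential hand-off can be pictured with plain functions standing in for agents, each output feeding the next stage (a sketch of the data flow only, not of CrewAI's scheduler; all four functions are illustrative stand-ins):

```python
def collect(ticker):
    # Stand-in for the collector agent invoking its get_stock_data tool.
    return f"raw price and news data for {ticker}"

def summarize(collected):
    # Stand-in for the summarizer agent condensing the collected data.
    return f"five bullet points distilled from: {collected}"

def check_risks(summary):
    # Stand-in for the risk analyst scanning the summary.
    return f"risks identified in: {summary}"

def write_brief(ticker, summary, risks):
    # Stand-in for the brief writer assembling the final report.
    return f"Daily brief for {ticker}\n{summary}\n{risks}"

def kickoff(ticker):
    """Run the four stages in order; each output becomes the next stage's input."""
    collected = collect(ticker)
    summary = summarize(collected)
    risks = check_risks(summary)
    return write_brief(ticker, summary, risks)

print(kickoff("AAPL").splitlines()[0])  # Daily brief for AAPL
```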
The Lingua Franca of Code: Syntax Notes
The construction of these agentic systems leverages several key Pythonic and configuration patterns:
- Decorators (
@Tool,@agent,@task,@crew): These syntactic constructs are fundamental in CrewAI, allowing functions and methods to be registered as agents, tasks, or tools, simplifying their configuration and integration within the framework. - YAML Configuration: The use of YAML files (
agents.yaml,tasks.yaml) provides a human-readable and structured format for defining the declarative aspects of agents and tasks. The>symbol for multi-line strings enhances readability for lengthy descriptions. - Docstrings as Context: The presence and content of docstrings for tools are not merely for documentation; they are critical for the LLM-powered agents to comprehend the utility and proper invocation of the tools at their disposal.
- Placeholder Formatting: The use of curly braces (
{}) within YAML descriptions ({ticker}) enables dynamic context injection, making agent roles and task descriptions adaptable to specific inputs. - Type Hinting: Python's type hints (
ticker_symbol: str -> str) in tool definitions enhance code clarity and aid in static analysis, though their direct impact on LLM interpretation of tool signatures is also notable.
Analogies in Practice: Practical Applications
The stock briefing example, while illustrative, merely scratches the surface of CrewAI's potential. This framework is particularly adept at any domain requiring a complex workflow with distinct, specialized stages. Consider the meticulous process of academic research: one agent could be tasked with retrieving articles, another with summarizing key findings, a third with identifying gaps in existing literature, and a final agent with drafting a preliminary research proposal. In legal contexts, agents could process documents, extract relevant clauses, assess case precedents, and even draft initial legal briefs. The principle of orchestrated, specialized labor, deeply rooted in ancient societal models, finds robust expression in these modern agentic architectures, offering scalable and sophisticated solutions to multifaceted digital challenges.
Navigating the Labyrinth: Tips and Gotchas
Even with a well-defined architecture, practical challenges inevitably arise during implementation:
- Dependency Management: Ensuring all required Python packages (CrewAI, YFinance, LiteLLM, the FastAPI-related libraries for Ollama) are correctly installed and aligned with the project's virtual environment is paramount. A common oversight is failing to install LiteLLM and its associated dependencies when attempting to integrate Ollama, leading to execution failures.
- API Key Vigilance: Misconfigured or expired API keys for cloud-based LLM services are a frequent source of errors. Meticulous verification of credentials, much like ensuring the authenticity of an ancient seal, is crucial.
- Model Competence: The performance of the agent crew is directly correlated with the capabilities of the underlying LLM. Smaller, less capable models, such as qwen:0.5b run locally, may struggle with complex summarization or nuanced risk assessment, yielding less satisfactory outputs. Selecting an LLM appropriate for the task's cognitive demands is critical.
- Prompt Engineering in YAML: The descriptions, goals, and backstories within the YAML files are, in essence, sophisticated prompts. Their clarity, precision, and comprehensiveness directly influence agent behavior. Iterative refinement of these declarative statements is often necessary to achieve desired outcomes.
- Tool Docstring Accuracy: Agents rely heavily on the docstrings of their assigned tools to understand their function. An unclear or inaccurate docstring can lead to improper tool invocation or agents failing to identify the correct tool for a given sub-task. These docstrings are the foundational instructions for the agent's practical engagements.
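A small pre-flight check along these lines can catch a missing credential before a run even starts (the variable name matches the .env entries used earlier; the helper itself is just a suggested pattern, not part of CrewAI):

```python
import os

def check_credentials():
    """Fail fast with a clear message if the LLM credential is absent."""
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError(
            "OPENAI_API_KEY is not set; populate your .env file before running the crew."
        )
    return True

# In a real project this value would be loaded from .env, not set inline:
os.environ["OPENAI_API_KEY"] = "sk-example"
print(check_credentials())  # True
```

Failing fast here produces a readable error instead of an opaque authentication failure buried deep in an agent's first LLM call.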