Welcome to Phase 3 of the Agentic AI course! This lab focuses on moving from a simple LLM Chatbot to a sophisticated ReAct Agent with industry-standard monitoring.
Copy the .env.example to .env and fill in your API keys:
```bash
cp .env.example .env
```

Then install the dependencies:

```bash
pip install -r requirements.txt
```

`src/tools/`: Extension point for your custom tools.
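For reference, a typical `.env` for this setup might look like the sketch below. The variable names here are illustrative guesses; use the names actually defined in your `.env.example`:

```env
# Hypothetical key names - check .env.example for the real ones
OPENAI_API_KEY=sk-...
GOOGLE_API_KEY=...
DEFAULT_PROVIDER=openai
```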
If you don't want to use OpenAI or Gemini, you can run open-source models (like Phi-3) directly on your CPU using llama-cpp-python.
Download `Phi-3-mini-4k-instruct-q4.gguf` (approx. 2.2 GB) from Hugging Face:
- Phi-3-mini-4k-instruct-GGUF
- Direct Download: phi-3-mini-4k-instruct-q4.gguf
Create a models/ folder in the root and move the downloaded .gguf file there.
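To verify the download before wiring it into the agent, you can load the model directly with `llama-cpp-python`. This is a minimal smoke-test sketch, not part of the course skeleton; the helper name and prompt are illustrative:

```python
from pathlib import Path

MODEL_PATH = Path("./models/Phi-3-mini-4k-instruct-q4.gguf")

def load_local_llm(model_path: Path):
    """Return a Llama instance, or None if the model file is missing."""
    if not model_path.exists():
        return None
    from llama_cpp import Llama  # imported lazily so the check works without it
    return Llama(model_path=str(model_path), n_ctx=4096, verbose=False)

llm = load_local_llm(MODEL_PATH)
if llm is None:
    print("Model not found - download the .gguf file first (see above).")
else:
    out = llm("Q: What is 2 + 2? A:", max_tokens=16, stop=["\n"])
    print(out["choices"][0]["text"].strip())
```

`n_ctx=4096` matches Phi-3 mini's 4k context window; raise or lower it to trade memory for context length.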
Change your `DEFAULT_PROVIDER` and set the model path in `.env`:

```env
DEFAULT_PROVIDER=local
LOCAL_MODEL_PATH=./models/Phi-3-mini-4k-instruct-q4.gguf
```

- Baseline Chatbot: Observe the limitations of a standard LLM when faced with multi-step reasoning.
- ReAct Loop: Implement the Thought-Action-Observation cycle in `src/agent/agent.py`.
- Provider Switching: Swap between OpenAI and Gemini seamlessly using the `LLMProvider` interface.
- Failure Analysis: Use the structured logs in `logs/` to identify why the agent fails (hallucinations, parsing errors).
- Grading & Bonus: Follow SCORING.md to maximize your points and explore bonus metrics.
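The Thought-Action-Observation cycle at the heart of these tasks can be sketched roughly as follows. All names here are illustrative (the real skeleton in `src/agent/agent.py` will differ): `llm` is any callable mapping a prompt to a completion, and `tools` maps tool names to callables:

```python
import re

def react_loop(llm, tools, question, max_steps=5):
    """Minimal ReAct cycle: prompt, parse an Action, run the tool,
    feed the Observation back, repeat until a Final Answer appears."""
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        reply = llm(transcript)  # expect "Thought: ...\nAction: tool[input]"
        transcript += reply + "\n"
        if "Final Answer:" in reply:
            return reply.split("Final Answer:")[-1].strip()
        match = re.search(r"Action:\s*(\w+)\[(.*)\]", reply)
        if not match:
            # Parsing failures are exactly what the Failure Analysis task studies
            transcript += "Observation: could not parse an action.\n"
            continue
        name, arg = match.groups()
        result = tools[name](arg) if name in tools else f"unknown tool {name!r}"
        transcript += f"Observation: {result}\n"
    return None  # gave up after max_steps
```

Note the two failure modes surfaced here (unparseable actions and unknown tool names) mirror the hallucination and parsing errors you will hunt for in `logs/`.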
The code is designed as a Production Prototype. It includes:
- Telemetry: Every action is logged in JSON format for later analysis.
- Robust Provider Pattern: Easily extendable to any LLM API.
- Clean Skeletons: Focus on the logic that matters—the agent's reasoning process.
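The provider pattern and JSON telemetry described above compose naturally: any provider can be wrapped in a logging decorator without touching its implementation. A hedged sketch, using illustrative class and method names rather than the course's actual interface:

```python
import json
import time
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    """Illustrative provider interface (the real one lives in the skeleton)."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class EchoProvider(LLMProvider):
    """Stand-in provider, handy for offline tests."""
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

class TelemetryProvider(LLMProvider):
    """Wraps any provider and emits one JSON log line per call."""
    def __init__(self, inner: LLMProvider, log_file):
        self.inner = inner
        self.log_file = log_file

    def complete(self, prompt: str) -> str:
        start = time.time()
        reply = self.inner.complete(prompt)
        self.log_file.write(json.dumps({
            "event": "llm_call",
            "latency_s": round(time.time() - start, 3),
            "prompt_chars": len(prompt),
        }) + "\n")
        return reply
```

Because each log line is a standalone JSON object, the `logs/` analysis step can parse the file line by line with `json.loads`.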
Happy Coding! Let's build agents that actually work.