A modern, scalable multi-agent task execution system built with Python, featuring asynchronous processing, safety management, and intelligent task orchestration.
- Multi-Agent Architecture: Modular agent system with specialized capabilities
- Asynchronous Processing: High-performance async/await patterns throughout
- Safety & Security: Comprehensive safety checks and security measures
- Shared Memory: Redis-backed shared memory for inter-agent communication
- Message Bus: Asynchronous message passing between agents
- Task Orchestration: Intelligent task scheduling and dependency management
- Rich CLI: Beautiful command-line interface with Rich
- Comprehensive Logging: Structured logging with Loguru
- Configuration Management: Environment-based configuration with Pydantic
- Data Validation: Robust validation system for all inputs
- Python 3.9+
- Redis server
- LLM API keys (Groq, Google, or OpenAI)
- Clone the repository:
  git clone <repository-url>
  cd task-ai
- Install dependencies:
  pip install -r requirements.txt
- Set up environment variables:
  cp .env.example .env  # Edit .env with your configuration
- Start Redis:
  redis-server
- Run the system:
  python main.py
- Use the CLI:
  taskai> help
  taskai> status
  taskai> submit
task-ai/
├── src/
│   ├── core/                 # Core system components
│   │   ├── orchestrator.py   # Task orchestration
│   │   ├── memory.py         # Shared memory system
│   │   ├── message_bus.py    # Message passing
│   │   └── safety.py         # Safety management
│   ├── agents/               # Agent implementations
│   │   ├── base_agent.py     # Abstract base class
│   │   ├── planner.py        # Task planning agent
│   │   ├── file_handler.py   # File operations agent
│   │   ├── shell_executor.py # Command execution agent
│   │   └── monitor.py        # System monitoring agent
│   ├── utils/                # Utility modules
│   │   ├── prompt_refiner.py # LLM prompt optimization
│   │   └── validators.py     # Data validation
│   └── config/               # Configuration management
│       └── settings.py       # Application settings
├── main.py                   # CLI entry point
├── requirements.txt          # Python dependencies
├── pyproject.toml            # Project configuration
└── README.md                 # This file
Create a .env file with the following variables:
# Application
APP_NAME=TaskAI
DEBUG=false
# LLM Configuration
LLM_PROVIDER=groq
LLM_API_KEY=your_api_key_here
LLM_MODEL=llama3-8b-8192
# Redis Configuration
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_DB=0
# Logging
LOG_LEVEL=INFO
LOG_FILE_PATH=logs/taskai.log
# Safety
SAFETY_ENABLED=true
SAFETY_MAX_FILE_SIZE=10485760
- LLM: Language model provider settings
- Redis: Shared memory backend configuration
- Logging: Logging behavior and output
- Safety: Security and safety parameters
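These variables are read by src/config/settings.py. As a hedged illustration only, a settings class built on Pydantic could map them roughly as follows; the field names and the use of the pydantic-settings package are assumptions, not the project's actual code:

```python
# Illustrative sketch: assumes Pydantic v2 with the pydantic-settings package.
# Field names mirror the .env variables above.
from pydantic_settings import BaseSettings, SettingsConfigDict


class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_file=".env", extra="ignore")

    # Application
    app_name: str = "TaskAI"
    debug: bool = False

    # LLM configuration
    llm_provider: str = "groq"
    llm_api_key: str = ""
    llm_model: str = "llama3-8b-8192"

    # Redis configuration
    redis_host: str = "localhost"
    redis_port: int = 6379
    redis_db: int = 0

    # Logging
    log_level: str = "INFO"
    log_file_path: str = "logs/taskai.log"

    # Safety
    safety_enabled: bool = True
    safety_max_file_size: int = 10485760  # 10 MiB


settings = Settings()  # environment variables override the defaults above
```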
All agents inherit from BaseAgent, an abstract base class providing common functionality:
- Lifecycle management
- Task processing
- Memory access
- Message passing
- Error handling
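As a rough sketch of that shape (the method names, signatures, and memory/message-bus interfaces here are illustrative assumptions, not the actual src/agents/base_agent.py):

```python
# Hedged sketch of a BaseAgent-style abstract class; details are assumed.
from abc import ABC, abstractmethod
from typing import Any


class BaseAgent(ABC):
    def __init__(self, agent_id: str, memory: Any, message_bus: Any) -> None:
        self.agent_id = agent_id
        self.memory = memory            # Redis-backed shared memory
        self.message_bus = message_bus  # async message passing between agents
        self.running = False

    async def start(self) -> None:
        """Lifecycle: subscribe to the bus and begin accepting work."""
        self.running = True
        await self.message_bus.subscribe(self.agent_id, self.handle_message)

    async def stop(self) -> None:
        """Lifecycle: stop accepting work."""
        self.running = False

    async def handle_message(self, message: dict[str, Any]) -> None:
        """Task processing wrapped in shared error handling."""
        task = message["task"]
        try:
            result = await self.process_task(task)
            await self.memory.set(f"result:{task['id']}", result)
        except Exception as exc:
            await self.memory.set(f"error:{task['id']}", str(exc))

    @abstractmethod
    async def process_task(self, task: dict[str, Any]) -> Any:
        """Each concrete agent implements its own task logic."""
```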
- Planner Agent: Task decomposition and planning
- File Handler Agent: File system operations
- Shell Executor Agent: Command execution
- Monitor Agent: System monitoring and logging
- Submission: Tasks are submitted to the orchestrator
- Validation: Parameters and safety checks are performed
- Scheduling: Tasks are queued based on priority and dependencies
- Execution: Suitable agents execute the tasks
- Completion: Results are stored and status updated
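A hedged end-to-end sketch of this flow is shown below; the Orchestrator class name, the submit_task/get_result signatures, and the task fields are assumptions made for illustration, not the documented API:

```python
# Illustrative only: the orchestrator API shown here is assumed.
import asyncio

from src.core.orchestrator import Orchestrator  # assumed class name


async def main() -> None:
    orchestrator = Orchestrator()

    # Submission: hand the task to the orchestrator with a priority and an
    # (empty) dependency list; validation and safety checks run at this point.
    task_id = await orchestrator.submit_task(
        task_type="file_operation",
        parameters={"action": "read", "path": "data/input.txt"},
        priority=5,
        depends_on=[],
    )

    # Completion: fetch the stored result once a suitable agent has executed it.
    result = await orchestrator.get_result(task_id)
    print(result)


asyncio.run(main())
```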
- File Operations: Read, write, delete, list files
- Command Execution: Execute shell commands safely
- Data Analysis: Process and analyze data
- Code Generation: Generate code based on requirements
- File Operation Safety: Path validation and extension checking
- Command Execution Safety: Blocked commands and pattern detection
- Execution Time Limits: Prevent runaway processes
- File Size Limits: Prevent memory exhaustion
- Agent Operation Safety: Controlled agent interactions
- Input Validation: Comprehensive data validation
- Path Sanitization: Safe file path handling
- Command Filtering: Dangerous command detection
- Resource Limits: Memory and time constraints
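For illustration, command filtering and path sanitization of this kind might look roughly like the sketch below; the blocklist, regex patterns, and function names are assumptions rather than the project's actual safety module:

```python
# Hedged sketch of command filtering and path sanitization checks.
import re
from pathlib import Path

BLOCKED_COMMANDS = {"rm", "mkfs", "shutdown", "reboot"}
DANGEROUS_PATTERNS = [re.compile(r"rm\s+-rf\s+/"), re.compile(r">\s*/dev/sd")]


def is_command_allowed(command: str) -> bool:
    """Command filtering: reject blocked executables and dangerous patterns."""
    executable = command.strip().split()[0] if command.strip() else ""
    if executable in BLOCKED_COMMANDS:
        return False
    return not any(p.search(command) for p in DANGEROUS_PATTERNS)


def sanitize_path(path: str, workspace: Path) -> Path:
    """Path sanitization: resolve the path and require it to stay in the workspace."""
    resolved = (workspace / path).resolve()
    if not resolved.is_relative_to(workspace.resolve()):
        raise ValueError(f"Path escapes the workspace: {path}")
    return resolved
```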
- Structured Logging: JSON-formatted logs with context
- Multiple Outputs: Console and file logging
- Log Rotation: Automatic log file management
- Log Levels: Configurable verbosity
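Loguru supports this setup directly. A minimal sketch, using the level and file path from the example .env above (the rotation and retention policy are assumptions):

```python
# Minimal Loguru configuration sketch; TaskAI's actual sinks may differ.
import sys

from loguru import logger

logger.remove()  # drop the default handler so both sinks are explicit

# Console output with a configurable level (LOG_LEVEL).
logger.add(sys.stderr, level="INFO")

# File output with rotation and JSON-structured records (LOG_FILE_PATH).
logger.add(
    "logs/taskai.log",
    level="INFO",
    rotation="10 MB",   # automatic log file management
    retention="7 days",
    serialize=True,     # structured JSON logs with context
)

logger.bind(agent="planner").info("Task accepted")
```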
- System Status: Real-time system health monitoring
- Agent Performance: Task success/failure tracking
- Resource Usage: Memory and execution time monitoring
- Safety Violations: Security incident tracking
- Groq: Fast inference with Llama models
- Google: Gemini models for complex reasoning
- OpenAI: GPT models for general tasks
- Redis: Shared memory and caching
- File System: Local and remote file operations
- Shell: Command execution and process management
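The Redis integration backs the shared-memory system. A hedged sketch along the lines of src/core/memory.py, using redis-py's asyncio client (the class and method names are illustrative):

```python
# Illustrative sketch only; the real src/core/memory.py may differ.
import json
from typing import Any, Optional

import redis.asyncio as redis


class SharedMemory:
    """Thin async wrapper that stores JSON values in Redis."""

    def __init__(self, host: str = "localhost", port: int = 6379, db: int = 0) -> None:
        self._client = redis.Redis(host=host, port=port, db=db)

    async def set(self, key: str, value: Any, ttl: Optional[int] = None) -> None:
        # Serialize to JSON so agents can share structured data; a TTL enables
        # automatic cleanup and expiration.
        await self._client.set(key, json.dumps(value), ex=ttl)

    async def get(self, key: str) -> Optional[Any]:
        raw = await self._client.get(key)
        return json.loads(raw) if raw is not None else None

    async def close(self) -> None:
        await self._client.close()
```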
# Install development dependencies
pip install -e ".[dev]"
# Run tests
pytest
# Code formatting
black src/
isort src/
# Type checking
mypy src/
- Inherit from BaseAgent
- Implement required abstract methods
- Register agent type with orchestrator
- Add validation rules
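For example, a minimal agent following these steps might look like the sketch below; the EchoAgent name and the register_agent call are assumptions about the orchestrator's API, shown only to make the steps concrete:

```python
# Hypothetical example; import paths follow the project layout above, but the
# registration API is assumed.
from typing import Any

from src.agents.base_agent import BaseAgent
from src.core.orchestrator import Orchestrator


class EchoAgent(BaseAgent):
    """Toy agent that simply returns its input parameters."""

    async def process_task(self, task: dict[str, Any]) -> Any:
        return {"echo": task.get("parameters", {})}


def register(orchestrator: Orchestrator) -> None:
    # Register the agent type so matching tasks can be routed to it.
    orchestrator.register_agent("echo", EchoAgent)
```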
- Define task parameters
- Add validation schema
- Implement agent logic
- Update CLI interface
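As a sketch of the validation-schema step, assuming Pydantic models are used for task parameters in src/utils/validators.py (the task type and field names are illustrative):

```python
# Illustrative schema for a hypothetical 'data_analysis' task type.
from pydantic import BaseModel, Field


class DataAnalysisTask(BaseModel):
    source_path: str = Field(..., description="File to analyze")
    operation: str = Field("summary", description="Analysis to perform")
    max_rows: int = Field(10_000, gt=0, description="Row limit to bound resource use")
```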
- Async Processing: Non-blocking I/O operations
- Connection Pooling: Efficient Redis connections
- Task Queuing: Priority-based task scheduling
- Memory Management: Automatic cleanup and expiration
- Resource Limits: Prevent resource exhaustion
- Horizontal Scaling: Multiple agent instances
- Load Balancing: Intelligent task distribution
- Resource Isolation: Per-agent resource limits
- Fault Tolerance: Graceful error handling
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests
- Submit a pull request
- Follow PEP 8
- Use type hints
- Add docstrings
- Write tests
- Use async/await patterns
MIT License - see LICENSE file for details.
- Documentation: Check the code comments and docstrings
- Issues: Report bugs and feature requests on GitHub
- Discussions: Join community discussions
- Web-based dashboard
- REST API
- Docker support
- Kubernetes deployment
- Plugin system
- Advanced scheduling
- Machine learning integration
- Cloud deployment guides
TaskAI - Empowering intelligent task execution through multi-agent collaboration.