trading

I’ve recently been into using agentic AI models for programming!

I’ve previously used OpenHands, an open-source project that lets you develop your own projects with the help of AI, and the Continue extension for VSCode, which performs similar tasks. At the moment I’m using the Zed editor, from zed.dev: paying $20/month for the pro plan gets you access to both Claude Sonnet and Opus, at cheaper prices than using them directly from Anthropic.

Obviously I was curious to build a system where AI did basically all of the work… But I wanted to take it a little bit further than just asking it to build a system.

As such, I decided that for this project everything would be defined by AI: the project structure, the prompts, the separation into tasks…

I decided to start with Claude Opus 4, the most capable model for agentic tasks, and specified my requirements for the system. I told it what I wanted from the system and which APIs I wanted to use, and lastly I asked it to request any additional information it would need to create the plan. Here’s the first interaction with Opus.

I want to create a python project. I will be using AI agents with zed.dev for the development. It’s supposed to be an automated trading system. Using Alpaca for the trading, TwelveData for ohlc data, FinViz Elite as the screener to check what tickers to trade. Preference for saving the data as parquet so it is easy to process with Polars. So super fast and parallel by default. Dockerization is a must as well since we want to be able to run it as a service in the background while we do other tasks. Venv will also be needed. I need you to split the project into the multiple tasks. For each task I need you to create a prompt for the AI agent that will make sure it succeeds in creating a good system.

I’ll wait for you to read this first and ask me questions so we can create the best system possible. I’m sure you will want more information than what I provided at the top.

This prompted the following questions:

Great project! I’d love to help you create a comprehensive automated trading system. Before we break this down into tasks and create effective AI agent prompts, I have several questions to better understand your requirements:

Trading Strategy & Logic

  1. What type of trading strategy are you implementing?

    • Day trading, swing trading, or long-term positions?

    • Technical analysis, fundamental analysis, or a combination?

    • Any specific indicators or patterns you want to use?

  2. Risk Management:

    • What position sizing methodology? (Kelly Criterion, fixed percentage, etc.)

    • Stop-loss and take-profit strategies?

    • Maximum number of concurrent positions?

    • Portfolio allocation rules?

Data & Analysis

  1. FinViz Elite Screening:

    • What specific screener criteria will you use?

    • How often will you run the screener? (daily, hourly, etc.)

    • Will you need to scrape FinViz or do they provide an API?

  2. TwelveData Usage:

    • What timeframes do you need? (1min, 5min, daily, etc.)

    • Historical data requirements for backtesting?

    • Real-time vs delayed data needs?

System Architecture

  1. Execution & Monitoring:

    • Paper trading first or straight to live?

    • How will you monitor positions and system health?

    • Alert/notification system requirements?

    • Logging and debugging needs?

  2. Performance & Scalability:

    • How many tickers will you monitor simultaneously?

    • Frequency of trade decisions?

    • Backtesting requirements?

  3. Data Pipeline:

    • Will you store historical data locally or fetch on-demand?

    • Data update frequency?

    • Any data validation or cleaning requirements?

Technical Preferences

  1. Additional Technologies:

    • Preference for async programming (asyncio)?

    • Message queue system (Redis, RabbitMQ) for component communication?

    • Database needs beyond Parquet files (PostgreSQL for trade history)?

    • Web dashboard or CLI-only interface?

  2. Development & Deployment:

    • Single container or docker-compose with multiple services?

    • CI/CD pipeline needs?

    • Configuration management approach (env files, config files, secrets)?

Understanding these details will help me create more targeted and effective prompts for each component of your system. This will ensure the AI agents have clear direction and can build a robust, production-ready trading system.

I answered these questions and the LLM created a vast initial proposal containing multiple tasks, each with up to eight subpoints, along with the prompt I could use to ask Zed to generate it.

Of course, most of the code generation was actually done by Sonnet rather than Opus: Opus is a hungry boy and eats all the credits.

I’ll give you an example of a task: this first one, which basically created the skeleton of our system.

Task 1: Project Foundation & Structure

AI Agent Prompt:

Create a Python project structure for an automated trading system with the following requirements:

1. Create a monorepo structure with these services:
   - data_collector: Fetches data from TwelveData and FinViz
   - strategy_engine: Implements trading strategies
   - trade_executor: Handles Alpaca API operations
   - risk_manager: Manages position sizing and risk
   - scheduler: Orchestrates all services
   - database: PostgreSQL service configuration

2. Project structure should be:

trading-system/
├── docker-compose.yml
├── .env.example
├── README.md
├── shared/
│   ├── __init__.py
│   ├── models.py (Pydantic models for data validation)
│   ├── config.py (centralized configuration)
│   └── utils.py
├── services/
│   ├── data_collector/
│   │   ├── Dockerfile
│   │   ├── requirements.txt
│   │   ├── src/
│   │   └── tests/
│   ├── strategy_engine/
│   │   ├── Dockerfile
│   │   ├── requirements.txt
│   │   ├── src/
│   │   └── tests/
│   ├── trade_executor/
│   │   ├── Dockerfile
│   │   ├── requirements.txt
│   │   ├── src/
│   │   └── tests/
│   ├── risk_manager/
│   │   ├── Dockerfile
│   │   ├── requirements.txt
│   │   ├── src/
│   │   └── tests/
│   └── scheduler/
│       ├── Dockerfile
│       ├── requirements.txt
│       ├── src/
│       └── tests/
├── data/
│   ├── parquet/
│   └── logs/
└── scripts/
    ├── setup.sh
    └── backtest.py

3. Create docker-compose.yml with:
   - All services defined above
   - Redis for message queue
   - PostgreSQL for trade history
   - Proper networking and volume mounts
   - Environment variable configuration
4. Create base Dockerfile template that each service can extend
5. Setup .env.example with:
   - ALPACA_API_KEY, ALPACA_SECRET_KEY, ALPACA_BASE_URL
   - TWELVE_DATA_API_KEY
   - FINVIZ_API_KEY
   - Database credentials
   - Redis configuration
   - Gotify URL and token
6. Create a Makefile with commands for:
   - Building all services
   - Running tests
   - Starting/stopping services
   - Viewing logs
   - Database migrations
7. Setup logging configuration that:
   - Uses Python’s logging module
   - Outputs to both console and files
   - Includes log rotation
   - Has different levels for different modules
8. Create shared Pydantic models for:
   - Market data (OHLCV)
   - Trade signals
   - Portfolio state
   - Risk parameters

Include proper .gitignore and development setup instructions.
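To make point 8 of that prompt concrete, here is a minimal sketch of what a shared OHLCV model could look like. This is my own illustration, assuming Pydantic v2; the field names and the validator are guesses, not the code the agent actually generated.

```python
from datetime import datetime

from pydantic import BaseModel, field_validator


class OHLCV(BaseModel):
    """One bar of market data, shared by every service in the monorepo."""

    symbol: str
    timestamp: datetime
    open: float
    high: float
    low: float
    close: float
    volume: int

    @field_validator("volume")
    @classmethod
    def volume_non_negative(cls, v: int) -> int:
        # Reject obviously corrupt bars before they reach a strategy.
        if v < 0:
            raise ValueError("volume must be >= 0")
        return v


# Pydantic coerces the ISO-8601 string into a datetime for us.
bar = OHLCV(
    symbol="AAPL",
    timestamp="2024-01-02T09:30:00",
    open=187.15, high=187.60, low=187.10, close=187.40,
    volume=120_000,
)
```

The value of putting models like this in `shared/` is that the data collector, strategy engine, and trade executor all validate against the same schema, so a malformed bar fails loudly at the boundary instead of deep inside a strategy.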

If you have ever played with agentic AI models and the way they work, you are probably freaking out that I am using such a generic prompt. Generic not in the tasks it describes, because those are well specified, but in the sense that it asks the agent to create many files with a lot of code in one go. This is not ideal for agentic models, which prefer singular tasks with very specific instructions.

However, believe it or not, these prompts totally worked, and the agents created code that actually looked pretty promising.

One can imagine, though, that the generated code caused no small number of problems: thousands upon thousands of linting errors, missing variables, environment variables that didn’t exist.

I decided to generate the code for all the tasks first and only then start fixing it. The truth is, if I had to solve all the linting issues, missing variables, truncated files, and so on by hand, I would not leave my computer for two months. But here’s the brilliant part: I did not try to fix anything. I didn’t need to!

THIS PAGE IS STILL WORK IN PROGRESS