What Is Algorithmic Trading?
Algorithmic trading — often called algo trading or automated trading — is the use of computer programs to execute trading instructions automatically, based on pre-defined rules or models. At its most basic, an algorithmic system might monitor a price feed and place a buy order whenever a specified condition is met. At its most sophisticated, it might integrate hundreds of data inputs, model complex market dynamics in real time, and adjust its behaviour continuously in response to new information. Both ends of this spectrum are described by the same phrase, which is one reason why the term "algorithmic trading" is so frequently misunderstood.
It is important to distinguish between algorithmic execution and algorithmic strategy. Algorithmic execution is concerned with the mechanics of placing trades: breaking large orders into smaller pieces, timing entries to minimise market impact, or routing orders to the venues offering the best available prices. This form of algorithmic trading is routine and widely used by institutional investors, and it says nothing about why a trade is being made — only about how it is executed once a decision has been taken. Algorithmic strategy, by contrast, is concerned with generating the trading decisions themselves: identifying when to buy, when to sell, and in what size, using systematic rules or models. Much of the public discourse about AI trading conflates these two functions, but they are distinct and should be understood separately.
The history of algorithmic trading is primarily institutional. The development of electronic exchanges in the 1970s and 1980s created the conditions for systematic, rules-based trading at scale. Quantitative hedge funds emerged in the 1980s and 1990s, applying mathematical models to equity and futures markets. By the 2000s, algorithmic execution had become standard practice on trading desks at major banks and investment managers. The democratisation of algorithmic tools for retail investors is a much more recent phenomenon — enabled by lower-cost computing, widespread API access to market data, and the emergence of retail platforms with strategy-builder functionality. The availability of tools does not imply equivalence of capability, and retail algorithmic tools should be evaluated on their own terms rather than by comparison with institutional-grade systems.
Types of Algorithmic Trading Systems
Execution Algorithms
Execution algorithms are designed to achieve a specific execution objective rather than to generate trading ideas. The most common types are VWAP (Volume-Weighted Average Price) algorithms, which try to execute an order at or near the volume-weighted average price over a specified period; TWAP (Time-Weighted Average Price) algorithms, which spread execution evenly across a time window; and Implementation Shortfall algorithms, which attempt to minimise the difference between the decision price and the final average execution price. These tools are used primarily by institutional investors dealing in large sizes, and their purpose is entirely mechanical — they do not generate signals or identify opportunities.
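The mechanical character of these algorithms can be illustrated with a minimal TWAP sketch. The function below is a toy illustration, not a production execution algorithm: it simply divides a parent order into equal child orders across a time window, which is the core scheduling idea behind TWAP. All quantities and intervals are illustrative.

```python
# Hypothetical TWAP sketch: split a parent order into equal child orders
# across a time window. Illustration only; real execution algorithms also
# handle partial fills, venue routing, and market impact.

def twap_schedule(total_qty: int, window_minutes: int, interval_minutes: int):
    """Return a list of (minute_offset, child_qty) slices that sum to total_qty."""
    n_slices = window_minutes // interval_minutes
    base = total_qty // n_slices
    remainder = total_qty % n_slices
    schedule = []
    for i in range(n_slices):
        qty = base + (1 if i < remainder else 0)  # spread the remainder evenly
        schedule.append((i * interval_minutes, qty))
    return schedule

# Example: 10,000 shares over 60 minutes in 5-minute slices
slices = twap_schedule(10_000, 60, 5)
assert sum(q for _, q in slices) == 10_000
assert len(slices) == 12
```

A VWAP algorithm would replace the equal slices with slices sized in proportion to expected trading volume, but the slicing principle is the same.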
Trend-Following Systems
Trend-following systems attempt to identify and participate in sustained directional movements in prices. They typically use lagging indicators — moving averages, momentum measures, breakout rules — to signal when a trend may be establishing itself, and they hold positions until the trend reverses. Trend-following has a long history in managed futures and commodity trading, and systematic trend-following funds have demonstrated that disciplined, rules-based trend capture can be a viable long-term strategy — though it involves extended periods of drawdown and poor performance in range-bound markets. The underlying logic is that price trends tend to persist due to the gradual diffusion of information and the herding behaviour of market participants.
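A minimal sketch of the kind of lagging rule described above, using a hypothetical moving-average crossover. The window lengths and price series are illustrative only, not a recommended configuration.

```python
# Toy trend-following signal: long when a fast moving average sits above
# a slow one, short when below. Windows are assumed values for illustration.

def sma(prices, window):
    """Simple moving average of the last `window` prices (None until enough data)."""
    if len(prices) < window:
        return None
    return sum(prices[-window:]) / window

def trend_signal(prices, fast=5, slow=20):
    """Return +1 (long) when the fast MA is above the slow MA, -1 when below, 0 otherwise."""
    f, s = sma(prices, fast), sma(prices, slow)
    if f is None or s is None:
        return 0  # not enough history yet
    if f > s:
        return 1
    if f < s:
        return -1
    return 0

# A steadily rising price series eventually produces a long signal
rising = [100 + 0.5 * i for i in range(30)]
assert trend_signal(rising) == 1
```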
Mean Reversion Strategies
Mean reversion strategies take the opposite view: they assume that prices which have deviated significantly from a historical norm or equilibrium will tend to revert towards it. A simple example would be buying a stock when it has fallen more than two standard deviations below its 20-day moving average, on the assumption that the deviation is temporary. Mean reversion strategies tend to perform well in range-bound markets and poorly in strongly trending ones — the inverse of trend-following. Many retail algorithmic tools incorporate mean reversion logic, often without labelling it as such.
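The two-standard-deviation rule mentioned above can be sketched in a few lines. This is an illustration of the logic only, not a recommended strategy; the window and threshold are arbitrary assumed values.

```python
import statistics

# Sketch of the mean-reversion rule described in the text: buy when price
# falls more than two standard deviations below its 20-day moving average.

def zscore(prices, window=20):
    """Z-score of the latest price against its trailing window."""
    recent = prices[-window:]
    mean = statistics.fmean(recent)
    sd = statistics.pstdev(recent)
    if sd == 0:
        return 0.0  # flat series: no deviation to measure
    return (prices[-1] - mean) / sd

def mean_reversion_signal(prices, window=20, threshold=2.0):
    """+1 (buy) below -threshold, -1 (sell) above +threshold, else 0."""
    z = zscore(prices, window)
    if z < -threshold:
        return 1
    if z > threshold:
        return -1
    return 0

# A flat series with a sudden sharp drop triggers a buy signal
prices = [100.0] * 19 + [90.0]
assert mean_reversion_signal(prices) == 1
```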
Statistical Arbitrage
Statistical arbitrage (stat arb) exploits historical statistical relationships between related securities. A classic form is pairs trading: if two stocks have historically moved together and their prices diverge, a stat arb strategy might buy the underperformer and short the outperformer, expecting the spread to close. More sophisticated stat arb strategies identify relationships across larger groups of securities using factor models and machine learning. Stat arb requires careful modelling of transaction costs, as the potential profit per trade is typically small and erodes quickly if costs are not accurately estimated.
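A toy version of the pairs-trading idea can be sketched as follows. A 1:1 hedge ratio is assumed for simplicity; real stat arb strategies estimate the hedge ratio statistically and model transaction costs carefully, for the reasons given above.

```python
import statistics

# Illustrative pairs-trading sketch: z-score the spread between two
# historically related price series and signal when it diverges.
# The 1:1 hedge ratio and thresholds are simplifying assumptions.

def spread_signal(prices_a, prices_b, window=20, threshold=2.0):
    """Short the spread when it is unusually rich, buy it when unusually cheap."""
    spreads = [a - b for a, b in zip(prices_a, prices_b)]
    recent = spreads[-window:]
    mean = statistics.fmean(recent)
    sd = statistics.pstdev(recent)
    if sd == 0:
        return 0
    z = (spreads[-1] - mean) / sd
    if z > threshold:
        return -1   # spread rich: short A, buy B
    if z < -threshold:
        return 1    # spread cheap: buy A, short B
    return 0

# Two series that track each other until A jumps away from B
a = [50.0] * 19 + [55.0]
b = [48.0] * 20
assert spread_signal(a, b) == -1
```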
Market-Making Algorithms
Market-making algorithms continuously quote both buy and sell prices for an instrument, profiting from the bid-ask spread. They require sophisticated inventory management and risk controls, and they operate most profitably in environments with predictable order flow and low volatility. Market-making is primarily an institutional and high-frequency trading activity and is not typically available as a retail trading strategy.
Machine Learning-Based Strategies
Machine learning strategies use statistical models that learn from historical data to identify patterns and generate signals. They may use any of the techniques described in the What Is AI in Trading? guide — supervised classification, regression, reinforcement learning, NLP-based sentiment signals — and they range from relatively simple to highly complex. The defining characteristic is that their decision logic is derived from data rather than hand-coded by a human. This creates both opportunities (models can find patterns invisible to human analysts) and risks (models can overfit to historical noise and fail in live markets).
Key point: The algorithmic systems accessible to retail investors differ fundamentally in sophistication, data access, infrastructure, and research depth from institutional quantitative strategies. Retail platforms may use similar terminology, but the underlying capabilities are not equivalent. This gap should be kept clearly in mind when evaluating any platform's algorithmic or AI claims.
How Backtesting Works — and Why It Can Mislead
Backtesting is the process of applying a trading strategy to historical data to evaluate how it would have performed. It is the primary way algorithmic strategies are developed, refined, and initially assessed. A researcher writes a set of rules or trains a model, then runs it against historical price data to see how it would have performed — what trades it would have generated, what returns those trades would have produced, and what drawdowns occurred along the way. Backtesting is a legitimate and necessary tool. The problem arises when its results are treated as a reliable predictor of future live performance, which they often are not.
Overfitting and curve-fitting are the most important pitfalls. When a strategy is developed by repeatedly adjusting its parameters until historical performance looks satisfactory, the process systematically biases it towards fitting the specific characteristics of the historical data used — including its noise. A strategy developed this way will appear impressive in backtests because it has, in effect, been optimised to describe the past. When applied to new data it has not seen, its performance will typically be considerably worse. The more parameters a strategy has and the more rounds of optimisation it undergoes, the more severe this problem becomes. Strategies with 20 tunable parameters and hundreds of optimisation cycles can produce spectacular backtests while having essentially no genuine predictive content.
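The curve-fitting problem can be demonstrated directly: searching many parameter combinations on a pure random walk, which by construction contains no exploitable pattern, still produces a "best" backtest that flatters the strategy, simply because the luckiest parameters were kept. The sketch below is a toy demonstration under those assumptions, not a real research workflow.

```python
import random

# Toy curve-fitting demonstration: grid-search moving-average crossover
# parameters on a signal-free random walk. The best result looks better
# than the typical one purely through selection, not skill.

random.seed(7)
prices = [100.0]
for _ in range(500):
    prices.append(prices[-1] * (1 + random.gauss(0, 0.01)))  # no real pattern

def crossover_pnl(prices, fast, slow):
    """Total return of a fast/slow moving-average crossover rule on this series."""
    total = 0.0
    for t in range(slow, len(prices) - 1):
        f = sum(prices[t - fast:t]) / fast
        s = sum(prices[t - slow:t]) / slow
        if f > s:  # long the next bar when the fast MA is above the slow MA
            total += prices[t + 1] / prices[t] - 1
    return total

results = [crossover_pnl(prices, f, s)
           for f in range(2, 12) for s in range(15, 60, 5)]
best, median = max(results), sorted(results)[len(results) // 2]
assert best >= median  # the optimised result flatters a signal-free series
```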
Look-ahead bias occurs when a backtest inadvertently uses information that would not have been available at the time the trade was made. Common examples include using daily closing prices to generate signals that would notionally be executed at those same prices (impossible in practice), using revised economic data rather than the initial release that was available in real time, or incorporating corporate events (earnings, mergers) in a way that implies the strategy could have known about them in advance. Look-ahead bias is surprisingly easy to introduce by accident and systematically inflates backtested returns.
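One common correction for the same-bar problem is to lag signals by one bar, so that a trade can only act on information that existed before the order was placed. The sketch below is illustrative; the prices and signals are invented for the example.

```python
# Sketch of the look-ahead problem described above: a signal computed from
# today's close cannot be executed at today's close. Lagging signals by one
# bar (lag=1) is a simple, common correction; lag=0 reproduces the bias.

def backtest_returns(prices, signals, lag=1):
    """Per-bar strategy returns; with lag=1 each signal acts on the NEXT bar."""
    rets = []
    for t in range(1, len(prices)):
        bar_return = prices[t] / prices[t - 1] - 1
        # lag=1 uses the signal known at t-1; lag=0 peeks at bar t's own signal
        sig = signals[t - lag] if t - lag >= 0 else 0
        rets.append(sig * bar_return)
    return rets

prices = [100, 101, 99, 102]
signals = [1, 1, 0, 1]          # hypothetical long/flat signals per bar
lagged = backtest_returns(prices, signals, lag=1)
biased = backtest_returns(prices, signals, lag=0)
assert len(lagged) == 3
assert lagged != biased          # peeking changes the measured performance
```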
Survivorship bias affects backtests that use only securities that are currently trading, ignoring companies that went bankrupt, were delisted, or were merged out of existence. If a strategy is backtested on the current constituents of an index, all of those companies survived to the present — which is not the universe of companies that existed when the backtest period began. The strategy has been implicitly given the benefit of hindsight: it avoided the failures because the failures were excluded from the dataset. Backtests conducted on survivorship-biased data overstate performance, sometimes substantially.
Unrealistic cost assumptions are endemic in retail backtests. A backtest might assume execution at the mid-price between bid and ask, zero market impact, instantaneous fills, and no slippage. In live trading, particularly for less liquid instruments or at larger sizes, all of these assumptions break down. Bid-ask spreads, which represent an immediate cost on every trade, are sometimes ignored entirely. Commission structures may be understated. The cumulative effect of these cost optimisms can turn a marginally profitable backtest into a loss-making live strategy.
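The effect of cost assumptions is easy to illustrate. In the sketch below, a per-trade cost of 10 basis points (an assumed round figure, not representative of any particular broker) is deducted each time the strategy trades; a stream of small gross gains shrinks accordingly.

```python
# Illustration of how cost assumptions change a backtest: compound per-bar
# returns while charging an assumed cost (spread + commission) per trade.

def net_equity(gross_returns, trades_per_bar, cost_bps=10):
    """Final equity multiple after charging cost_bps per trade each bar."""
    equity = 1.0
    cost = cost_bps / 10_000  # basis points to decimal
    for r, n_trades in zip(gross_returns, trades_per_bar):
        equity *= (1 + r - n_trades * cost)
    return equity

gross = [0.002] * 100              # 100 bars of small gross gains
frictionless = net_equity(gross, [0] * 100)       # the optimistic backtest
realistic = net_equity(gross, [1] * 100)          # one trade every bar
assert realistic < frictionless    # costs erode the frequently traded version
```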
In-sample versus out-of-sample testing is a critical distinction. In-sample testing uses the data on which the strategy was developed; out-of-sample testing applies it to data that was held aside during development and not used to optimise the strategy's parameters. A strategy that performs well in-sample but poorly out-of-sample is almost certainly overfitted. Rigorous strategy development requires a clear separation between development data and validation data. A further step — forward testing or paper trading — involves running the strategy in real time but without committing real capital, to assess live performance without the costs of real losses. Even forward testing has limitations (it is typically conducted over short periods and may coincide with unusually favourable or unfavourable conditions), but it provides evidence that is qualitatively different from historical backtests.
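The separation described above can be sketched as a chronological split with a toy parameter-selection step performed only on the in-sample portion. The data and the scoring objective are illustrative.

```python
# Minimal in-sample / out-of-sample sketch: choose parameters on the first
# portion of the data only, then evaluate them untouched on the held-out rest.

def split(data, in_sample_fraction=0.7):
    """Chronological split; never shuffle a time series before splitting."""
    cut = int(len(data) * in_sample_fraction)
    return data[:cut], data[cut:]

def choose_threshold(in_sample, candidates):
    """Pick the candidate threshold that scores best in-sample (toy objective)."""
    def score(th, data):
        return sum(1 for x in data if x > th)
    return max(candidates, key=lambda th: score(th, in_sample))

data = list(range(100))                      # stand-in for a time series
train, holdout = split(data)
assert len(train) == 70 and len(holdout) == 30
assert train[-1] < holdout[0]                # chronology preserved
best = choose_threshold(train, [10, 50, 90]) # tuned on in-sample data only
assert best == 10
```

The key discipline is that `holdout` is never consulted while `best` is being chosen; its only role is to measure how well the chosen parameters generalise.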
Important: Backtested performance results are not a reliable indicator of future live trading results. Always treat backtest results with significant scepticism, particularly when they appear highly favourable, when out-of-sample testing is not disclosed, or when the cost assumptions underlying the results are not clearly stated. A compelling backtest demonstrates that a strategy would have worked in the past; it does not demonstrate that it will work in the future.
Common Pitfalls in Algorithmic Trading
Over-optimisation and curve fitting. As described in the backtesting section above, the practice of repeatedly adjusting a strategy's parameters to improve historical results almost inevitably produces a system that fits historical noise rather than genuine signal. The warning sign is a strategy with many adjustable parameters, a long optimisation history, and excellent historical performance — combined with no credible explanation of why the underlying logic should continue to work in the future.
Ignoring transaction costs. Algorithmic strategies that turn over their positions frequently face cumulative transaction costs — spreads, commissions, overnight financing charges — that can substantially erode or eliminate performance. A strategy that generates a large number of small winning trades in backtests, where costs are understated, may be loss-making in practice. Every additional trade is a cost, and trading systems should be evaluated net of realistic transaction costs, not gross.
Assuming perfect execution. Backtests typically assume that orders are filled immediately and at the specified price. In live markets, fills may be partial, delayed, or at a worse price than expected, particularly in fast-moving markets or for less liquid instruments. Execution algorithms are designed precisely to manage this problem for institutional investors, but most retail algorithmic tools do not provide this level of execution sophistication.
Ignoring liquidity constraints. A strategy that works in backtests at a given position size may face difficulties at larger sizes if the market for the instrument is not deep enough to absorb the order without moving the price. For retail investors trading modest sizes in major instruments, this is often not a binding constraint. For any strategy operating in thinner markets, or for platform-level strategies where many users are following the same signal simultaneously, liquidity should be considered.
Model degradation over time. Algorithmic strategies have a natural life cycle. When a strategy is first deployed, it may capture a genuine inefficiency. As other market participants identify the same pattern, the inefficiency narrows. As the strategy becomes more widely known — through publication, imitation, or reverse engineering — it may disappear entirely. Models trained on historical data may also degrade simply because market conditions evolve: the relationships between variables that were stable during the training period may not persist indefinitely. Regular monitoring of live performance, and willingness to retire or replace underperforming models, is essential.
Technical failures. Automated systems are dependent on technology that can fail. Connectivity outages, software bugs, data feed errors, exchange outages, and API failures can all cause an algorithmic system to behave unexpectedly — placing orders it should not, failing to place orders it should, or losing track of open positions. Any algorithmic system should have robust error-handling, position monitoring, and manual override capabilities. The consequences of a technical failure can be severe if adequate safeguards are not in place.
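Safeguards of this kind can be sketched as a simple "kill switch" that halts order submission when errors accumulate or a position limit would be breached. The class below is hypothetical; its names and limits do not correspond to any real platform API.

```python
# Hypothetical safeguard sketch: halt order submission after repeated
# technical failures and enforce a hard position limit. Limits are
# illustrative assumptions, not recommendations.

class TradingGuard:
    def __init__(self, max_errors=3, max_position=1_000):
        self.max_errors = max_errors
        self.max_position = max_position
        self.error_count = 0
        self.position = 0
        self.halted = False

    def record_error(self):
        """Count a technical failure; halt all trading past the threshold."""
        self.error_count += 1
        if self.error_count >= self.max_errors:
            self.halted = True

    def may_submit(self, order_qty):
        """Allow an order only if not halted and within the position limit."""
        if self.halted:
            return False
        return abs(self.position + order_qty) <= self.max_position

guard = TradingGuard()
assert guard.may_submit(500)
guard.record_error(); guard.record_error(); guard.record_error()
assert not guard.may_submit(1)   # circuit breaker tripped after repeated errors
```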
Mistaking correlation for causation. Machine learning models, in particular, can identify statistical correlations in data that have no causal foundation — they are artefacts of the training period rather than genuine relationships. A model might learn that a particular technical indicator predicted price movements in the training data without there being any economic reason why it should continue to do so. Strategies built on spurious correlations are unlikely to perform reliably out of sample. Good model development includes asking whether the identified pattern makes economic sense, not just whether it was statistically significant in historical data.
High-Frequency Trading: Context for Retail Investors
High-frequency trading (HFT) is a specific category of algorithmic trading characterised by very high order submission rates, very short holding periods (often measured in milliseconds or microseconds), and strategies that depend on speed advantages over other market participants. HFT firms compete to be the fastest: they co-locate their servers physically within exchange data centres to minimise the distance signals must travel, use custom-built hardware and low-level programming to reduce latency to the minimum possible, and invest heavily in proprietary network infrastructure. The competitive advantage in HFT is, by definition, available only to those with the infrastructure to achieve it — which requires capital expenditure and technical capabilities far beyond those of retail investors or, indeed, most medium-sized financial institutions.
HFT encompasses several distinct strategies: market-making (continuously quoting both sides of a market and profiting from the spread), statistical arbitrage (exploiting tiny price discrepancies across related instruments or venues), and latency arbitrage (acting on publicly available information — such as exchange order flow data — faster than other participants can react). The role of HFT in markets is subject to ongoing academic and regulatory debate: its proponents argue that it improves liquidity and reduces bid-ask spreads; its critics argue that it extracts rents from slower participants and can amplify volatility during market stress. Neither view is entirely wrong, and the net effect likely varies by market, instrument, and market conditions.
The relevance for retail investors is primarily contextual: HFT should not be confused with the algorithmic or AI trading tools available through retail platforms. They operate on entirely different timescales, use entirely different infrastructure, and pursue entirely different objectives. References in retail platform marketing to "millisecond execution" should be understood in context: modern electronic trading infrastructure processes orders quickly, but speed of order routing is not the same as the co-located, sub-millisecond advantage that defines genuine HFT.
Note: References to "millisecond execution" or "HFT-grade technology" in retail platform marketing should be evaluated carefully. Achieving fast order routing through a modern electronic brokerage is not the same as operating a genuine high-frequency trading strategy. These capabilities require co-location infrastructure, proprietary hardware, and direct market access arrangements that are not available through ordinary retail accounts. When such language appears in retail platform marketing, it is worth asking precisely what is being claimed and how it would benefit a retail investor's actual trading activity.
Retail Algorithmic Tools: What's Actually Available
Strategy Builders
Strategy builders allow users to construct trading rules without writing code, typically using a drag-and-drop interface to combine technical indicators, price conditions, and logical operators. The resulting rules are then executed automatically. The appeal is that they democratise rule-based trading for users without programming skills. The important caveats are: the quality of the output depends entirely on the quality of the rules the user defines; backtesting within these tools is subject to all the limitations described earlier; and the strategies available are limited by the indicators and conditions the platform has built in. Reasonable questions to ask include what indicators are available, how backtesting is conducted, what cost assumptions are used, and whether the platform provides any independent validation of strategy ideas.
Copy Trading
Copy trading platforms allow users to replicate the trades of other traders — typically those who have performed well over a recent period. The underlying logic is that past performance by a specific trader may predict future performance. This assumption is far more problematic than it might appear. Short track records are subject to luck; strategy changes by the copied trader are not always transparent; and when many users copy the same trader, the market impact of that trader's activity changes, potentially degrading performance. Copy trading also exposes users to the full risk profile of the trader being copied, which may not be adequately disclosed. Before copying any trader, understanding their risk management approach, maximum drawdown history, and leverage usage is more relevant than their recent returns.
AI Signal Services
AI signal services provide trading signals — recommendations to buy or sell specific instruments — generated by automated models. These range from simple technical rule outputs to more sophisticated ML-based systems. The key questions are: how is the signal generated, what is the historical accuracy of past signals, how are those accuracy figures calculated and reported, are they based on live or backtested performance, and what are the risk parameters associated with acting on the signals? Signal services that prominently display their wins and obscure their losses, or that provide only backtested performance without live track records, should be evaluated with particular care.
Automated Execution with User-Defined Rules
Some platforms allow users to set up automated execution based on their own rules — for example, automatically closing a position if a price target or stop-loss level is hit, or executing a recurring investment on a defined schedule. This category is arguably the most straightforward: the logic is user-defined and transparent, the execution is mechanical, and the user retains control over the underlying strategy. The risks are that poorly specified rules can execute in ways users did not intend, that automated execution removes the opportunity for discretionary intervention when circumstances change, and that technical failures can affect execution at critical moments.
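The stop-loss and price-target rule described above is simple enough to sketch directly; the percentage levels are illustrative assumptions, not recommendations.

```python
# Sketch of a user-defined exit rule for a long position: close at an
# assumed 5% stop-loss or 10% take-profit level, otherwise hold.

def exit_decision(entry_price, current_price, stop_pct=0.05, target_pct=0.10):
    """Return 'stop', 'target', or 'hold' for a long position."""
    if current_price <= entry_price * (1 - stop_pct):
        return "stop"
    if current_price >= entry_price * (1 + target_pct):
        return "target"
    return "hold"

assert exit_decision(100.0, 94.0) == "stop"
assert exit_decision(100.0, 111.0) == "target"
assert exit_decision(100.0, 102.0) == "hold"
```

Even a rule this simple illustrates the specification risk mentioned above: a stop expressed as a fixed percentage behaves very differently in a gap move, where the first available price may be well below the stop level.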
Key point: When evaluating any retail algorithmic tool, the quality of disclosure about methodology and risk is as informative as the tool itself. Our How to Compare Platforms guide provides a structured framework for assessing platform quality across these dimensions.
Evaluating Algorithmic Claims: A Critical Framework
When a platform claims to offer algorithmic or AI trading capabilities, the following questions provide a structured basis for evaluation. Not every question will be answerable from publicly available information, but the willingness and ability of a platform to address them — or its evasiveness — is itself informative.
- What exactly does the algorithm do? Is it an execution algorithm (managing how orders are placed), a strategy algorithm (deciding what to trade), or a signal generator (producing recommendations that a human or automated system acts on)? These are fundamentally different things, and the answer shapes every subsequent question. Platforms that describe their algorithmic capabilities in vague terms without making this distinction should be pressed for clarity.
- What are the defined risk parameters? Any algorithmic system that takes positions should have defined risk limits: maximum position size, maximum drawdown before the strategy is suspended, stop-loss logic, leverage constraints. Understanding these parameters matters because they determine the worst-case scenario if the algorithm underperforms. Algorithmic systems without clearly defined risk parameters are operating without adequate safeguards.
- How is performance measured and reported? Performance figures should specify whether they are based on live or backtested trading, the time period covered, the costs assumed, and whether they are net of fees. Gross backtested performance over a favourable period, without cost adjustment or out-of-sample validation, is the least meaningful form of performance data — yet it is the most commonly presented.
- Is out-of-sample testing disclosed? Has the strategy been tested on data that was not used during its development? Out-of-sample testing is the minimum standard for meaningful strategy validation. If the platform cannot confirm that out-of-sample testing was conducted — or does not know what the term means — that is a significant concern.
- How does it handle drawdowns? Every strategy experiences periods of loss. Understanding how the algorithm responds to drawdowns — whether it continues trading, reduces size, suspends activity, or something else — reveals how its risk management is designed. A strategy that lacks a coherent drawdown response can compound losses during adverse conditions.
- What happens in adverse market conditions? How has the algorithm performed during periods of high volatility, market stress, or unusual events? If the algorithm has no live history through such conditions, what theoretical or simulated analysis has been conducted? The answer to this question often reveals the depth of the platform's own understanding of its product.
- How are algorithm failures handled? What happens if the algorithm places an erroneous order, loses connectivity, or behaves unexpectedly? Are there automatic circuit breakers? Can users manually intervene and override? Is there 24-hour technical support? The absence of clear answers to these operational questions is a risk factor that often receives less attention than performance metrics.
Note: Platforms with genuinely robust algorithmic capabilities should be able to answer most of these questions specifically and credibly. Vague, deflecting, or marketing-language responses to technical questions about methodology and risk management are a meaningful indicator of the depth — or shallowness — of what is being offered. Our Warning Signs guide covers these and related red flags in more detail.
Key Terms
- Algorithm
- A set of defined rules or instructions that a computer executes to perform a task. In trading, algorithms automate decision-making and order placement. Not all algorithms involve learning or artificial intelligence.
- Backtesting
- Applying a trading strategy to historical data to assess how it would have performed. A necessary development tool that is subject to significant limitations, including overfitting, look-ahead bias, survivorship bias, and unrealistic cost assumptions.
- Overfitting
- When a model or strategy has been excessively tuned to its historical training data, capturing noise rather than genuine pattern. Overfitted strategies appear impressive in backtests but typically underperform on new, unseen data.
- VWAP (Volume-Weighted Average Price)
- A benchmark price calculated by dividing total traded value by total traded volume over a period. VWAP execution algorithms aim to match or beat this price, spreading a large order across the trading day in proportion to expected volume.
- TWAP (Time-Weighted Average Price)
- An execution approach that splits an order evenly across a defined time window, regardless of volume patterns. Simpler than VWAP, it is used when minimising market impact through even distribution is the primary objective.
- Drawdown
- The peak-to-trough decline in the value of a portfolio or strategy over a specified period. Maximum drawdown is a key risk metric for algorithmic strategies, measuring the worst loss experienced from any high point to any subsequent low point.
- Slippage
- The difference between the expected execution price of an order and the price at which it is actually filled. Slippage arises from market impact, delays in execution, and the discrete nature of order books. It is a real cost that is often understated in backtests.
- Model Drift
- The gradual degradation in model performance as market conditions change and the relationships the model was trained on cease to hold. Monitoring for drift and updating or replacing models accordingly is an ongoing operational requirement for any algorithmic system.
- Out-of-Sample Testing
- Validation of a strategy on data that was not used during its development. Out-of-sample performance is considerably more meaningful than in-sample performance, as it tests whether a strategy's rules generalise beyond the period on which they were optimised.
- Curve Fitting
- A form of overfitting in which a strategy's parameters are optimised so precisely to historical data that the resulting rules describe historical noise rather than genuine market dynamics. Curve-fitted strategies typically fail when applied to new data.
- Mean Reversion
- The tendency of prices, spreads, or other financial variables to return towards a historical average after deviating from it. Mean reversion strategies exploit this tendency by taking positions that benefit from the expected return to the mean.
- Statistical Arbitrage
- A quantitative strategy that exploits historical statistical relationships between related securities, betting that deviations from those relationships will revert to the mean. Relies on careful modelling of the relationship, robust cost estimation, and systematic risk management.
Educational content only. This guide is provided for informational and educational purposes and does not constitute financial advice, investment advice, or a recommendation to use any financial product. Trading and investing involve significant risk of loss. Read our full Risk Disclaimer.