
TimescaleDB Continuous Aggregates: 3,600x Faster Than PostgreSQL + Go Aggregation

Hoang Dang Tan Phat (Kane)

Jan 24, 2026

Our crypto price API was taking 10+ seconds to render weekly charts. After migrating from PostgreSQL with Go-side aggregation to TimescaleDB continuous aggregates, we achieved consistent 3ms response times regardless of the time range. Here’s the full benchmark and migration guide.

Test Environment

All benchmarks run on:

  • CPU: AMD Ryzen 7 8845HS (8 cores / 16 threads @ 2.26GHz base)
  • RAM: 32GB DDR5
  • Storage: 512GB NVMe SSD
  • OS: Ubuntu 24.04.3 LTS
  • Database: PostgreSQL 17 / TimescaleDB (Docker containers)
  • Dataset: 24.5 million price history records (5+ years of minute-level OHLCV data)

The Problem

Traditional approach for OHLCV candlestick APIs:

  1. Store minute-level price data in PostgreSQL
  2. Fetch raw data points on each API request
  3. Aggregate candles (15m, 1h, 4h, 1d, 1w) in Go application code

This works fine for small datasets. With 24.5M rows across 8 cryptocurrency symbols, weekly chart requests require scanning millions of rows per request.
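A back-of-the-envelope sketch makes the scan cost concrete. The day counts below are illustrative, not the service's actual `FetchDays` config values:

```go
package main

import "fmt"

// rowsScanned estimates how many 1-minute rows a single API request
// must fetch before aggregation, given how many days of history the
// chart needs (one row per minute).
func rowsScanned(fetchDays int) int {
    return fetchDays * 24 * 60
}

func main() {
    for _, c := range []struct {
        interval string
        days     int
    }{
        {"15m", 7},
        {"1d", 90},
        {"1w", 365},
    } {
        fmt.Printf("%s chart: %d rows per symbol per request\n",
            c.interval, rowsScanned(c.days))
    }
}
```

At minute granularity, a year-long window means over half a million rows fetched and aggregated on every request, which is why the weekly chart dominates the benchmark.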

Benchmark Results

PostgreSQL + Go Aggregation

=== Symbol: BTC ===
15m: avg=0.0304s min=0.0255s max=0.0456s
1h:  avg=0.0626s min=0.0600s max=0.0644s
4h:  avg=0.1939s min=0.1851s max=0.2031s
1d:  avg=2.0950s min=2.0318s max=2.1435s
1w:  avg=10.5139s min=10.3480s max=10.7378s

Response times grow steeply with the time range, roughly in proportion to the number of rows scanned. Weekly charts are unusable at 10+ seconds.

TimescaleDB Continuous Aggregates

=== Symbol: BTC ===
15m: avg=0.0029s min=0.0026s max=0.0032s
1h:  avg=0.0032s min=0.0025s max=0.0042s
4h:  avg=0.0037s min=0.0034s max=0.0042s
1d:  avg=0.0035s min=0.0029s max=0.0041s
1w:  avg=0.0029s min=0.0023s max=0.0039s

Consistent ~3ms response times regardless of interval. This is the key insight: pre-computed aggregates eliminate the relationship between data size and query time.

Performance Comparison

Interval | PostgreSQL + Go | TimescaleDB | Speedup
---------|-----------------|-------------|--------
15m      | 30.4ms          | 2.9ms       | 10x
1h       | 62.6ms          | 3.2ms       | 20x
4h       | 193.9ms         | 3.7ms       | 52x
1d       | 2,095ms         | 3.5ms       | 599x
1w       | 10,514ms        | 2.9ms       | 3,626x

The longer the time range, the bigger the win. Weekly aggregates show a 3,626x improvement.
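The speedup column is simply the ratio of the two average latencies, rounded to the nearest whole multiple; recomputing it from the benchmark numbers reproduces the table:

```go
package main

import (
    "fmt"
    "math"
)

// speedup rounds the ratio of two average latencies to the nearest
// whole multiple, matching the comparison table above.
func speedup(pgMs, tsMs float64) int {
    return int(math.Round(pgMs / tsMs))
}

func main() {
    rows := []struct {
        interval   string
        pgMs, tsMs float64
    }{
        {"15m", 30.4, 2.9},
        {"1h", 62.6, 3.2},
        {"4h", 193.9, 3.7},
        {"1d", 2095, 3.5},
        {"1w", 10514, 2.9},
    }
    for _, r := range rows {
        fmt.Printf("%s: %dx\n", r.interval, speedup(r.pgMs, r.tsMs))
    }
    // prints 10x, 20x, 52x, 599x, 3626x
}
```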

Why Continuous Aggregates Win

1. Pre-computed Results

Aggregations are computed incrementally as data arrives, not on every query:

CREATE MATERIALIZED VIEW price_ohlc_1h
WITH (timescaledb.continuous) AS
SELECT
    time_bucket('1 hour', timestamp) AS period,
    symbol,
    FIRST(open_price, timestamp) AS open_price,
    MAX(high_price) AS high_price,
    MIN(low_price) AS low_price,
    LAST(close_price, timestamp) AS close_price,
    SUM(volume) AS volume
FROM price_histories
GROUP BY time_bucket('1 hour', timestamp), symbol;

2. Automatic Refresh Policies

No application code needed to keep aggregates fresh:

SELECT add_continuous_aggregate_policy('price_ohlc_1h',
    start_offset => INTERVAL '3 days',
    end_offset => INTERVAL '1 hour',
    schedule_interval => INTERVAL '30 minutes'
);

This policy refreshes data from 3 days ago up to 1 hour ago, every 30 minutes. Recent data stays current automatically.

3. Hierarchical Aggregates (TimescaleDB 2.7+)

Build larger intervals from smaller ones for efficiency:

-- 4h aggregate built from 1h aggregate
CREATE MATERIALIZED VIEW price_ohlc_4h
WITH (timescaledb.continuous) AS
SELECT
    time_bucket('4 hours', period) AS period,
    symbol,
    FIRST(open_price, period) AS open_price,
    MAX(high_price) AS high_price,
    MIN(low_price) AS low_price,
    LAST(close_price, period) AS close_price,
    SUM(volume) AS volume
FROM price_ohlc_1h
GROUP BY time_bucket('4 hours', period), symbol;

4. Simplified Application Code

Before (Go aggregation - 80+ lines of complex logic):

func (s *tokenService) GetTokenHistory(
    ctx context.Context,
    symbol string,
    interval dto.Interval,
) (*dto.TokenHistoryResponse, error) {
    config := dto.GetIntervalConfig(interval)
    // Fetch enough 1m candles: FetchDays * 24 * 60 minutes
    limit := config.FetchDays * 24 * 60
    history, err := s.priceHistoryRepo.GetHistory(ctx, symbol, limit)
    if err != nil {
        return nil, err
    }

    // Convert to price points
    points := make([]dto.PricePoint, len(history))
    for i, h := range history {
        points[i] = dto.PricePoint{
            OpenPrice:  h.OpenPrice,
            ClosePrice: h.ClosePrice,
            HighPrice:  h.HighPrice,
            LowPrice:   h.LowPrice,
            Volume:     h.Volume,
            Timestamp:  h.Timestamp,
        }
    }

    // Aggregate to requested interval
    points = aggregateCandles(points, config.Minutes)

    // Limit to MaxCandles
    if len(points) > config.MaxCandles {
        points = points[:config.MaxCandles]
    }

    return &dto.TokenHistoryResponse{Symbol: symbol, History: points}, nil
}

// aggregateCandles merges 1m candles into larger intervals
func aggregateCandles(candles []dto.PricePoint, intervalMinutes int) []dto.PricePoint {
    if len(candles) == 0 || intervalMinutes <= 1 {
        return candles
    }

    interval := time.Duration(intervalMinutes) * time.Minute
    bucketMap := make(map[int64]*dto.PricePoint)
    bucketOrder := []int64{}

    for _, c := range candles {
        bucketTime := c.Timestamp.Truncate(interval).Unix()

        if existing, ok := bucketMap[bucketTime]; ok {
            if c.HighPrice > existing.HighPrice {
                existing.HighPrice = c.HighPrice
            }
            if c.LowPrice < existing.LowPrice {
                existing.LowPrice = c.LowPrice
            }
            // Input is ordered newest-first: the first candle seen for a
            // bucket supplies the close, and each older candle overwrites
            // the open, so the oldest candle's open wins.
            existing.OpenPrice = c.OpenPrice
            existing.Volume += c.Volume
        } else {
            bucketMap[bucketTime] = &dto.PricePoint{
                OpenPrice:  c.OpenPrice,
                ClosePrice: c.ClosePrice,
                HighPrice:  c.HighPrice,
                LowPrice:   c.LowPrice,
                Volume:     c.Volume,
                Timestamp:  time.Unix(bucketTime, 0),
            }
            bucketOrder = append(bucketOrder, bucketTime)
        }
    }

    result := make([]dto.PricePoint, len(bucketOrder))
    for i, ts := range bucketOrder {
        result[i] = *bucketMap[ts]
    }
    return result
}

After (TimescaleDB - 15 lines, zero aggregation logic):

func (s *tokenService) GetTokenHistory(
    ctx context.Context,
    symbol string,
    interval dto.Interval,
) (*dto.TokenHistoryResponse, error) {
    config := dto.GetIntervalConfig(interval)

    // Fetch pre-aggregated data directly from TimescaleDB continuous aggregates
    points, err := s.priceHistoryRepo.GetOHLCV(ctx, symbol, interval, config.MaxCandles)
    if err != nil {
        return nil, err
    }

    return &dto.TokenHistoryResponse{Symbol: symbol, History: points}, nil
}

All aggregation logic moves to the database. The Go service just reads pre-computed rows.

Implementation Guide

Step 1: Enable TimescaleDB

Update your Docker image:

# docker-compose.yml
services:
  postgres:
    image: timescale/timescaledb:latest-pg17
    # ... rest of config

Enable the extension:

CREATE EXTENSION IF NOT EXISTS timescaledb;

Step 2: Create Hypertable

Convert your time-series table to a hypertable:

-- For new tables
SELECT create_hypertable('price_histories', 'timestamp',
    chunk_time_interval => INTERVAL '7 days');

-- For existing tables with data
SELECT create_hypertable('price_histories', 'timestamp',
    chunk_time_interval => INTERVAL '7 days',
    migrate_data => true);

Step 3: Create Continuous Aggregates

Create a view for each interval:

-- 15 minute candles
CREATE MATERIALIZED VIEW price_ohlc_15m
WITH (timescaledb.continuous) AS
SELECT
    time_bucket('15 minutes', timestamp) AS period,
    symbol,
    FIRST(open_price, timestamp) AS open_price,
    MAX(high_price) AS high_price,
    MIN(low_price) AS low_price,
    LAST(close_price, timestamp) AS close_price,
    SUM(volume) AS volume
FROM price_histories
GROUP BY time_bucket('15 minutes', timestamp), symbol;

-- 1 hour candles
CREATE MATERIALIZED VIEW price_ohlc_1h
WITH (timescaledb.continuous) AS
SELECT
    time_bucket('1 hour', timestamp) AS period,
    symbol,
    FIRST(open_price, timestamp) AS open_price,
    MAX(high_price) AS high_price,
    MIN(low_price) AS low_price,
    LAST(close_price, timestamp) AS close_price,
    SUM(volume) AS volume
FROM price_histories
GROUP BY time_bucket('1 hour', timestamp), symbol;

-- 1 day candles
CREATE MATERIALIZED VIEW price_ohlc_1d
WITH (timescaledb.continuous) AS
SELECT
    time_bucket('1 day', timestamp) AS period,
    symbol,
    FIRST(open_price, timestamp) AS open_price,
    MAX(high_price) AS high_price,
    MIN(low_price) AS low_price,
    LAST(close_price, timestamp) AS close_price,
    SUM(volume) AS volume
FROM price_histories
GROUP BY time_bucket('1 day', timestamp), symbol;

-- 1 week candles
CREATE MATERIALIZED VIEW price_ohlc_1w
WITH (timescaledb.continuous) AS
SELECT
    time_bucket('1 week', timestamp) AS period,
    symbol,
    FIRST(open_price, timestamp) AS open_price,
    MAX(high_price) AS high_price,
    MIN(low_price) AS low_price,
    LAST(close_price, timestamp) AS close_price,
    SUM(volume) AS volume
FROM price_histories
GROUP BY time_bucket('1 week', timestamp), symbol;

Step 4: Add Refresh Policies

-- 15m: refresh every minute, covering last day
SELECT add_continuous_aggregate_policy('price_ohlc_15m',
    start_offset => INTERVAL '1 day',
    end_offset => INTERVAL '1 minute',
    schedule_interval => INTERVAL '1 minute');

-- 1h: refresh every 30 minutes, covering last 3 days
SELECT add_continuous_aggregate_policy('price_ohlc_1h',
    start_offset => INTERVAL '3 days',
    end_offset => INTERVAL '1 hour',
    schedule_interval => INTERVAL '30 minutes');

-- 4h: refresh every hour, covering last week
SELECT add_continuous_aggregate_policy('price_ohlc_4h',
    start_offset => INTERVAL '7 days',
    end_offset => INTERVAL '4 hours',
    schedule_interval => INTERVAL '1 hour');

-- 1d: refresh every 6 hours, covering last month
SELECT add_continuous_aggregate_policy('price_ohlc_1d',
    start_offset => INTERVAL '30 days',
    end_offset => INTERVAL '1 day',
    schedule_interval => INTERVAL '6 hours');

-- 1w: refresh daily, covering last 90 days
SELECT add_continuous_aggregate_policy('price_ohlc_1w',
    start_offset => INTERVAL '90 days',
    end_offset => INTERVAL '1 week',
    schedule_interval => INTERVAL '1 day');

Step 5: Backfill Historical Data

For existing data, manually refresh all continuous aggregates:

CALL refresh_continuous_aggregate('price_ohlc_15m', NULL, NULL);
CALL refresh_continuous_aggregate('price_ohlc_1h', NULL, NULL);
-- 4h reads from price_ohlc_1h, so refresh it after the 1h aggregate
CALL refresh_continuous_aggregate('price_ohlc_4h', NULL, NULL);
CALL refresh_continuous_aggregate('price_ohlc_1d', NULL, NULL);
CALL refresh_continuous_aggregate('price_ohlc_1w', NULL, NULL);

Step 6: Query the Aggregates

-- Get last 100 hourly candles for BTC
SELECT period, open_price, high_price, low_price, close_price, volume
FROM price_ohlc_1h
WHERE symbol = 'BTC'
ORDER BY period DESC
LIMIT 100;

Step 7: Update Go Repository

Create a repository method that routes to the correct aggregate view:

// GetOHLCV retrieves pre-aggregated OHLCV data from TimescaleDB continuous aggregates
func (r *priceHistoryRepository) GetOHLCV(
    ctx context.Context,
    symbol string,
    interval dto.Interval,
    limit int,
) ([]dto.PricePoint, error) {
    // Select the appropriate continuous aggregate based on interval
    switch interval {
    case dto.Interval15m:
        rows, err := r.queries.GetOHLCV15m(ctx, gen.GetOHLCV15mParams{
            Symbol: symbol,
            Limit:  int32(limit),
        })
        if err != nil {
            return nil, err
        }
        return convertOHLCVRows(rows), nil

    case dto.Interval1h:
        rows, err := r.queries.GetOHLCV1h(ctx, gen.GetOHLCV1hParams{
            Symbol: symbol,
            Limit:  int32(limit),
        })
        if err != nil {
            return nil, err
        }
        return convertOHLCVRows(rows), nil

    case dto.Interval1d:
        rows, err := r.queries.GetOHLCV1d(ctx, gen.GetOHLCV1dParams{
            Symbol: symbol,
            Limit:  int32(limit),
        })
        if err != nil {
            return nil, err
        }
        return convertOHLCVRows(rows), nil

    case dto.Interval1w:
        rows, err := r.queries.GetOHLCV1w(ctx, gen.GetOHLCV1wParams{
            Symbol: symbol,
            Limit:  int32(limit),
        })
        if err != nil {
            return nil, err
        }
        return convertOHLCVRows(rows), nil

    default:
        // Default to 15m
        rows, err := r.queries.GetOHLCV15m(ctx, gen.GetOHLCV15mParams{
            Symbol: symbol,
            Limit:  int32(limit),
        })
        if err != nil {
            return nil, err
        }
        return convertOHLCVRows(rows), nil
    }
}
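If the per-interval repetition bothers you, the switch can be collapsed into a whitelist lookup. A sketch, not the article's code: it builds the query string directly rather than going through sqlc's generated functions, so you trade away their compile-time checking:

```go
package main

import "fmt"

// viewForInterval maps an interval to its continuous-aggregate view.
// The map doubles as a whitelist: only these exact names are ever
// interpolated into SQL, so user input cannot inject identifiers.
var viewForInterval = map[string]string{
    "15m": "price_ohlc_15m",
    "1h":  "price_ohlc_1h",
    "4h":  "price_ohlc_4h",
    "1d":  "price_ohlc_1d",
    "1w":  "price_ohlc_1w",
}

// ohlcvQuery builds the query for an interval, falling back to 15m
// for unknown intervals (mirroring the switch's default case).
// Symbol and limit stay as bind parameters ($1, $2).
func ohlcvQuery(interval string) string {
    view, ok := viewForInterval[interval]
    if !ok {
        view = viewForInterval["15m"]
    }
    return fmt.Sprintf(
        `SELECT period, open_price, high_price, low_price, close_price, volume
FROM %s WHERE symbol = $1 ORDER BY period DESC LIMIT $2`, view)
}

func main() {
    fmt.Println(ohlcvQuery("1w"))
}
```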

SQL queries (using sqlc):

-- name: GetOHLCV15m :many
SELECT period, open_price, high_price, low_price, close_price, volume
FROM price_ohlc_15m
WHERE symbol = $1
ORDER BY period DESC
LIMIT $2;

-- name: GetOHLCV1h :many
SELECT period, open_price, high_price, low_price, close_price, volume
FROM price_ohlc_1h
WHERE symbol = $1
ORDER BY period DESC
LIMIT $2;

-- name: GetOHLCV1d :many
SELECT period, open_price, high_price, low_price, close_price, volume
FROM price_ohlc_1d
WHERE symbol = $1
ORDER BY period DESC
LIMIT $2;

-- name: GetOHLCV1w :many
SELECT period, open_price, high_price, low_price, close_price, volume
FROM price_ohlc_1w
WHERE symbol = $1
ORDER BY period DESC
LIMIT $2;
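The repository calls convertOHLCVRows throughout but the article never shows it. One plausible shape, with OHLCVRow standing in for the sqlc-generated row types (gen.GetOHLCV15mRow and friends; the real field names depend on your schema and sqlc config):

```go
package main

import (
    "fmt"
    "time"
)

// OHLCVRow is a stand-in for the sqlc-generated row types.
type OHLCVRow struct {
    Period     time.Time
    OpenPrice  float64
    HighPrice  float64
    LowPrice   float64
    ClosePrice float64
    Volume     float64
}

// PricePoint mirrors dto.PricePoint from the service code above.
type PricePoint struct {
    OpenPrice, ClosePrice, HighPrice, LowPrice, Volume float64
    Timestamp                                          time.Time
}

// convertOHLCVRows maps aggregate rows to API price points; the
// bucket start (period) becomes the candle timestamp.
func convertOHLCVRows(rows []OHLCVRow) []PricePoint {
    points := make([]PricePoint, len(rows))
    for i, r := range rows {
        points[i] = PricePoint{
            OpenPrice:  r.OpenPrice,
            ClosePrice: r.ClosePrice,
            HighPrice:  r.HighPrice,
            LowPrice:   r.LowPrice,
            Volume:     r.Volume,
            Timestamp:  r.Period,
        }
    }
    return points
}

func main() {
    rows := []OHLCVRow{{Period: time.Unix(0, 0).UTC(), OpenPrice: 1, HighPrice: 2, LowPrice: 0.5, ClosePrice: 1.5, Volume: 10}}
    fmt.Println(convertOHLCVRows(rows)[0].ClosePrice)
}
```

In practice you would need one such conversion per generated row type (or a small generic helper), since sqlc emits a distinct struct per query.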

Storage Statistics

Metric            | Value
------------------|--------------------------
Total rows        | 24,551,171
Unique symbols    | 8
Time range        | 2019-12-31 to 2026-01-24
Hypertable chunks | 318
Total size        | 5.07 GB

When to Use This Pattern

Good fit:

  • Financial candlestick data (OHLCV)
  • IoT sensor aggregations (min/max/avg per hour/day)
  • Metrics dashboards with time-based rollups
  • Any time-series with frequent range queries

Not needed:

  • Small datasets (<100K rows)
  • Point queries (single timestamp lookups)
  • Data that rarely gets aggregated

Migration Checklist

  1. Update Docker image to timescale/timescaledb:latest-pg17
  2. Create migration to enable extension + convert to hypertable
  3. Create migration for continuous aggregates + refresh policies
  4. Update repository layer to query aggregate views instead of base table
  5. Remove application-side aggregation code
  6. Run CALL refresh_continuous_aggregate() for historical backfill
  7. Monitor refresh policy execution

Conclusion

For OHLCV candlestick data at scale, TimescaleDB continuous aggregates provide:

  • 10x to 3,600x faster queries (scales with time range)
  • Consistent ~3ms response times regardless of data volume
  • Simpler application code (aggregation pushed to database)
  • Automatic maintenance via refresh policies

The migration took about 2 hours including testing. For any time-series workload with aggregation requirements, TimescaleDB is worth the switch.

Hoang Dang Tan Phat (Kane)

Full-stack developer with 8+ years experience. Building scalable systems with Go, TypeScript, and React.