162 strategies, by the numbers
What's in the PineForge parity corpus: how strategies break down by category, asset, and complexity. Read it like a museum guide to the gallery.
When we say "158 of 162 strategies pass strict parity," the claim only means something if you know what the 162 are. This is the guided tour.
Why a corpus matters
Without a fixed reference set, "parity" is a feeling. You run a few strategies,
they look close enough, you ship. Then a user tries something with OCA exit groups
or request.security in a loop and nothing matches. The bugs were always there;
you just hadn't tested for them.
A held-out corpus changes that. Every engine release runs the same 162 strategies on the same canonical OHLCV. If the count of passing strategies goes up, something improved. If it goes down, a regression landed. The corpus is a regression harness disguised as a backtest gallery.
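The release-gate loop can be pictured as a small sweep over the corpus. A minimal sketch, assuming a per-strategy directory layout with a committed reference CSV; the `run_strategy` callable and the file layout are hypothetical, not the actual harness:

```python
import csv
from pathlib import Path

def load_trades(path):
    """Read a committed trade list (rows of entry bar, exit bar, PnL, ...)."""
    with open(path, newline="") as f:
        return [tuple(row) for row in csv.reader(f)]

def sweep(corpus_dir, run_strategy):
    """Run every corpus strategy and diff it against its reference CSV.

    `run_strategy(name)` is assumed to return rows in the same shape as the
    committed reference; any mismatch marks that strategy as failing.
    """
    passed, failed = [], []
    for ref in sorted(Path(corpus_dir).glob("*/engine_trades.csv")):
        name = ref.parent.name
        engine_rows = run_strategy(name)      # engine output for this strategy
        if engine_rows == load_trades(ref):   # strict parity: exact trade-list match
            passed.append(name)
        else:
            failed.append(name)
    return passed, failed
```

A release would then be gated on the pass count not decreasing relative to the last committed baseline.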
The 162 strategies also function as a public attestation. We've committed the engine outputs — trade counts, total returns, Sharpe ratios — to the gallery so anyone can see what the engine produces. That's different from a press release claiming parity.
The three categories
The corpus splits into three folders: basic, community, and validation.
They aren't ranked by quality; they describe where the strategies came from
and what they test.
basic — 9 strategies
The basic category holds canonical, well-known strategy types: MA cross,
Supertrend, Stochastic Slow, Parabolic SAR, Keltner Channel, Inside Bar,
Donchian Breakout, and a volatility expansion method. These are the textbook
examples you'd find in any introductory algo-trading course.
They're in the corpus because textbook examples are where compilers break in
embarrassing ways. If an MA cross doesn't produce the right trade list, everything
else is suspect. basic is the sanity layer.
The 9 basic strategies in the corpus range from 14 trades (the greedy
momentum strategy, very selective) to 7,580 trades (the volatility expansion
strategy, which fires frequently on 15-minute bars). That's a 500× spread within
the "simple" category — which says more about strategy selectivity than strategy
complexity.
community — 11 strategies
The community category holds Pine scripts contributed by the algo-trading
community. Real-world authorship means real-world variation: different coding
styles, different use of built-ins, different edge cases in how position sizing
and exit logic are wired.
The 11 community strategies include a 4-EMA RSI filter, a Break-of-Structure curve detector, a liquidity sweep strategy, a trend-following system called MarketShift, and others. Trade counts range from 71 (VCP — a very selective pattern scanner) to 2,541 (IES, which fires on almost every bar). Return spread is wide: MarketShift returned +$3,231 over the backtest window; IES returned −$162,977. Corpus inclusion says nothing about whether a strategy is profitable — only that it compiles and produces trades.
Community scripts are the hardest category to maintain parity on. Authorship diversity means the scripts exercise unusual combinations of Pine features that validation suites don't always anticipate. When we find a new parity gap, it usually lives here first.
validation — 142 strategies
The validation category is the largest by far: 142 of the 162 corpus entries
are here. These are synthetic strategies designed to exercise specific Pine
semantics — not to be profitable, but to prove that a particular language feature
maps correctly to C++ output.
A few examples of what individual validation strategies target:
- `49-partial-exit-qty-percent` — tests `strategy.exit(qty_percent=...)` for partial position unwinding. If the engine over-emits exit fills, this is the strategy that catches it.
- `request.security` variants — verify that multi-timeframe lookups resolve at the correct bar, with and without `lookahead=barmerge.lookahead_off`.
- OCA (One-Cancels-All) exit groups — verify that when one exit fires, the competing exits are correctly cancelled.
- Trailing stop variants — test that trailing stop ratchet logic tracks the high-water mark correctly across gaps and limit fills.
- UDT method tests — verify that Pine v6 user-defined type methods compile to the right C++ struct operations.
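The trailing-stop case is concrete enough to sketch. For a long position, the stop tracks the running high-water mark and only ever ratchets upward, including across gap opens. A minimal model under those assumptions (the function name and fixed-offset form are illustrative, not the engine's implementation):

```python
def trail_stop_levels(highs, offset):
    """Long-side trailing stop: running high minus a fixed offset,
    ratcheting so the stop level never moves down."""
    stops = []
    high_water = float("-inf")
    stop = float("-inf")
    for h in highs:
        high_water = max(high_water, h)        # high-water mark across bars, incl. gaps
        stop = max(stop, high_water - offset)  # ratchet: the stop never loosens
        stops.append(stop)
    return stops
```

A validation strategy built on this property asserts that the stop series is non-decreasing bar over bar.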
The "if this strategy passes, feature X is covered" logic is explicit in how we
add to this folder. When a community script exposes a new edge case, we usually
write a minimal synthetic reproducer and add it to validation.
How the numbers actually look
All 162 strategies in the current corpus run on the same asset and timeframe:
ETHUSDT at 15-minute bars. This is a deliberate constraint — using one canonical
OHLCV makes cross-strategy comparison meaningful and removes asset-specific factors
from the parity analysis.
Trade count distribution:
| Range | Strategies |
|---|---|
| Under 100 | 7 |
| 100 – 499 | 41 |
| 500 – 999 | 63 |
| 1,000 – 4,999 | 44 |
| 5,000 or more | 7 |
The median strategy in the corpus produces 757 trades over the backtest window.
The lowest is 14 (the greedy strategy, highly selective). The highest is 11,218.
This spread matters for parity testing: on a strategy with 14 trades, a single
misplaced exit is a 7% error rate; the same bug on a 5,000-trade strategy shows
up in aggregate metrics even if most individual trades match.
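The scale argument is just arithmetic; a quick sketch of why one wrong trade is glaring at 14 trades and nearly invisible per-trade at 5,000:

```python
def mismatch_rate(wrong_trades, total_trades):
    """Fraction of trades that disagree with the reference list."""
    return wrong_trades / total_trades

# One misplaced exit in a 14-trade strategy vs a 5,000-trade strategy:
low = round(mismatch_rate(1, 14), 3)     # ~7% of all trades
high = mismatch_rate(1, 5000)            # 0.02%: only aggregates reveal it
```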
Sharpe distribution:
The Sharpe values in the corpus are lower than the range you'd see in a curated strategy library. Most corpus strategies were included for their Pine feature coverage, not their risk-adjusted returns. The median Sharpe across the 162 strategies is 0.023. Of the 162, 109 have a positive total return over the backtest window; 53 are negative.
We surface these numbers honestly in the gallery rather than filtering for "good" strategies. If you're browsing for inspiration, the return and Sharpe columns tell you what to look at. If you're reading for parity verification, the actual numbers are beside the point — what matters is that PineForge and TradingView produce the same numbers.
What every corpus strategy must do
Inclusion in the corpus requires four things:

- Compile in PineForge codegen. If it doesn't compile, it can't be tested. This is a higher bar than it sounds — some community Pine scripts use `import` statements for libraries that aren't yet in the supported subset.
- Produce at least one trade on the canonical OHLCV. A strategy that compiles but never fires an entry can't have its trade list compared to anything.
- Have a TradingView reference CSV. Every corpus strategy has a committed `engine_trades.csv` from the TradingView "List of Trades" export, window-clipped to the OHLCV span. This is the ground truth the engine output gets diffed against.
- Have its engine output committed alongside the Pine source. The gallery serves from these committed outputs. Nothing in the gallery is generated on the fly — it's the snapshot of what the engine produced on the day the strategy was added or last updated.
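The window-clipping step in the reference-CSV requirement amounts to a filter on the exported trade list. A minimal sketch, assuming each exported row carries an entry timestamp under a hypothetical `entry_time` key:

```python
def clip_to_window(trades, start, end):
    """Keep only trades whose entry falls inside the canonical OHLCV span.

    `trades` is a list of dicts with an 'entry_time' key (assumed column name);
    trades opened outside the span have nothing to be compared against
    and are dropped before diffing.
    """
    return [t for t in trades if start <= t["entry_time"] <= end]
```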
Why we publish aggregates, not source
The gallery publishes trade counts, returns, Sharpe ratios, and sparklines. It does
not publish the Pine source code or the raw engine_trades.csv files directly for
all strategies.
The reasons are straightforward. Community Pine scripts are written by their
original authors; we don't have blanket permission to redistribute source.
TradingView CSV exports are covered by their own terms of service. Our internal
LEGAL.md reflects this.
What we can publish — and do — is the summary statistics and the visual outputs: sparklines, parity tier badges, and the gallery metadata. These are derivative enough to be clearly ours and concrete enough to be verifiable: if you have a TradingView Premium account and the same Pine script, you can reproduce the same trade list and compare it to our committed output yourself.
What the gallery enables
The gallery serves three audiences:
Solo quants browsing for ideas. The 162 strategies span enough variation in entry logic, exit logic, and indicator type that browsing the gallery gives you a reasonable survey of "what compiles and runs on the engine today." Sort by return, filter by category, look at the sparkline.
Engineers verifying parity claims. If you want to know whether PineForge's claim of 158/162 strict parity is real, the gallery is where you start. Each card shows the parity tier and the committed trade count. The methodology for how parity tiers are assigned is described in the engineering post on cross-validation.
CI baseline for engine releases. We run the full 162-strategy corpus sweep before every release. The gallery snapshots represent the last committed baseline. If a release changes a corpus strategy's output, the diff is caught before shipping.
Where to go from here
- Browse the full gallery — all 162 strategies, sortable and filterable by category, trade count, and return.
- Try the codegen API — transpile a Pine strategy and run it on your own OHLCV with one tool call from Claude or Cursor.
- Get early access — the free tier includes 100 transpiles per month, enough to add your own strategies to a local corpus.