pineforge
Engine comparison · v0.2 · 50-strategy benchmark

PineForge vs PyneCore.
Reproducible, not rhetorical.

Every number on this page is generated by running bash benchmarks/run_all.sh in the open-source pineforge-engine repo, against the same 41,307-bar Binance ETH/USDT 15-minute feed. Reproduce it in ~3 minutes from a clean clone, with zero external API calls.

Side by side

What each engine actually gives you.

Capabilities compared across PineForge, TradingView, and PyneCore:
  • Byte-reproducible backtests
  • Native compiled runtime
  • 158/162 strict TV parity
  • Sell strategies as compiled binaries
  • Time-bound seller licenses
  • Machine-bound seller licenses
  • Open-source runtime you can audit
  • Run on your own data, your machine
  • Audit-grade reproducibility for compliance
  • Native live broker integrations

50-strategy match degree

How many of 50 strategies hit excellent tier against TradingView.

PineForge (C++ static lib)
49 / 50
Excellent 49 · Strong 1 · Moderate 0 · Weak 0

PyneCore (Python, PyneSys cloud-compiled)
46 / 50
Excellent 46 · Strong 1 · Moderate 2 · Weak 1

PineTS (TypeScript, LuxAlgo)
indicators only
Strategy backtester: not yet shipped · Per-bar indicators: 10/10 indicators match

Strategy execution is on the PineTS roadmap. We benchmark indicator precision against PineTS to triangulate floating-point divergences.

Tiers follow the canonical PineForge parity sweep: excellent = all four dimensions (count delta, entry p90, exit p90, P&L p90) within strict thresholds and ≥95% of trades matched; strong = within 5× strict; moderate / weak / minimal step down from there. Strategies that use TradingView’s trail_* exits get the production threshold profile (looser exit + P&L tolerances).
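
In code terms, the tier rule reads roughly like this. A minimal Python sketch: the dimension names follow the text above, but the threshold values and the step-down multipliers below strong are placeholders, not the suite's real profiles.

from dataclasses import dataclass

@dataclass
class ParityResult:
    count_delta: float  # relative trade-count difference vs TV
    entry_p90: float    # p90 entry-price error
    exit_p90: float     # p90 exit-price error
    pnl_p90: float      # p90 per-trade P&L error
    match_rate: float   # fraction of TV trades matched

# Hypothetical strict profile -- the real thresholds live in the
# benchmark suite (a looser "production" profile covers trail_* exits).
STRICT = {"count_delta": 0.02, "entry_p90": 1e-6,
          "exit_p90": 1e-6, "pnl_p90": 1e-4}

def tier(r: ParityResult, profile=STRICT) -> str:
    def within(mult: float) -> bool:
        return (r.count_delta <= mult * profile["count_delta"]
                and r.entry_p90 <= mult * profile["entry_p90"]
                and r.exit_p90 <= mult * profile["exit_p90"]
                and r.pnl_p90 <= mult * profile["pnl_p90"])
    if within(1) and r.match_rate >= 0.95:
        return "excellent"
    if within(5):
        return "strong"
    if within(25):         # step-down multipliers below "strong" are
        return "moderate"  # our guess; the sweep defines the real ones
    if within(125):
        return "weak"
    return "minimal"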

The 3-strategy delta

Three strategies draw all the daylight.

On 47 of the 50 reference strategies, PineForge and PyneCore both hit excellent. The 3-strategy gap is not random; every divergence is in the same category: bracket exits, trailing stops, or partial position closes. PyneCore’s broker emulator differs from TV here; PineForge mirrors TV trade-for-trade.

06-liquidity-sweep
bracket exit
PineForge: excellent (88 / 88) · PyneCore: moderate (91)
93 TV trades in window. PineForge matches 88 within strict tolerances. PyneCore generates 91 trades — a +3 count drift, plus exit-price drift on bracket-stopped exits.
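
For context on why bracket exits drift at all: when a single bar's range touches both the stop and the limit, an emulator has to pick an intrabar ordering. A minimal illustrative sketch of one conservative rule, in Python; this is a generic example, not a claim about either engine's actual resolution logic.

def resolve_long_bracket(bar_high, bar_low, stop, limit):
    """Which side of a long bracket fills on this bar?  If the bar
    touches both levels, this sketch assumes the stop fired first
    (the conservative choice); other emulators assume the opposite
    or replay intrabar ticks, which is enough to move both trade
    counts and exit prices."""
    hit_stop = bar_low <= stop     # stop sits below price for a long
    hit_limit = bar_high >= limit  # take-profit sits above
    if hit_stop:                   # stop wins ties under this assumption
        return ("stop", stop)
    if hit_limit:
        return ("limit", limit)
    return (None, None)
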
07-scalping-strategy
trailing stop (production thresholds)
PineForge: excellent (412 / 429) · PyneCore: moderate (412)
429 TV trades in window. PineForge: 412 matched, all four parity dimensions inside production thresholds. PyneCore: same matched count but exit-price p90 outside threshold — broker-emulator trail_offset arithmetic diverges from TV.
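
To make the trailing arithmetic concrete, here is a minimal per-bar sketch of long-side trail ratcheting in Python. It is simplified: offset is in price units rather than ticks, and intrabar fill ordering is ignored; activation and offset stand in for TV's trail_price / trail_offset inputs.

def update_trail_stop(stop, bar_high, activation, offset):
    """One bar of long-side trail ratcheting.

    stop       -- current stop level (None until the trail arms)
    bar_high   -- this bar's high
    activation -- price that arms the trail (trail_price analogue)
    offset     -- trailing distance, in price units here; TV's
                  trail_offset is specified in ticks
    """
    if bar_high >= activation:
        candidate = bar_high - offset
        stop = candidate if stop is None else max(stop, candidate)
    return stop  # ratchets up only; a bar trading at/below it exits

Divergence shows up in exactly the places this sketch glosses over: which price arms the trail, tick rounding of the offset, and whether one bar can both ratchet the stop and fill it.
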
49-partial-exit-qty-percent
partial close (qty_percent)
PineForge: excellent (683 / 725) · PyneCore: weak (2,671)
The clearest divergence in the corpus. 725 TV trades; PineForge matches 683 at strict parity. PyneCore generates 2,671 trades, 3.7× the correct count. Root cause: strategy.close(qty_percent=…) in PyneCore splits each entry into per-percentage sub-exits instead of a single partial close. An upstream issue is open as of this commit.
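
A minimal sketch of the difference, with illustrative names and a simplified position model (not PyneCore's internals): a qty_percent close should emit one exit for that fraction of the open position, not one sub-exit per percentage point.

def close_partial(position_qty, qty_percent):
    """Intended semantics: one exit order covering qty_percent of
    the currently open position (illustrative, simplified)."""
    return [("exit", position_qty * qty_percent / 100.0)]  # one trade

def close_partial_split(position_qty, qty_percent):
    """The divergent behavior described above, illustratively: the
    close is split into per-percentage sub-exits, multiplying the
    reported trade count."""
    step = position_qty / 100.0
    return [("exit", step) for _ in range(int(qty_percent))]

print(len(close_partial(100, 50)))        # 1 trade
print(len(close_partial_split(100, 50)))  # 50 trades from one close
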
Where each engine wins

We don’t hide our gaps. Neither should they.

CHOOSE PINEFORGE WHEN
  • You need byte-reproducible determinism (CI gates, audit trails, paid-parity claims to clients).
  • You need TV-faithful semantics on bracket exits, trailing stops, or partial closes. Three concrete strategies above are unambiguous on this.
  • You need native compiled speed for parameter sweeps (Optuna across thousands of parameter combinations on 50k-bar feeds; see the sketch after this list).
  • You want a hosted Studio UI later — Code · Backtest · Optimize · Compare · Reports tabs are coming Q4 2026.
  • You eventually want to sell compiled strategies to other traders. The encrypted-distribution + license-server design is in the public engine repo.
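
A minimal sketch of that kind of Optuna sweep. The pineforge.backtest() binding, its parameters, and the file paths are hypothetical placeholders; only the Optuna calls are the library's real API.

import optuna

import pineforge  # hypothetical binding -- placeholder name only

def objective(trial):
    # One compiled backtest per trial; parameter names are illustrative.
    fast = trial.suggest_int("fast_len", 5, 50)
    slow = trial.suggest_int("slow_len", 20, 200)
    result = pineforge.backtest(                     # hypothetical API
        strategy="07-scalping-strategy",
        params={"fast_len": fast, "slow_len": slow},
        feed="binance_ethusdt_15m.csv",              # illustrative path
    )
    return result.net_profit                         # maximize net P&L

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=2000)  # thousands of combinations
print(study.best_params)
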
CHOOSE PYNECORE WHEN
  • You need forward-testing or live broker execution today. PineForge ships those Q3-Q4 2026; PyneCore has them now.
  • You need a fully-Python strategy execution path (deeper integration with NumPy/Pandas backtesting tooling, Jupyter-native iteration).
  • You’re comfortable with the bracket/trail/partial-exit caveats (47 of the 50 strategies don’t exercise them).
  • The fully open-source ethos matters more than the closed transpiler tradeoff. PyneCore is open end-to-end; PineForge’s runtime is OSS but the codegen is closed.
  • You’re a heavy contributor and want a project where your PRs land directly in the strategy execution path.
Indicator precision

PineForge sits two orders of magnitude closer to TradingView than PyneCore.

Indicator drift vs TradingView (abs error, log scale; lower = closer):

indicator      PineForge   PyneCore
ema21          1.9e-10     1.9e-8
sma21          1.9e-10     1.9e-8
rsi14          9.7e-11     9.7e-9
atr14          2.8e-10     2.8e-8
macd_line      2.3e-10     2.3e-8
macd_signal    2.4e-10     2.4e-8
bb_basis       0           0
bb_upper       1.9e-10     1.9e-8
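
A minimal sketch of one way to compute a drift figure like those above, assuming the metric is the maximum per-bar absolute error against a TV-exported reference series; the in-tree sweep defines the actual metric.

import numpy as np

def drift(engine_series, tv_series):
    """Assumed metric: max per-bar absolute error vs the TV-exported
    reference; NaN warm-up bars are skipped."""
    a = np.asarray(engine_series, dtype=float)
    b = np.asarray(tv_series, dtype=float)
    ok = ~(np.isnan(a) | np.isnan(b))
    return float(np.max(np.abs(a[ok] - b[ok])))

# e.g. drift(engine_ema21, tv_ema21) -> compare against the table above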

Drift figures from the in-tree benchmark sweep at HEAD.

Methodology

Don’t trust the table. Reproduce it.

Every number on this page is generated by the public benchmark suite. No hidden config, no API keys, no committed-snapshot tricks. ~3 minutes from a clean clone.

# 1. Clone the open-source engine + benchmark suite
git clone https://github.com/fullpass-4pass/pineforge-engine
cd pineforge-engine

# 2. Pull the LFS-tracked OHLCV (2.3 MB)
git lfs install && git lfs pull

# 3. Run the full three-engine sweep (~3 min)
bash benchmarks/run_all.sh

# 4. Read the results — same table that's on this page
cat benchmarks/results/summary.md
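
And to check the byte-reproducibility claim itself, a minimal sketch that runs the sweep twice and compares hashes. The file path follows the script above; the double run is our addition, and if the summary embeds timestamps, hash the per-strategy result files instead.

import hashlib
import subprocess

def run_and_hash():
    # One full sweep, then hash the generated summary byte-for-byte.
    subprocess.run(["bash", "benchmarks/run_all.sh"], check=True)
    with open("benchmarks/results/summary.md", "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Two clean runs should produce identical bytes.
assert run_and_hash() == run_and_hash(), "output is not byte-stable"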