Pine v6's type system, in practice
Pine Script v6 is more strongly typed than its reputation suggests. Walking through the type rules, the inference, the gotchas — and what shipping a transpiler taught us about which rules actually matter at runtime.
Pine Script has a reputation for being loosely typed. The reputation is partly
deserved and mostly wrong. It earned the label because int widens to float
silently, na is accepted almost everywhere, and the language rarely demands
an explicit annotation. But underneath that permissive surface is a type system
with real teeth — and once you need to transpile it to a statically typed
language, every tooth matters.
This post walks through what Pine v6's types actually are, where the inference layer hides the complexity, and the three gotchas we hit most often while building the C++ codegen.
Reputation vs. reality
A new Pine programmer writes:
x = 5
y = close + x
No annotations. No complaints from the compiler. The programmer concludes "Pine
has no types." But Pine inferred exactly what those variables are: x is a
simple int (a compile-time constant integer), and y is a series float
(a per-bar floating-point value). The inference is doing real work; it's just
invisible.
The distinction between simple int and series float isn't cosmetic. They
have different runtime behaviors, different operator compatibility rules, and
different representations in a transpiled output. The language is typed; it
just doesn't force you to write the types down.
The type kinds
Pine v6 has four layers to its type system. Most users encounter only the first.
Primitives
The primitive types are int, float, bool, string, and color. There's
also na, which deserves its own section below. These are the leaf types — the
ones that hold scalar values.
int and float participate in implicit widening: when you mix them in an
expression, the int widens to float. This is the behavior that earns Pine
the "loosely typed" label, but it's a deliberate and consistent rule, not a
special case. Most mainstream statically typed languages, C, C++, and Java among
them, apply the same implicit numeric promotion.
Composites
Pine v6 supports three composite types: array<T>, matrix<T>, and
map<K, V>. All three are parameterized by their element type, which must
be a Pine type (including UDTs). You can have array<float>, array<int>,
even array<MyUDT>.
Composite types are mutable objects. They're reference types in the sense that
passing an array<float> to a function gives the function a handle to the same
underlying storage. If the function modifies the array, the caller sees the
change.
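In a C++ target, this reference behavior can be mirrored by modeling a Pine array as a shared handle to its storage, so that passing it copies only the handle. A minimal sketch, assuming a `shared_ptr`-based representation (the names `PineFloatArray`, `make_array`, and `push_value` are illustrative, not the actual codegen output):

```cpp
#include <memory>
#include <vector>

// Hypothetical sketch: a Pine array<float> modeled as a shared handle,
// so passing it to a function aliases the same underlying storage.
using PineFloatArray = std::shared_ptr<std::vector<double>>;

PineFloatArray make_array() {
    return std::make_shared<std::vector<double>>();
}

// Mimics a Pine function that mutates its array argument: only the
// handle is copied, so the caller observes the change.
void push_value(PineFloatArray arr, double v) {
    arr->push_back(v);
}
```

The same aliasing rule applies to matrices and maps: copying the variable never copies the storage.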
User-defined types (UDTs)
Pine v6 added the type keyword, which lets you define named record types with
fields and optional methods:
type Order
    float price
    int qty
    bool filled = false

method fill(Order this, float fillPrice) =>
    this.price := fillPrice
    this.filled := true
UDTs are Pine's closest analog to structs. They can have default values for
fields (as shown with filled = false). Methods are called with dot notation
and receive the UDT instance as this. Fields are mutable via := assignment.
Form qualifiers: series vs. simple
Every value in Pine — not just primitives, but composites and UDTs too — carries a form qualifier that describes its relationship to the bar timeline.
- series — the value can differ bar by bar. It has history: x[1] gives you the value of x on the previous bar. This is the default for most expressions involving price data.
- simple — the value is fixed for the entire strategy run. It doesn't have history (accessing x[1] on a simple value is a compile error in most contexts).
- const — the value is known at compile time. const int VERSION = 6. Folded away by the codegen; never appears in the compiled output.
The form qualifiers form a hierarchy: const ⊂ simple ⊂ series. An
expression that mixes forms is promoted to the highest (most general) form.
simple int + series float is series float.
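Because the hierarchy is totally ordered, the promotion rule reduces to taking the maximum over an ordered enum. A minimal sketch of how a checker might encode it (the `Form` enum and `promote` function are assumed names, not the actual implementation):

```cpp
#include <algorithm>

// Sketch of the form-qualifier lattice: const < simple < series.
// Mixing two forms promotes to the more general (larger) one.
enum class Form { Const = 0, Simple = 1, Series = 2 };

Form promote(Form a, Form b) {
    return std::max(a, b);  // the join of the two forms in the hierarchy
}
```

The same join runs for the value types in parallel (int widening to float), so `simple int + series float` yields `series float` in two independent max operations.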
na semantics
na is not a universal null. It's a typed value — na<int> and na<float>
are distinct, and the type is inferred from context. In an expression like:
x = na
Pine infers x as series float by default (the most commonly useful type
for an uninitialised series). In a context where the type is constrained —
say, you later write x := 5 — the inference locks in int.
Arithmetic on na propagates: na + 1 is na. This is intentional and
consistent — if a value is unknown, any expression depending on it is also
unknown. The escape hatch is nz(x, 0), which replaces na with a default
value.
Comparison operators follow the same rule: na == na is na, not true.
This is the gotcha that bites most often. If you want to test whether a value
is missing, use na(x) (the built-in predicate), not x == na.
In our C++ codegen, na maps to std::optional<T>. Arithmetic on
std::nullopt produces std::nullopt. The na(x) predicate becomes
!x.has_value(). The semantics map cleanly; the verbosity is higher in C++.
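The propagation and escape-hatch rules described above can be sketched as small helpers over `std::optional` (the helper names `pine_add`, `pine_nz`, and `pine_na` are illustrative, not the actual PineForge API):

```cpp
#include <optional>

// na-propagating addition: na + anything is na, mirroring Pine.
std::optional<double> pine_add(std::optional<double> a,
                               std::optional<double> b) {
    if (!a.has_value() || !b.has_value())
        return std::nullopt;
    return *a + *b;
}

// nz(x, d): replace na with a default value, like Pine's built-in.
double pine_nz(std::optional<double> x, double d) {
    return x.value_or(d);
}

// na(x): the predicate, true when the value is missing.
bool pine_na(std::optional<double> x) {
    return !x.has_value();
}
```

Note that `pine_na` is the only safe missing-value test, exactly as in Pine: comparing two missing optionals for equality does not answer the question you meant to ask.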
Type inference in detail
Pine infers types from the right-hand side of assignments. The rules are straightforward once you know the form qualifier hierarchy:
a = 5 // const int
b = 5.0 // const float
c = close // series float (close is always series float)
d = close + 1 // series float (int widens to float; series promotes)
e = bar_index // series int (bar_index is series int, always)
The interesting case is inference across branches:
x = condition ? close : 0
condition might be series bool. close is series float. 0 is
const int. Pine widens 0 to series float and the result is series float.
The compiler always picks the most general form and the most encompassing type.
Function arguments constrain inference too. If a built-in expects simple int
and you pass bar_index (which is series int), you get a compile error.
This is where the type system's strictness surfaces: you can't pass a
bar-varying value to a parameter that requires a fixed-at-startup value.
How the C++ codegen represents these types
Translating Pine's type system to C++ requires mapping each Pine form and type to a C++ representation that has the right runtime behavior.
Primitives map to double, int64_t, bool, and std::string.
float in Pine is always 64-bit; we use double throughout.
Series are represented as templated lazy-evaluated history buffers. The abstraction exposes:
template <typename T>
class Series {
public:
    T current() const;
    T at(int bars_ago) const; // implements the [] operator
    void advance(T next_value);
};
When Pine code writes close[2], the codegen emits close.at(2). When a bar
completes, the engine calls advance() on every series to shift the history
window.
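A minimal working sketch of that contract, backed by a deque (the real engine's storage strategy may differ; this only illustrates the `current()`/`at()`/`advance()` behavior):

```cpp
#include <deque>
#include <stdexcept>

// Minimal sketch of the Series interface: index 0 is the current bar,
// index 1 is one bar ago (Pine's x[1]), and so on.
template <typename T>
class Series {
public:
    T current() const { return at(0); }

    T at(int bars_ago) const {
        if (bars_ago < 0 || bars_ago >= static_cast<int>(history_.size()))
            throw std::out_of_range("no history at that offset");
        return history_[bars_ago];
    }

    // Called once per completed bar: the new value becomes index 0
    // and every older value shifts one position back in history.
    void advance(T next_value) { history_.push_front(next_value); }

private:
    std::deque<T> history_;  // index 0 = newest
};
```

A production version would cap the deque at the maximum lookback the script actually uses, so memory stays bounded over long runs.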
na maps to std::optional<T>. Arithmetic operators on Series<std::optional<T>>
propagate nullopt exactly as Pine propagates na. The nz() built-in becomes
x.value_or(default_value).
UDTs become plain structs. Fields become struct members. Methods become
member functions. The Pine field-assignment operator := becomes a plain
C++ assignment; there are no setters.
Arrays and matrices become std::vector<T> and a thin 2D wrapper around
std::vector<T>, respectively.
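The "thin 2D wrapper" can be as small as a row-major index calculation over one contiguous vector. A sketch of the idea (the `Matrix` class here is illustrative; the real wrapper may expose different methods):

```cpp
#include <vector>

// Thin 2D wrapper over contiguous row-major storage, sketching how a
// Pine matrix<T> could map onto std::vector<T>.
template <typename T>
class Matrix {
public:
    Matrix(int rows, int cols, T fill = T{})
        : rows_(rows), cols_(cols), data_(rows * cols, fill) {}

    // Element access, mirroring Pine's matrix.get()/matrix.set().
    T get(int row, int col) const { return data_[row * cols_ + col]; }
    void set(int row, int col, T v) { data_[row * cols_ + col] = v; }

    int rows() const { return rows_; }
    int cols() const { return cols_; }

private:
    int rows_;
    int cols_;
    std::vector<T> data_;  // row-major: element (r, c) at r * cols_ + c
};
```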
The three gotchas we hit most
1. Mixing series and simple promotes to series — even when you expected a constant
myLength = 14
ma = ta.sma(close, myLength)
If myLength is declared in the global scope and never changes, you might
expect the codegen to treat it as a compile-time constant. Pine infers it as
simple int, which is not const — the value is fixed at runtime startup,
not at compile time. The consequence is that ta.sma(close, myLength) is
series float, not something the compiler can fold.
This matters in our codegen because we wanted to const-fold certain window
sizes for performance. Pine's type system correctly refuses: simple is
not const, and a variable declared without the const keyword could in
principle be set via input.int() — which makes it startup-determined, not
compile-determined.
2. UDT field assignment doesn't trigger methods — and that's intentional
type Box
    float value

method doubled(Box this) =>
    this.value * 2

b = Box.new(value = 10.0)
b.value := 20.0 // direct field write, no notification
In an object-oriented language, you might expect b.value := 20.0 to go
through a setter that could call doubled() or trigger any other behavior.
In Pine, it doesn't. Fields are public and directly mutable. Methods are
just functions scoped to the type. If you're building a state machine in a
UDT and you want validation on write, you must explicitly call a method
that does the validation — field writes bypass everything.
We document this internally because it's the main place where a Pine program can look like it has encapsulation but doesn't. The C++ codegen reflects this accurately: field writes are plain struct member assignments.
3. request.security defaults to causal — but the lookahead default has changed across versions
htf_close = request.security("BTCUSDT", "D", close)
This call is type-correct and compiles without warnings. What it returns
is the daily close as of the current bar's time. If the current bar is
a 15-minute bar mid-day, htf_close carries the previous day's close
until the new daily bar closes — that's the causal interpretation.
The historical gotcha: in Pine v4, request.security defaulted to
lookahead=barmerge.lookahead_on, which allowed peeking at the future daily
bar before it closed. This inflated backtest results for strategies that used
HTF data. Pine v5 changed the default to lookahead_off (causal), which is
the correct behavior. Pine v6 preserves the v5 default.
The type system doesn't warn you about this because both modes are
type-correct. The PineForge codegen always applies lookahead_off semantics
by default, matching the Pine v6 reference behavior. If you're porting a
strategy from v4 and your backtest results look surprisingly different on
PineForge — this is likely why.
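The causal rule can be stated as: at any lower-timeframe timestamp, only a higher-timeframe bar that has already closed may be observed. A sketch under that interpretation (the `DailyBar` struct and `causal_daily_close` function are hypothetical names, not the engine's API):

```cpp
#include <vector>

struct DailyBar {
    long long close_time;  // epoch seconds at which the daily bar completed
    double close;
};

// lookahead_off semantics: return the close of the latest daily bar that
// has fully closed as of `now`; return `fallback` (Pine's na) when no
// daily bar has completed yet. Assumes `days` is sorted by close_time.
double causal_daily_close(const std::vector<DailyBar>& days,
                          long long now, double fallback) {
    double result = fallback;
    for (const DailyBar& d : days) {
        if (d.close_time <= now)
            result = d.close;  // completed: safe to observe
        else
            break;             // lookahead_on would peek at this bar
    }
    return result;
}
```

The lookahead_on bug is visible in the loop: dropping the `break` and reading one bar past `now` is exactly the future peek that inflated v4 backtests.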
Why transpiling Pine made us more rigorous
Every ambiguity in Pine's type rules has to become a decision in the C++ codegen. When Pine's reference documentation says "this is implementation-defined" or when two behaviors are both technically consistent with the spec, we can't pick one and hope. We have to test against TradingView's actual output, figure out which branch TradingView took, and implement that branch.
This process has made us read the Pine v6 language reference more carefully
than any user would bother to. It's also generated most of our parity test
cases: when we find a corner case in request.security or strategy.exit
where our codegen disagrees with TradingView, we write a minimal reproducer,
add it to the validation corpus, and fix the codegen until both outputs match.
The type system is where corner cases live. The form qualifiers — series vs.
simple vs. const — look like a minor implementation detail, but they
determine which operations are legal, which values can be passed to which
functions, and which expressions can be evaluated at compile time rather than
bar by bar. Getting them right is the difference between a transpiler that
merely works and a good one.
Where to go next
- Try the codegen API from Claude or Cursor — transpile your own Pine v6 strategies and run them against local OHLCV data. The transpile_pine tool returns the C++ output if you want to inspect the type representations.
- Browse the gallery — the validation category includes 142 strategies specifically designed to exercise type-system edge cases. The trade counts tell you which strategies fire enough to make errors visible.
- Read why we built this — the origin story of the engine and the decision to target C++ as the transpilation target.