How mathematics stabilizes from traces
This page presents a formal core. It does not replace standard mathematics, but shows how familiar structures emerge as stabilizations of interruptible, interpretable processes.
This is a work in progress: formal definitions are not yet fully rigidified, in order to simplify transmissibility, and are still susceptible to change.
In this work, the most primitive objects are not variables, symbols, or terms, but traces. A trace is a local mark indicating that something happened, without yet specifying what, where, or why. Dots and partial crossings are used to represent such traces.
From collections of traces, a notion of local space can be recovered. Only after space is locally reconstructed do environments appear, as regions bounded by strokes. These environments may later be linearized as brackets, yielding well-parenthesized expressions compatible with standard mathematics.
Variables are therefore not primitive. They arise as stabilized summaries of repeated traces, once sufficient regularity allows compression into symbols. In this sense, strokes precede variables, and variables remember strokes only imperfectly.
Throughout this work, reasoning is both inductive and projective. When constructions are foregrounded, induction is the default. When relational spaces are foregrounded, projection is the default. Communities fix these defaults locally, not globally.
At this stage, in order to make transmission possible rather than merely local, interpretation must be rigidified. This is achieved by fixing a default point of entry, taken here to be the d-point of view.
This choice is explicitly anthropomorphised. The symbols are not treated as neutral placeholders: d is read as Dolores, q as Qudsia, p as Persephone, and b as Benedict. This anthropomorphisation functions as a stabilizing device, enabling a rigid and translatable interpretation across readers and implementations.
What is fixed here is not belief, but orientation. Alternative points of view remain possible, but this one is declared as default for all constructions that follow.
From this point onward, all constructions are written from the d-point of view.
Nothing below assumes numbers, functions, or sets.
Before introducing formal objects or structures, it is necessary to clarify the minimal conditions under which mathematics can emerge and stabilize within a human community.
These conditions are not themselves mathematical statements. They do not belong to the object language. Rather, they constrain what it means for a formal system to be learnable, transmissible, and revisable in practice.
In this work, a mathematical framework is considered acceptable only if the processes by which it is learned and stabilized can themselves be rigidified.
In this sense, the stabilization of practices within a community functions as a reference frame: it is what makes discretization possible without presupposing it. Mathematical entities thus appear as discrete only relative to such stabilized frames; they are not taken as pre-existing primitives, but as invariants extracted from repeatable, interruptible, and interpretable activity.
What follows are examples of how one can begin to formalize mathematical constructions starting from the idea that symbolic objects emerge only after sufficient stabilization of traces and shared activity. These constructions are not presented as definitive axioms or universal laws, but as *illustrative rigidifications* that show one way to recover familiar mathematical behavior from more primitive notions.
Synchronization is treated as a primitive relation. It is not reduced to equality, substitution, or function application.
A synchronizer relates two pairs (belief, assurance), while itself carrying a situated view of what is being synchronized. Neither belief nor assurance is absolute; both are defined relative to a presumed center of the community in which synchronization takes place.
This allows derivations to be understood not as single analytic paths, but as families of compatible paths maintained in superposition until a local rigidification is required.
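As a minimal illustration, and with all names hypothetical rather than part of the formal core, such a synchronizer can be sketched as a record relating two (belief, assurance) pairs while carrying its own situated view:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Stance:
    """A (belief, assurance) pair, both relative to a presumed community center."""
    belief: str
    assurance: str

@dataclass(frozen=True)
class Synchronizer:
    """Relates two stances while carrying a situated view of what is
    being synchronized; it is a relation, not a function."""
    left: Stance
    right: Stance
    view: str

# Two community members aligning on a shared trace.
sync = Synchronizer(
    left=Stance(belief="mark seen", assurance="weak"),
    right=Stance(belief="mark seen", assurance="strong"),
    view="alignment on a single trace",
)
print(sync.view)
```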
What is usually called a “default” is not treated here as an absence of information, nor as an error state. Instead, it is understood as the result of a progressive relaxation of synchronization requirements.
Consider an attempt to synchronize an algorithm, a command, and a parameter. Such an attempt may succeed fully, but it may also fail partially, while still preserving meaningful structure.
Rather than collapsing into failure, synchronization may settle on weaker agreements: first retaining the algorithm and command, then the algorithm alone, and finally only an explicit boundary indicating that something was intended, even if its content can no longer be recovered.
These successive relaxations do not represent errors. They correspond to admissible fallback positions, each preserving a form of mutual intelligibility within a community.
In its weakest form, the default reduces to a visible boundary, such as -|-. This boundary is not meaningless: it marks the persistence of an intention to communicate, even when algorithmic, semantic, or parametric details can no longer be synchronized.
Defaults are thus understood as stabilized outcomes of interrupted synchronization processes. They play a central role in allowing systems, human or formal, to remain operational under informational loss or mismatch of expectations.
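A minimal sketch of this relaxation ladder, assuming a hypothetical community-supplied agreement predicate; the fallback contents and the -|- boundary follow the text above:

```python
# Progressive relaxation of synchronization requirements.
# Each fallback level preserves a weaker but still intelligible agreement.

def synchronize(algorithm, command, parameter, agree):
    """Try successively weaker agreements; never collapse into failure.

    `agree` is a community-supplied predicate deciding whether a given
    partial tuple can be synchronized at this moment.
    """
    if agree((algorithm, command, parameter)):
        return ("full", (algorithm, command, parameter))
    if agree((algorithm, command)):
        return ("algorithm+command", (algorithm, command))
    if agree((algorithm,)):
        return ("algorithm", (algorithm,))
    # Weakest form: only a visible boundary marking the intention
    # to communicate survives.
    return ("default", "-|-")

# Example: a community that currently accepts only single-component
# agreements still leaves a usable trace.
level, content = synchronize("sort", "run", 42, agree=lambda t: len(t) == 1)
print(level, content)   # -> algorithm ('sort',)
```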
No construction presented here is terminal. Every stabilization admits further refinement, rescaling, or reinterpretation. What appears as a final object is only final relative to a suspended process.
In particular, defaults do not converge toward a fixed point. They form plateaus: regions of temporary agreement from which computation may resume without contradiction.
The discussion above implicitly assumes that the space of inputs is well defined. This assumption itself, however, cannot be guaranteed without further stabilization.
In any system relying on memory, transmission, or inherited traces, parts of the past may be missing, corrupted, or no longer accessible. From the point of view of the present computation, this is indistinguishable from an interruption that occurred before interpretation began.
As a consequence, one cannot assume that all regions of the input space are equally meaningful or even available. What counts as an input is already the result of a community-level agreement on what may be trusted, ignored, or reconstructed.
This does not weaken formal reasoning. It clarifies the scope of its guarantees: correctness is always relative to a stabilized fragment of history, not to an idealized total memory.
The following expands on how notions that resemble probability can emerge from stabilized sequences. It is not required for basic understanding of this framework.
In this framework, probabilities are not primitive. What comes first are numbers: the number of occurrences, the number of times something was judged plausible, the number of times it was rejected. These numbers only make sense relative to when and where the interactions took place. Probability appears later as a compression of such counts, inside a space–time context where experiments are repeatable, discretised, and comparable. Without this context, probability has no meaning. Probability is therefore not introduced here as a primitive measure attached to events; it emerges from how a community stabilizes selections within a rigidified sequence of past interactions.
Let $E = (e_1, e_2,\ldots , e_n)$ be a finite or growing sequence, stabilized as a shared history. No intrinsic classification is assumed on its elements. All distinctions arise from contextual selection.
A community does not operate by extracting intrinsic subsequences from a given history. Instead, it stabilizes a finite sequence E and repeatedly traverses it under a fixed but revisable length. POP-like constructions then point to positional addresses across these repetitions. What matters are not elements taken out of E, but the positions that are selected when the sequence is read, reread, or synchronized at different moments. Two selections are considered disjoint when they refer to non-overlapping positional addresses within these traversals.
When introducing labels for such selections, no denomination is neutral. Interchanging letters throughout the construction changes which compatibilities are preserved, which expectations are stabilized, and which interpretations remain accessible.
With this in mind, a stabilized reading may be presented as:
E ≃ A ⊔ C ⊔ R
This presentation should not be read as a decomposition of an object into intrinsic parts, but as a bookkeeping device relative to a chosen traversal and addressing scheme.
The three labels may be read as roles drawn from {Good, Bad, Undecided}, which remain interchangeable and revisable. Which roles are assigned to A, C, or R is not intrinsic: any of the three labels Good, Bad, or Undecided may later be reassigned as the community revises its stabilization.
To preserve fully positive integer boundaries, constructions may restrict attention to histories whose length |E| is a multiple of a fixed base. This allows multiplicities to remain explicit while avoiding normalization assumptions.
At this level, probability is recovered through cardinality alone. For a stabilized sequence of length n, the community may estimate:

π(A) = |A| / n,  π(C) = |C| / n,  π(R) = |R| / n
These quantities are not probabilities in an absolute sense. They are contextual summaries of stabilized selections, meaningful only relative to the community and the chosen selectors.
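A minimal sketch of this bookkeeping, with illustrative positions and labels (the assignment of Good, Bad, Undecided to A, C, R remains revisable, as noted above):

```python
# Contextual frequencies recovered from cardinality alone.
# E is a stabilized sequence; selections are sets of positional
# addresses, assumed pairwise disjoint by the community's bookkeeping.

E = ["e1", "e2", "e3", "e4", "e5", "e6"]   # stabilized history, |E| = 6
A = {0, 3}                                 # positions currently read as "Good"
C = {1, 4}                                 # positions currently read as "Bad"
R = {2, 5}                                 # positions currently read as "Undecided"

assert A.isdisjoint(C) and A.isdisjoint(R) and C.isdisjoint(R)

n = len(E)
pi = {name: len(sel) / n for name, sel in {"A": A, "C": C, "R": R}.items()}
print(pi)   # contextual summaries, not absolute probabilities
```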
This construction is compatible with Kolmogorov-style interpretations, but does not commit to a single arithmetic. Different orientations may induce distinct local probability structures, each with its own topology, geometry, and notion of convergence.
What matters at this stage is not which arithmetic is chosen, but that these distinct probabilistic views remain synchronizable through shared sequences and revisable partitions.
The first families of manipulable quantities introduced here are not defined as abstract numbers, but as boundary-stabilized processes. They are designed to behave numerically only after sufficient rigidification.
A quantity X is described by two coupled components, $X_d$ and $X_p$, corresponding to distinct directional accumulations. Neither component is meaningful in isolation; their relation defines what may later be projected as a numerical value.

Access to these components is mediated by contextual gestures, such as get and push, each associated with a boundary orientation. These gestures should not be read as functions, but as situated acts of reading or writing, indexed by an anticipated interpretation.

The evolution of $X_d$ and $X_p$ is multiplicative and memory-dependent: $X_d = (X_{d_q} / X_{d_b}) \cdot \mathrm{old}(X_d)$ and $X_p = (X_{p_d} / X_{p_q}) \cdot \mathrm{old}(X_p)$.

A rational projection may be recovered by defining $\mathrm{val}_{\mathbb{Q}}(X) = X_d - X_p$, but this projection is a rigidification, not the primitive object.
Internally, construction histories may be encoded using factorizations of the form $2^i(2j+1)$, where elementary POP operations increment either depth (i) or lateral variation (j). Such encodings compactly represent families of compatible construction paths, rather than single numerical states.

One may, for example, encode the depth parameter i using a unary trace of i consecutive |, making the construction history itself directly readable and interruptible.
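Since the factorization $2^i(2j+1)$ of a positive integer is unique, it yields a bijection between positive integers and pairs (i, j); a sketch, including the unary |-trace for depth:

```python
# Encode a construction history (depth i, lateral variation j) as a
# single positive integer 2^i * (2j + 1), and decode it back.

def encode(i, j):
    return (2 ** i) * (2 * j + 1)

def decode(n):
    i = 0
    while n % 2 == 0:   # peel off factors of 2: depth increments
        n //= 2
        i += 1
    return i, (n - 1) // 2

def unary_depth(n):
    """Render the depth parameter as a unary trace of consecutive '|'."""
    i, _ = decode(n)
    return "|" * i

n = encode(3, 5)                       # 2^3 * 11 = 88
print(n, decode(n), unary_depth(n))    # 88 (3, 5) '|||'
```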
In this sense, numbers emerge as communal invariants: stabilized summaries of repeated, interpretable actions performed under shared boundary conventions.
At early stages of construction, composition is strictly ordered. Traces are read sequentially, and grouping matters: (a · b) · c is not interchangeable with a · (b · c).
However, when construction histories are written side by side rather than nested, a change of regime occurs.
a b c
| | |
Side-by-side inscription suppresses intermediate grouping information. What was once associativity-sensitive becomes readable as commutative, not because operations commute, but because the trace no longer records how they were composed.
Commutativity thus appears here as a lossy projection of associative structure. It is not a property of the operation, but of the writing regime.
This explains why higher- and lower-order terms, when flattened into a common plane, become interchangeable: their original temporal nesting has been erased by stabilization.
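A sketch of this lossy projection, assuming nested tuples for the grouped regime and occurrence counts for the side-by-side regime:

```python
from collections import Counter

# Grouped composition: nesting records how operations were composed.
left  = (("a", "b"), "c")    # (a · b) · c
right = ("a", ("b", "c"))    # a · (b · c)
assert left != right          # associativity-sensitive as trees

# Side-by-side inscription: flatten and keep only occurrence counts.
def flatten(t):
    if isinstance(t, tuple):
        for x in t:
            yield from flatten(x)
    else:
        yield t

assert Counter(flatten(left)) == Counter(flatten(right))
# The traces agree once grouping is erased: commutativity appears as a
# property of the writing regime, not of the operation.
```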
Associativity is not rejected but postponed. It becomes meaningful only after empty regions have been identified and after interactions have been localized in space and time. Premature associativity hides ambiguity instead of resolving it.
When a positive integer is used here, it should not be imagined by default as a fixed, atomic quantity. It is better understood as a communal dimension: something that may be inflated, partitioned, or redistributed depending on context.
In particular, when viewed through an anthropomorphic process (ATM), a positive integer participates in a never-stopping dynamic of insertion and extension. Even when bounded at a given scale, at least one of its aspects behaves as if it belonged to an unbounded or infinite-dimensional system.
This does not contradict finite arithmetic. It explains how finite behavior is recovered as a rigidification of a process that remains, in principle, extensible and revisable.
This perspective will later allow different modes of counting, probability, and synchronization to coexist, depending on which dimensions are stabilized and which remain open.
In the sections that follow, we explore how familiar numerical behaviors emerge from collections of simple gestures and boundaries. The aim is not to redefine arithmetic, but to show how it can be *recovered* after sufficient stabilization of traces and roles.
We represent a point using a minimal visible mark: for example an occidental dot ., a disk or a circle, or an Arabic–Indic dot ٠, possibly rendered as a lozenge.
These marks must not be read as numerical zeros. Zero already carries algebraic commitments (notably absorption under multiplication) that are inappropriate here. What is represented is a trace: the minimal indication that something happened.
Formally, a trace may be understood as the intersection of distinguishable strokes. For instance, if the available symbols are - and |, a trace corresponds to a crossing that does not belong to either generating subspace alone, and may require additional (possibly imaginary) directions to be made separable.
A trace designates a location in which a community may later be embedded, attempting to relate this occurrence to past and future events. Any such embedding must allow locally reconcilable meaning to be obtainable from within the community, relative to its own inherited parameters.
In the strong derive model, separability is guaranteed by a time demon. The demon is not a community: it has no beliefs, no perspective, and no access to “what time it is”. Its sole role is structural: at each observation, it ensures the insertion of a finite number of new strokes at positions that remain pairwise separable (for instance via imaginary bounding boxes).
Any entity equipped with a notion of time, history, or revision is already considered a community. The demon only enforces the possibility of distinguishing successive insertions; interpretation and temporal meaning arise exclusively at the community level.
Viewed as an encoding process, such constructions may later be read as relating parameters n and m together with matchings between copies of past configurations and updated ones, where new traces are added without erasing earlier occurrences.
A derive is not a symbol, nor an object, but the stabilized trace of a synchronization event. It records that something happened at a place where several potential interpretations briefly aligned.
In the constructions considered here, derives are observed relative to four distinguished axes. Each axis carries an ordered sequence of integer-indexed positions, interpreted as possible anchoring sites for traces.
When a derive occurs, it manifests locally as the appearance of three distinguishable dots across the axes. These dots are not points in a fixed space, but marks indicating that a crossing, or near-crossing, of strokes has been synchronized.
From the point of view of a community, all strokes may be read as lines originating from integer positions on an axis. A local rigidification mechanism (sometimes referred to as a time demon, though it is not itself a community) ensures that relative scales are preserved under rescaling: intervals may expand or contract, but their ordering and proportional relations remain comparable.
Tie-breaking is only required when a trace lies exactly at the midpoint of an interval. In such cases, different axes adopt distinct tie policies, preventing symmetry collapse and allowing derives to remain distinguishable. Notice that, inside a synchronised process, a careful insertion procedure can avoid ties altogether by using the synchronising command order as a lexicographic tie breaker.
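A sketch of such an insertion step, with hypothetical per-axis tie policies; positions are exact rationals so midpoints can be detected reliably:

```python
from fractions import Fraction

def anchor(pos, lo, hi, axis_policy):
    """Anchor a trace at pos inside the interval (lo, hi).

    Only an exact midpoint requires tie-breaking; distinct axes adopt
    distinct policies, preventing symmetric collapse.
    """
    pos, lo, hi = Fraction(pos), Fraction(lo), Fraction(hi)
    assert lo < pos < hi
    if pos != (lo + hi) / 2:
        return pos                     # no tie: keep the position as-is
    step = (hi - lo) / 4
    return pos + step if axis_policy == "lean-high" else pos - step

# The same midpoint trace lands differently on two axes with distinct
# tie policies, so the resulting derives remain distinguishable.
print(anchor(Fraction(1, 2), 0, 1, "lean-high"))  # 3/4
print(anchor(Fraction(1, 2), 0, 1, "lean-low"))   # 1/4
```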
When viewed globally, a derive may be represented as a spiral-like trajectory: moving forward corresponds to continued synchronization, while moving backward is only permitted along specific axes. This backward motion may be read as a restart, or the initiation of a new liane.
A liane may simply cross space, represented schematically as |.|, or may bounce and reconnect, as in |.|.|. In all cases, full interpretation is deferred: whether a crossing corresponds to a completed pairing or to a pending synchronization is decided only when local spaces are later fitted together.
Derives thus support revision. They encode not only what was constructed, but also the possibility that future messages, returning along different paths, may alter how past traces are understood. This preserves continuity without freezing meaning.
Consider composite traces built from the same internal components, but rigidified through different directional couplings:
.[λ[.]μ − σ[.]ρ].
.[λ[.]μ | σ[.]ρ].
These expressions differ only by the rigidification (- versus |), yet they induce distinct synchronisation possibilities when read by a community.
A partial decoding operator may attempt to recover an inscription, read here as c, within the context the operator has to create from a trace configuration. Decoding may succeed and recover c, or fail and leave only a directional trace (- or |), depending on available synchronisations and inherited rigidifications.
What remains is always a trace: never pure absence, never guaranteed meaning.
Consider the act of drawing a graph. The drawing itself is a rigidified trace: once it exists, it constrains interpretation, even though the exact history of its construction may no longer be uniquely recoverable.
From the point of view of the present reader, the drawing belongs to a superposition of plausible past constructions. To make interpretation possible, a community adopts a canonical reading of drawings of this kind.
One such canonical reading may be described as follows. Vertices are encountered in an order of emergence, and at each encountered vertex a small set of admissible gestures is assumed.
Once a vertex belongs to a liane, subsequent encounters of the same vertex may either continue that liane or initiate a new instance, treated as a distinct construction even if it reaches a previously visited root.
In this way, a timed construction emerges: not as a single path, but as a sequence of path fragments indexed by the moments at which vertices are encountered.
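A sketch of this timed reading, with a hypothetical continuation policy; fragments are indexed by the moments at which vertices are encountered:

```python
# A canonical timed reading of a drawn graph: vertices are visited in
# an order of emergence, and each re-encounter either continues the
# current liane or starts a new fragment. The policy is illustrative.

def timed_reading(encounters, continues):
    """Split a sequence of vertex encounters into liane fragments.

    `continues(vertex, fragment)` decides whether this encounter
    continues the open fragment or initiates a new instance.
    """
    fragments, current = [], []
    for t, v in enumerate(encounters):
        if current and not continues(v, current):
            fragments.append(current)      # close the liane here
            current = []
        current.append((t, v))             # fragments are time-indexed
    if current:
        fragments.append(current)
    return fragments

# Example policy: start a new instance whenever a vertex repeats the
# root of the open fragment.
not_root = lambda v, frag: v != frag[0][1]
print(timed_reading(["a", "b", "c", "a", "d"], not_root))
# [[(0, 'a'), (1, 'b'), (2, 'c')], [(3, 'a'), (4, 'd')]]
```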
What will later be interpreted as derivations, paths, or algebraic relations are thus grounded in a prior stabilization: a shared way of reading drawings as records of possible histories, rather than as fixed objects.
Consider a pixelized screen whose resolution evolves over time. At the outset, only four pixels are present, located at the four corners of the screen. Each corner stores a memory, not as a value, but as a pair of integers: reasons to say black and reasons to say white.
A screening process then unfolds. Using POP-like local operations, each cell is visited in turn. For each cell, a provisional color is proposed based on its current confidence pair, without requiring a definitive decision.
Resolution is not fixed. At certain moments, a rescaling command may occur: zoom in or zoom out. Zooming in creates new pixels; zooming out aggregates existing ones. Importantly, older memories are not erased. They remain available as contextual traces influencing future decisions.
When a new pixel appears, its coloring is not chosen in isolation. It is inferred from the confidence pairs of neighboring pixels and from the older memories retained at previous resolutions.
Confidence evolves asymmetrically. For example, refinement may occur slowly, following a pattern such as (n+1)/n, while coarsening may occur more abruptly, following patterns such as n → 4n/5. These rates are not laws, but stabilized community choices.
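A sketch of a single pixel's confidence-pair bookkeeping under these choices; the rates follow the quoted patterns and are assumptions of this illustration, not laws:

```python
from fractions import Fraction

# Each pixel stores a confidence pair (reasons-black, reasons-white);
# a provisional color is proposed without a definitive decision.
def propose(black, white):
    return "black" if black >= white else "white"

# Asymmetric confidence evolution, following the patterns in the text:
# slow refinement n -> n * (n+1)/n = n + 1, abrupt coarsening n -> 4n/5.
def refine(n):
    return n * Fraction(n + 1, n)

def coarsen(n):
    return Fraction(4, 5) * n

black, white = 3, 2                  # an inherited corner memory
print(propose(black, white))         # 'black', provisionally
black = refine(black)                # 4: one more reason, gained slowly
white = coarsen(white)               # 8/5: confidence reduced abruptly
print(propose(black, white))         # still 'black', still revisable
```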
Over time, what appears as a stable black-and-white image emerges from the accumulation, comparison, and partial synchronization of many such local confidence pairs across scales. The image is thus not a snapshot of truth, but a stabilized reading of an evolving communal memory.
By fixing reading orders, stabilizing roles, and enforcing consistent insertion rules, the constructions above recover familiar structures: graphs, planar embeddings, automata, and discrete proof systems.
Edges may be interpreted as bi-labeled relations, carrying one pair at their origin and another at their destination. Interrupted edges remain meaningful, carrying only a single pair.
Standard mathematical objects thus appear not as foundational entities, but as compressed stabilizations of compatible synchronizations between construction processes, explanatory positions, and the constraints required to make them mutually intelligible within a community.
A key principle of this work is to remove choice as an axiom and replace it with the superposition of multiple plausible constructions. The hope is that what appears as choice can, in some frameworks, be recovered as a derived property rather than a primitive one.
This intuition connects to classical rewriting systems, such as Bourbaki’s τ, which can be described as “find all occurrences of a symbol inside a term and substitute each by a square linked to this $\tau$.” Here, this idea is revisited as a process rather than a rule: no box is taken as primitive or magical. Whenever a box appears, it must be imagined as emerging from at least four strokes, combining vertical and horizontal strokes, in a discretized, interruptible procedure for which $\tau$ is an informal limit.
We lift this viewpoint to meta-mathematics by treating strokes as generators of viable mathematical spaces. Context is introduced by drawing brackets, more precisely the symbols [ and ], which already require at least three strokes.
In this way, context is not abstracted away but explicitly drawn. These brackets can be interpreted as parentheses in a linearized form of a term, fixing a partial or total associativity rigidification.
To avoid tautologies and the demons of certainty, we deliberately restrict what is allowed to close: we draw only partial boundaries, never fully closed boxes.
Anthropomorphic viewpoints naturally imagine distorted links between partial realities as oriented corners. For example, Dolores may appear as ⌈ or /, Qudsia as ⌊ or \, Persephone as ⌉, and Benedict as ⌋. These are not symbols of identity but of orientation.
Links themselves can be distorted to create navigable spaces, such as dashed connections (-.-) or split vertical paths. They may also be equipped with arrows to encode intent, not merely direction. An arrow represents how a speaker imagines a stroke was drawn, or where attention is placed, much like a machine head positioned on a boundary and oriented toward one of several possible continuations.
When interaction stabilises — when someone answers — quantisation of the joint history (intent, strokes) is suspended. Dots disappear, and we read only plausible boundaries: partially drawn boxes whose meaning remains revisable and whose closure is intentionally deferred.
A crucial limitation must be stated explicitly: when a box-like region appears, one cannot know whether it contains something or nothing. As a consequence, rewriting cannot rely on a single operator. Instead, we work with a superposition of two rewriting processes: one treating a region as a looks-like box (LLbox), and one treating it as a looks-like nothing (LLn).
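A minimal sketch of this superposed rewriting, with hypothetical branch readings; both branches are carried until later structure allows one to be discarded:

```python
# A region whose content cannot be locally decided is carried as a
# superposition of two readings: LLbox (it contains something) and
# LLn (it looks like nothing). Rewriting applies to both branches.

def rewrite(region, as_llbox, as_lln):
    """Apply both readings and keep the pair; neither branch is
    collapsed until additional structure is introduced."""
    return {"LLbox": as_llbox(region), "LLn": as_lln(region)}

# Hypothetical readings of a partially drawn box "[ ? ]".
result = rewrite(
    "[ ? ]",
    as_llbox=lambda r: f"keep({r})",   # treat the region as filled
    as_lln=lambda r: "-|-",            # treat it as an intended blank
)
print(result)   # {'LLbox': 'keep([ ? ])', 'LLn': '-|-'}
```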
One has to keep in mind the plurality of interpretations assumed here: for instance, the notions just defined as looking like boxes (or co-boxes) will also have locus-dependent interpretations as connectors or links.
Negation is therefore not fully functorial. Errors of addressing, interpretation, or synchronisation prevent us from deciding locally whether absence should be trusted as emptiness. Any decision must be delayed until additional structure is introduced.
Associativity is introduced later as a rigidification, not as a primitive axiom. It acts by collapsing empty regions through synchronisation mechanisms, such as message passing or majority effects, where loops are resolved rather than assumed away. Until then, mathematical reasoning takes place in a superposed space of plausible drawings, composed of local discrete regions interacting without global associativity.
Perfect synchronization preserves symmetry and therefore preserves loops. Alternation, even minimal, is sufficient to break loops once history is recorded. This is why bidirectionality and delay are not defects, but the minimal cost of making progress without contradiction.
What can always be shared inside a community is not a value, but a count. Counts arise from repetition: how many times something appears, how often it stabilises, how many times it fails. Compression is achieved by rescaling these counts, not by assuming exact equivalence.
Numbers are not truths here; they are memories.
Formalization here explicitly accounts for the imagined reader. Writing anticipates reading strategies; reading reconstructs plausible writing histories.
Coherence is preserved not by enforcing a single path, but by maintaining translatable links between compatible interpretations. This allows standard formalisms (manuals, archives of proofs, established notations) to coexist with more flexible constructions.
At a later stage, lianes will be interpreted as stabilizers carrying a phase. Crossings and uncrossings will then appear as controlled transformations, rather than primitive geometric facts.
One objective of this bridge is to explain why quantum isomorphisms count homomorphisms into planar graphs, without restricting the underlying constructions to a fixed two-dimensional Euclidean space.
Geometry here is treated contextually: each point may locally recover a triple of distinguishable directions, sufficient to support stabilization and phase, without enforcing a global embedding.
The requirement that each stabilizer carry at least three terms will later be introduced as a technical condition. It is not foundational, and does not affect the present constructions.