“On the shoulders of giant numbers”
http://www.iteror.org/big/book/ch2/ch2_5.html
bigΨ
Ψ.2. Up to the Ackermann numbers
chapter 2.5, edit 0.2.9
published: 20110107
updated: 20111230
# 2.5. Rules of precedence
§2.5.1. Right minority and right majority
[Photo: Donald Knuth, Stanford 1998]
Star * and arrow ^ operations are right associative – evaluated moving from right to left. The various operations +, ×, ^ of the old school reduce in the same familiar direction.
Star associativity looks familiar, but our use of minority precedence for the star operators *.. differs from the majority precedence taught at school and applied to the classic arrowhead operators ^..
The rule of minority precedence compares neighbouring operations (those that share an operand), so that smaller or equal operators must be evaluated before larger operators. Example:
111**111*111*111 = 111**111*111111111
The main advantage here is that we don't need extra brackets to further reduce the result of the operation. Usually after an operation is addressed for evaluation, its reduction takes place within brackets.
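As a sketch in modern terms (not the book's own implementation), the right-minority reduction order can be simulated in Python: unary numerals are held as plain integers, star operators as their level (1 for *, 2 for **, and so on), and at each step the rightmost operator that is ≤ its left neighbour and < its right neighbour is evaluated.

```python
def star(c, a, b):
    """a *{c} b: level 1 is multiplication, higher levels fold right associatively."""
    if c == 1:
        return a * b
    result = a                      # a *{c} 1 = a
    for _ in range(b - 1):
        result = star(c - 1, a, result)
    return result

def reduce_minority(nums, ops):
    """Evaluate an operator row under right minority precedence {n<=m}."""
    nums, ops = list(nums), list(ops)
    while ops:
        for i in range(len(ops) - 1, -1, -1):
            left_ok = i == 0 or ops[i] <= ops[i - 1]            # {n<=m}
            right_ok = i == len(ops) - 1 or ops[i] < ops[i + 1]  # {n<p}
            if left_ok and right_ok:
                nums[i:i + 2] = [star(ops[i], nums[i], nums[i + 1])]
                del ops[i]
                break
    return nums[0]

# 111**111*111*111: the rightmost * goes first, then the next *, then **
print(reduce_minority([3, 3, 3, 3], [2, 1, 1]))  # 3**27 = 7625597484987
```

Right associativity of equal stars falls out of the strict {n<p} check: in a run of equal operators the rightmost one is reduced first.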
Donald Knuth's uparrows, continuing as majority arrows ^_{R;}.., will overrule the minority nature of stars *_{S;}.., so that in complex expressions a mix of * and ^ operations can be used instead of brackets. The mutual precedence order of these superpower operators is illustrated below.
At the factorial precedence position (!) other unary operations also take place. But units come first, such as the natural units 1.. which are joined by default, and the infinite units ω which join from right to left.
In the formula for right minority stars the precedence function
G
will eat away expressions from the inside out.
G(X*^{..}a*^{..}b) {*#m *#n n≤m} = G(X*^{..}(a*^{..}b)) {*#m *#n}
G(X*^{..}a*^{..}b*^{..}Z) {*#m *#n *#p n<p n≤m} = G(X*^{..}(a*^{..}b)*^{..}Z) {*#m *#n *#p}
G(a*^{..}b*^{..}Z) {*#k *#m k<m} = G((a*^{..}b)*^{..}Z) {*#k *#m}
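The three G rules can be mimicked by a small bracketing routine – a hypothetical sketch, not the book's code – that repeatedly brackets the operator whose level is ≤ its left neighbour ({n≤m}) and strictly below its right neighbour ({n<p}):

```python
def minority_brackets(nums, ops):
    """Fully bracket an expression per the right-minority rules for G.

    nums: operand strings; ops: operator strings ('*', '**', ...),
    compared by length, which equals the operator level."""
    nums, ops = list(nums), list(ops)
    while ops:
        for i in range(len(ops) - 1, -1, -1):
            left_ok = i == 0 or len(ops[i]) <= len(ops[i - 1])            # {n<=m}
            right_ok = i == len(ops) - 1 or len(ops[i]) < len(ops[i + 1])  # {n<p}
            if left_ok and right_ok:
                nums[i:i + 2] = ['(' + nums[i] + ops[i] + nums[i + 1] + ')']
                del ops[i]
                break
    return nums[0]

print(minority_brackets(['a', 'b', 'c', 'd'], ['**', '*', '*']))
# (a**(b*(c*d)))
print(minority_brackets(['a', 'b', 'c'], ['*', '**']))
# ((a*b)**c)
```

The second call shows the minority effect: the smaller * on the left is bracketed before the larger ** on the right.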
# Presiding house rules

The units x substituting for a number variable a:(x) and the signs of X substituting for an ellipsis ... or wildcard A:X will be evaluated within the expression they become part of.
Backslashed \ operations will resolve when the host operator has been counted down. Outer backslashes wait for inner ones, so the rightmost guest operation H\×G has the lowest precedence.
Parameter separators ,.. and ;.. let the parameter values be evaluated first. All types of separators are fully controlled by the function definition, not by associativity.
Further left in our precedence pecking order come the comparison and equality signs ~ = < > etc. (mixed) and the logical signs for and (&) and or.
After this you'll read text again, which controls the flow (brackets are Lords of the Flow [ ;–).
Within metaexpressions {M} standard signs help to quantify the metasigns # : which are evaluated last. Metastatements are resolved moving from left to right, though their corresponding marks in the source are not necessarily in that order, and left incrementors {X#i} have to wait to be resolved by their right owners.
§2.5.2. Operator comparison
Binary precedence usually works by scanning expressions from right to left, until two neighbouring operators are compared by a rule which decides that the right operation must be evaluated first. In the case of right minority precedence, scanning to the left, the comparison rule {n≤m} demands that for a right operator n to be evaluated, it must be smaller than or equal to the operator m on the left.
Given right associativity and that we only compare for operator size, 7 distinctive types of precedence for binary operators can be put in the following order. For left associative operators the signs < and > should be reversed.
Minority precedence {n≤m} results in the biggest numbers of all 7 types of comparison, because the highest operators will be served the fattest operand values. We have right associativity (straight evaluation n=n) as a good second; the rest is trailing. Try them on expressions like 2**3**2*2**3 – we still need more statistics! [above order was a good guess :–]
A human being would pick out the smallest operators and resolve those first. An expression ruled by right minority precedence can then be seen as subdivided by its larger operators waiting to be resolved – which appeals to our intuitive notion of larger vessels containing smaller ones – with addition at the bottom, corked in a bottle.
A series of mixed number units such as ω1.. must be added right associatively. In the context 0_{^} of countable arrows a^^{...}b {^#0} the operation with zero operators is multiplication, and addition can only be represented by a negative number of ^^{..} {^#} operators.
If we place variables a b next to each other, there is (by definition) no operation happening within the universe or context where the operator rules. Here ab fuses to become one entity – a single number – until a simpler context is taken into account.
In the context 0_{*} of minority stars the principle is clear – addition simply joins two natural numbers 1..1.. Where in arrow context 0_{^} two quantities ab are joined by multiplication, addition requires an extra rule.
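In Knuth's arrow context this convention is just the familiar base case of the hyperoperations: zero arrows multiply, one arrow is a power, and each extra arrow folds the previous level right associatively. A minimal Python sketch:

```python
def arrow(a, b, c):
    """a ^{c} b with c arrows; c = 0 is multiplication (the 0^ context)."""
    if c == 0:
        return a * b
    result = a                          # a ^{c} 1 = a
    for _ in range(b - 1):
        result = arrow(a, result, c - 1)  # right associative fold
    return result

print(arrow(2, 3, 0))  # 6: zero arrows = multiplication
print(arrow(2, 3, 1))  # 8: 2^3
print(arrow(2, 3, 2))  # 16: 2^^3 = 2^(2^2)
```

Addition has no slot in this scheme: below c = 0 the recursion has nowhere to go, which is exactly why the arrow context needs an extra rule for it.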
For quantities the prospect of such instant multiplication looks grim, but for the mathematical qualities of the units {1,,0,ω,ψ} there is hope. Does it help us to obtain division by zero, the Holy Grail of the Dudes? Try this.
1^^^ = 0^ = 1/0 = ψ  (our utopian unit psi)

0.. {#ψ} = 0*ψ = 0 {0:ψ 1:0} = 1.. {1#ψ 1:0} = ψ {1:0}
 = 0^1*0^ = 0^1 {0_{*}} = 0^0 {0_{^}} ~ 0^0 = 0/0 = 1.. {1#n<ω} ~ 1
§2.5.3. Bracketology
The common attitude towards brackets is that bracketing doesn't cost anything.
Traditionally brackets are a writing tool, a short cut,
without additional meaning for the algorithm that employs them.
But let's study them closer.
The prime function of brackets is to help us insert a subexpression
that isn't yet reduced to number, but that is in essence (and will come to be)
a number variable. In fact we have two expressions
of which the new subexpression should be resolved first,
to replace a variable in the old expression with a number.
The following examples show that for some purposes either double brackets ((X)) are necessary, or we cannot allow instant addition x`y`z = x+y+z (without + operator).
Whether with star operators or inside a function, an unwanted overlap occurs in the evaluation of these convoluted expressions.
(a*b)c**d(e**f) ≡ (a*b+c)^(d+e^f)
`a,b`c,d`e,f` ≡ G(a, b+G(c,d)+e, f)
            ≡ G(G(a,b)+c, d+G(e,f))
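To see that the overlap is a genuine ambiguity, take a toy stand-in for G (right-nested exponentiation – purely an assumption for this demonstration) and evaluate the two readings of `a,b`c,d`e,f` with every variable set to 2:

```python
from functools import reduce

def G(*args):
    """Toy stand-in for the book's G: right-nested exponentiation."""
    return reduce(lambda acc, x: x ** acc, reversed(args))

a = b = c = d = e = f = 2
reading1 = G(a, b + G(c, d) + e, f)     # G(2, 8, 2) = 2**(8**2) = 2**64
reading2 = G(G(a, b) + c, d + G(e, f))  # G(6, 6)    = 6**6      = 46656
print(reading1, reading2, reading1 == reading2)
```

The two parses differ by many orders of magnitude, so without double brackets or an explicit + the notation would not be well defined.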
Indeed, for calculations with minority star operators it is wise to use double brackets, although in a typical reduction train conflicts like this wouldn't occur. The definition of functions for Big numbers can perfectly well be done with minibrackets `X`, because the added value of allowing expressions such as in the example isn't much. Of course, the resolution of the notation system would improve by adding an extra character that renders these expressions unambiguous, but the capacity doesn't suffer from excluding the potentially ambiguous operation of instant addition for all but the simplest settings.
So there is a tradeoff: if we count a bracket double or introduce an operator + we add an extra character to our algorithm. What we get back is a little resolution, while a whole lot of capacity can be gained from the right adaptation of a new algorithmic sign. Capacity, we know, creates mindbogglingly Big numbers.
Common expressions inside ordinary brackets (X) can always be evaluated on the spot, but subexpressions inside square brackets [X] which are nested in an expression must wait for the conditions to be right, in order to be resolved by a selective rule. We call such expressions waitors and their square brackets waitor brackets. When a reduction rule does not apply to [waitor] subexpressions in a particular parameter position, the evaluation by such a select rule [excluding the waitor] can be written with a special equal sign =:
Waitors that slow down (increase resolution) and speed up (increase capacity) our operatorials will be presented in part 2 of the book. They really make a difference. With the introduction of special rules for waitor subexpressions, the analysis of the algorithmic cost of gaining either resolution or capacity with brackets will become more important.
Radix systems, where a character sequence (word) always expresses a unique number, have superior resolution compared to algorithms involving a reduction train, where many different bracketed words will inevitably express the same number. We ask whether an algorithm can be devised whose input is written with r characters, that employs brackets for substitution, and that produces more numbers than a base r number system. With waitors the type of substitution applied depends upon the position of the bracketed inner expression as well as the state of the outer expression, so there's an awful lot of algorithmic variation possible here.
The answer is that when (any type of) brackets can only be introduced by application of the algorithmic evaluation rules, starting from an expression without brackets, then certainly radix systems cannot be beaten. But when brackets can be introduced arbitrarily, our conjecture is that all algorithms where number size keeps expanding faster with every next parameter value will eventually cover more numbers than a radix system.
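The question can at least be probed by brute force. The sketch below (a toy grammar of our own, not from the book) counts the distinct numbers expressible by fully bracketed words over {1, *, (, )} with unary numerals, * and **, and compares that with the 2^n words of a binary radix of the same length:

```python
def expressible(max_len):
    """vals[n] = set of numbers written with exactly n characters in the
    toy grammar  E -> 1..1 | (E*E) | (E**E)  (numerals in unary)."""
    vals = {n: set() for n in range(1, max_len + 1)}
    for n in range(1, max_len + 1):
        vals[n].add(n)                       # the unary numeral 1..1
        for la in range(1, n):
            for lb in range(1, n):
                for op_len, op in ((1, lambda a, b: a * b),
                                   (2, lambda a, b: a ** b)):
                    if la + lb + 2 + op_len == n:   # 2 chars for ( and )
                        vals[n].update(op(x, y)
                                       for x in vals[la] for y in vals[lb])
    return vals

vals = expressible(12)
for n in (6, 9, 12):
    covered = set().union(*(vals[k] for k in range(1, n + 1)))
    print(n, len(covered), 2 ** n)   # words of length <= n vs binary radix
```

At these tiny lengths the radix wins comfortably, in line with the claim above for rule-introduced brackets; the conjecture only concerns algorithms whose parameter growth eventually outruns the bookkeeping cost.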
We've worked out two interesting bracketed structures in the spirit of our operatorial functions – for single separator , algorithms with a row of parameters, and for multiple separator ,.. algorithms with a multidimensional array of parameters. The moment of transition from subbinary resolution to superbinary resolution is pivotal here.
§2.5.4. Experimental unbracketing
The lifespan of a pair of common fixed brackets is regarded as eternal: ( = (_{ω}
On the other hand some parts of an expression won't need bracketing: (_{0} = (.. {(#0}
With unbracketed evaluation the operands issued by intermediate reduction steps are allowed to drift away and attach to neighbouring operators before the original operation has been totally reduced to a natural number. After each partial reduction the whole expression should be evaluated again right associatively.
Most common is plain unbracketing (_{0} which directly removes fixed () brackets in a given formula.
Imagine a type of temporary brackets (_{s} that last for a variable number of reduction steps #s within the reduction train that carries them (steps from other operations, outside or inside these particular brackets, do not count). The last reduction included in the cascade of s is the step to an iterator value b=1. Brackets (_{r} with remaining steps r<1 are to be dropped from the expression before their contents have been evaluated, after which the usual precedence of the operators prevails. Brackets (_{r} where r≥1 stay fixed after the original reduction train has passed, until their contents are reduced to number (as usual).
Now let right/left associative minority/majority exponential star operators with (_{s} brackets count off single steps.

Rm/m: (_{s;}a**b1) = (_{s;}a*(_{s;}a**b)) = (_{s;}a*(_{s;}a*(_{s;}a**b))) == (_{r;}a*..(_{s;}a).) {#b#} where r = s(*b1)i
Lm/m: (_{s;}a××b1) = (_{s;}(_{s;}a××b)×a) == (_{r;}..(_{s;}a).×a) {#b#} where r = sb1+i
Put the step countdown s=0 so that after the applicable rule is processed, brackets (_{0} are dropped. In the above formula this will work out to two or three different results (and one irreducible expression, running to infinity). The first case Rmin of unbracketed minority stars *.. is overall slower than arrowheads ^.. and hereby it is determined that this unbracketing is less powerful than 1 extra operator counted by c.

(_{0} Rmin:
a*b1 > aa*b > (.a*2).. {(#b#*2)} = a*2^b
a**b1 > (a*a)**b > (.a**2).. {(#b#**2)} ~ 2^^b\^a
a*^{..}b1 {*#c1} ~ 2^^{..}b\^^{..}a {^#c1 ^#c} < a^^{..}(b+1) {^#c1 a>2}
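The first two Rmin trains are easy to simulate – a sketch under the reading that each unbracketed step lets the freed copy of a drift onto the item operand (doubling it for *, squaring it for **):

```python
def unbracketed_star(a, b1):
    """(_{0} Rmin for a*b1:  a*(b+1) > (a+a)*b > ... = a*2^b."""
    x, b = a, b1
    while b > 1:
        x, b = x + x, b - 1    # the split-off a drifts in and adds on
    return x

def unbracketed_star2(a, b1):
    """(_{0} Rmin for a**b1:  a**(b+1) > (a*a)**b > ... = a^(2^b)."""
    x, b = a, b1
    while b > 1:
        x, b = x * x, b - 1    # the split-off a drifts in and multiplies on
    return x

print(unbracketed_star(3, 4))   # 24 = 3*2^3
print(unbracketed_star2(3, 3))  # 81 = 3^(2^2)
```

With b1 = b+1 the first train performs b doublings, confirming a*2^b; the second performs b squarings, confirming a^(2^b).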
(_{0} Rmaj:
a^(b+1) = a*(a**b) == a*.. {a#b1}
a^^(b+1) = a**(a***b) == a**.. {a#b1}

(_{0} Lmin:
a××b1 > a××(b×a) ↑ ω

(_{0} Lmaj:
a××(b+1) = (a××b)*a == a*.. {a#b1} = a^(b+1)
a×××(b+1) = (a×××b)^a == a^(a^b)
Compare (_{0} Rmin with the unbracketing of a similar function Fa to match bigA's pluses in chapter 4.2.
When a right minority operation is unbracketed, the item operand a of the largest operator is expanded, so that the numbers produced become Bigger. This is shown below for exponentiation – with larger operators differences like these tend to increase absolutely, yet become algorithmically less and less significant.
 a**b1 = a*(a**b) < a*a**b = a**bb
 a*^{..}b {*#c} < a*^{..}a*^{..}b {*#c *#c} < a*^{..}bd {*#c c↑ω d↓0}
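The first inequality is easy to check numerically – a quick sanity sketch, reading a*a**b with minority stars as (a*a)**b = a**bb:

```python
# a**b1 = a*(a**b) = a^(b+1), while the expanded item gives (a*a)**b = a^(2b);
# for b >= 2 the expanded item always wins.
for a in range(2, 7):
    for b in range(2, 7):
        item_plain = a * a ** b        # a*(a**b), Python's ** binds first
        item_expanded = (a * a) ** b   # a*a**b under right minority stars
        assert item_plain < item_expanded
print("a**b1 < a*a**b confirmed for small a, b")
```

The gap a^(2b) / a^(b+1) = a^(b-1) grows absolutely, yet both sides stay on the same exponential level – the "algorithmically less significant" point made above.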
We hereby close this exercise of unbracketing subscripted brackets, which – in the form where it counts off every single step (as defined above) – is a rather tedious tool, similar to subscripted backslashing. However, with these we can express Big numbers that would otherwise be lost in the black hole of inexpressibility. Perhaps our surgical Big number tools could be put to better use when each pair of brackets subdivides a larger number of steps in one go. Such as when a whole level of iterators is counted down in some nested surge and then a feedback operation ripples through…
Outside precedence
In an operatorial context the reduction steps on the outside of an expression can be given precedence over those of special inner subexpressions. This distinction does not yet play an important role here, but it does for inner waitors. Strict outside precedence for nested functions is counterintuitive, because we like to evaluate the smaller inner expressions first – repetition of unresolved subexpressions looks ugly.
If the evaluation direction for functions were always right to left, such provisions for waitors would not be necessary, as waitors are positioned on the left. With exceptions for waitors, the problem looks artificial if we start clean with an expression without nested subexpressions – our usual algorithms won't offer any choice in the course of a reduction cascade. Expressions are either evaluated from the outside in, or a subexpression in a right iterator has to be reduced to number first in order to be able to decrement it.