Walter Tydecks

 

A commentary on Spencer-Brown's Laws of Form

Reentry
Modulation of a wave, designed by Spencer-Brown, LoF, 68

Version July 12th, 2019

 

 

Introduction

George Spencer-Brown (1923-2016) was English and came from Lincolnshire, the same county as Newton. From early on he worked, in parallel and with overlaps, on questions of mathematics, engineering and psychology: in World War II as a radio operator, communications engineer and hypno-pain therapist with the Royal Navy, 1950-51 at Trinity College Cambridge with Ludwig Wittgenstein (1889-1951), from 1960 with the mathematician and philosopher Bertrand Russell (1872-1970), in the field of psychotherapy and child education with Ronald D. Laing (1927-1987), who was his therapist and whom he taught mathematics. He was also a writer, songwriter, chess player, game inventor and glider pilot (with two world records), a quite unusual personality. 1959-61 Chief Logic Designer at Mullard Equipment, 1963-64 advisor at British Rail, 1963-68 projects at mathematical institutes in England, 1968-69 full-time employment as a psychotherapist using hypnosis techniques, later only part-time. From 1976 temporary professorships at universities in Australia and the USA, 1977-78 consulting for Xerox. In later years he became impoverished and withdrew, but was able to live on the property of the Marquess of Bath, a former student of his. – Laws of Form was published in 1969 thanks to the support of Bertrand Russell. A positive review by Heinz von Foerster (1911-2002) attracted some attention, but the book has remained a marginal phenomenon in mathematics, computer science and philosophy to this day. In Germany, the sociologist Niklas Luhmann (1927-1998) took up Spencer-Brown's ideas and saw in them a new approach to systems theory which, in his view, enabled a liberation from traditional philosophy. A first introduction to his logic can be found at Wikipedia. Louis Kauffman (* 1945), University of Illinois at Chicago, has developed the approach further. 
There are references to computer science (von Foerster), cybernetics (Norbert Wiener), antipsychiatry (Laing), anthropology (Bateson), psychology (Piaget), communication science (Watzlawick), systems science (Luhmann, Baecker) and self-referential systems (Maturana, Varela). – Spencer-Brown is said to have been rather difficult to deal with personally, which probably has something to do with the fact that – rightly, I think – he saw his works on a par with those of Aristotle and Frege and was disappointed by the lack of recognition.

Accordingly, there is little secondary literature. Spencer-Brown's texts are composed extraordinarily compactly. The Laws of Form comprise 77 pages in the main text, supplemented by some prefaces and self-commentaries. In 1973 a conference took place at the Esalen Institute on the Californian Pacific coast, attended by Spencer-Brown, Bateson, von Foerster and others (American University of Masters Conference, abbreviated AUM Conference). This already completes the list of source texts. Dirk Baecker sent me an unpublished typescript An Introduction to Reductors written by Spencer-Brown in 1992. In Germany, following Luhmann, there are circles at the anthroposophically influenced private university Witten/Herdecke in the Ruhr area, in Heidelberg and in Munich that work regularly on Spencer-Brown. As secondary literature, I have therefore used Luhmann's complex chapter on observation from his 1990 work Wissenschaft der Gesellschaft (Science as a Social System), in which he presents his position on Spencer-Brown, and, building on this, an anthology Kalkül der Form (Calculus of Form), edited by his student Dirk Baecker in 1993, which also contains Heinz von Foerster's review, as well as Luhmann's 1995 Vienna lecture Die neuzeitlichen Wissenschaften und die Phänomenologie (Modern Sciences and Phenomenology), in which he explains his philosophical position on Husserl and Spencer-Brown. In 2016, Claus-Artur Scheier, in Luhmanns Schatten (Luhmann's Shadow), linked his approach to the newer French philosophy and formalized it in his own way. In Heidelberg a short introduction to the logic of Spencer-Brown by Felix Lau was published in 2005, and in Munich an extended introduction by Tatjana Schönwälder-Kuntze, Katrin Wille and Thomas Hölscher, first published in 2004, appeared in a second edition in 2009. They are associated with Matthias Varga von Kibéd (* 1950), who studied with the logician Ulrich Blau (* 1940). 
Blau has written standard works on new directions in logic, in which, however, he does not mention Spencer-Brown. Varga von Kibéd has turned to Far Eastern traditions and has become active in founding, introducing and marketing new methods of systemic structural constellations, which refer to Spencer-Brown.

In summary, Spencer-Brown has mainly influenced the theory of self-referential systems and new methods of systemic therapy, but is largely ignored and unknown in traditional philosophy, mathematics and computer science.

For me, too, this contribution is a first beginning and introduction, and it can only attempt to approach the further philosophical questions. However, I suspect that his idea contains a potential that is far from exhausted.

Motives

In his highly self-reflexive work, Spencer-Brown stresses »There can be no distinction without motive« (LoF, 1). This seems to me to be the best approach to this unusual work. In order to understand his thoughts, it is hardly possible to rely on what has been learned and practiced in mathematics, logic or philosophy so far. On the contrary, he encourages us to unlearn much of it and to become aware again of the deeper motives which originally underlie our enthusiasm for mathematics and which have largely been lost.

– Liar paradox and negative self-reference

The first motive is surely the concern with the liar paradox ›This sentence is wrong‹. With it, the logical foundations of mathematics and thus of all natural science were fundamentally called into question. This caused a deep sense of uncertainty that is hardly imaginable today, since we have become »accustomed« to it. At the same time, with the decline of all traditional religious and mythical beliefs, mathematics and natural science had become the only force promising identity and security. If their foundations are shaken too, every hold is in danger of being lost. Robert Musil wrote about this marvellously in his novel Die Verwirrungen des Zöglings Törleß (The Confusions of Young Törless), published in 1906. Anyone who studied mathematics between 1900 and 1980 was almost inevitably confronted with Russell's paradox and plunged into a crisis comparable to the skeptical challenge to philosophy. Only the long-lasting economic upswing in the Western countries after 1945 was able, at least for a while, to push all concerns and questions of this kind into the background and to establish bare pragmatism and ordinary materialism as new values. What danger could such a paradox pose when the applications of the very same science all around are so successful? In such times, Spencer-Brown's questions have little chance of resonating, since they threaten to stir up the questions that had just been laboriously swept under the carpet.

The liar paradox is the most famous example of negative self-reference: the sentence refers to itself, and it negates itself. This leads to the paradox: If this statement is right, it is wrong, and if it is wrong, it is right.

This paradox can be given many different forms; Ulrich Blau has pursued them into their outermost ramifications, with the result that nothing in the usual logic leads out of this paradox.
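The structure of negative self-reference can also be written as a fixed-point equation. The following sketch is my own illustration, not Spencer-Brown's notation: the liar is the equation x = not x, which no Boolean value satisfies, and whose iteration oscillates in time, anticipating the temporal reading that re-entry later makes explicit.

```python
# Illustrative sketch (my own, not from LoF): the liar paradox as the
# fixed-point equation x = not x over the Boolean values.

def liar(x: bool) -> bool:
    """The self-negating sentence: 'this sentence is wrong'."""
    return not x

# No Boolean value is a fixed point: if it is right it is wrong, and vice versa.
assert all(liar(x) != x for x in (True, False))

# Iterating the equation instead yields an oscillation in time.
x, history = True, []
for _ in range(4):
    x = liar(x)
    history.append(x)
print(history)  # [False, True, False, True]
```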

The sentence is by no means just a logical gimmick that every teenager between the ages of 12 and 16 goes through. In its original formulation, »a Cretan says that all Cretans lie«, it is of mythological significance, dating from a time when the Cretans had to cope with the fear of having killed God on their island (the tomb of Zeus was said to lie on Crete). In philosophy Hegel gave it systematic significance, and it is considered the decisive characteristic of the dialectical method.

Bertrand Russell was initially a Hegelian, but could not be satisfied with Hegel's speculative sentences on contradiction. For him, there had to be a formal solution that could withstand the demands of traditional logic. He succeeded only negatively, with the prohibition of self-referential sentences, and he was therefore enthusiastic when Spencer-Brown was able to present a completely new formal solution.

– Imaginary numbers

Spencer-Brown was looking for a way out based on the example of imaginary numbers. The imaginary numbers are still a strange, mysterious foreign body within mathematics. Even the name ‘imaginary’ is completely untypical of the usual mathematical thinking. Are these numbers imaginary like a dream, or are they mere images, virtual objects of thought, born out of the imagery (fantasy) of the soul? What distinguishes a mathematical imago from a psychological imago? Questions of this kind certainly moved Spencer-Brown a great deal after his therapy with Laing.

The expression ‘imaginary number’ goes back to Descartes and Euler. For mathematicians it is a consistent formalism to extend the space of numbers, and for engineers and physicists it is an extremely helpful tool to simplify complex equations, even if they can usually hardly answer the question of why this is possible with imaginary numbers. As a communications engineer, Spencer-Brown was undoubtedly familiar with this meaning and had learned how engineers and scientists simply calculate with such numbers, because it works well and proves itself without asking about the mathematical basics.

In the preface to the first American edition of Laws of Form in 1972, he saw an amazing relationship between imaginary numbers and the question of Russell's paradox. Russell and Whitehead wanted to exclude the threat of paradox in type theory by prohibiting self-referential elements. Spencer-Brown considered this a mistake. »Mistakenly, as it now turns out.« (LoF, ix) Spencer-Brown reports that he met Russell in 1967 and showed him how to work without this exclusion. »The Theory was, he said, the most arbitrary thing he and Whitehead had ever had to do, not really a theory but a stopgap, and he was glad to have lived long enough to see the matter resolved.« (LoF, x)

In order to define a negative self-reference within arithmetic, an equation must be established in which a variable x appears as its own reciprocal value and is negated with the minus sign:

(1)    x = −1/x

With this equation the negative self-reference is formalized: The reciprocal stands for the self-reference, the minus sign for the negation.

At first glance, this equation looks just as harmless as the sentence ›this sentence is wrong‹ does to someone hearing it for the first time. Everybody takes it at first for an ordinary statement about another sentence (one says »the sky is green« and another answers »this sentence is wrong«) until its inner explosive force is seen through once it refers to itself. So formula (1) does not look unusual at first sight. The paradox it contains becomes visible when x is set to 1 or −1:

(2)    1 = −1/1 = −1    and    −1 = −1/(−1) = 1

1 and −1 are the simplest possible integers, and they show that there is no integer solution to this formula. With it, Spencer-Brown has recreated the paradox of negative self-reference within arithmetic. (Scientists trained in traditional logic and analytical philosophy, however, deny that it is permissible to transform Russell's paradox, originally formulated in set theory, in this way and to find an astonishing solution. This was my experience at the Wittgenstein Symposium 2018 in Kirchberg am Wechsel.)

Whitehead and Russell saw no way out other than to assign questions of this kind an indefinite third truth value, ‘meaningless’, and thereby exclude them. While it is not possible to operate logically and mathematically with this truth value, a solution can be found with the imaginary numbers, which allow calculating with an additional value. Like the y-axis, the imaginary numbers stand perpendicular to the number axis. Their unit is the newly introduced number i. Both i and −i are solutions of equation (1).

(3)    i = −1/i    and    −i = −1/(−i)

That is what Spencer-Brown is after. He asks whether, following the example of imaginary numbers in arithmetic, imaginary values can also be introduced into logic, with far-reaching consequences for logical thinking.
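Numerically, the situation can be checked directly with complex numbers. The following sketch is my own illustration (using Python's built-in complex type, not Spencer-Brown's notation): it confirms that i and −i satisfy x = −1/x, while the simplest integers reproduce the paradox.

```python
# Illustrative check (my own sketch): solutions of x = -1/x.
# In Python, 1j denotes the imaginary unit i.

def satisfies(x: complex) -> bool:
    """Does x solve the self-referential equation x = -1/x?"""
    return x == -1 / x

# The simplest integers fail: substituting 1 or -1 yields a contradiction.
assert not satisfies(1)    # 1 = -1/1 = -1 is false
assert not satisfies(-1)   # -1 = -1/(-1) = 1 is false

# The imaginary units succeed: both i and -i solve the equation.
assert satisfies(1j)
assert satisfies(-1j)
```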

»What we do in Chapter 11 is extend the concept of Boolean algebras, which means that a valid argument may contain not just three classes of statement, but four: true, false, meaningless, and imaginary.« (LoF, xi)

– Traditional logic is both too simple and too complex.

These examples already show the new approach to logic by Spencer-Brown. He was completely dissatisfied with the traditional logic. For him it is banal, not to say childish, and at the same time unnecessarily complicated.

(1) On the one hand, traditional logic is too simple, almost childish.

Anyone who has ever read logic textbooks with their examples understands what Spencer-Brown means. A typical logical task is: »Which of the following assertions is true: Some apples are bananas. All apples are bananas. Some bananas are not apples. Some bananas are apples.« The correct solution is hidden in the sentence »Some bananas are not apples.« No one realizes that immediately, because no one would say so in everyday life; everyone would spontaneously understand that sentence to imply that there are some bananas that are apples. But formally this sentence is correct. It is intended to teach an abstract understanding of logic that is independent of a prior understanding of content and a feeling for language.

Is logic nothing more than the search for funny brain teasers that everyone has to smile about? This can be expanded at will: Is a sentence like »the number 2 is yellow« true or false, and what can be deduced from it? What is to be made of the conclusion: »If John has no children, both the statement ‘All John's children sleep’ and the contrary statement ‘None of John's children sleep’ are true«? Does this refute the principle of contradiction? Analytical philosophy continues to think up new examples according to this pattern and has been struggling with questions of this kind for decades, although Herbert Marcuse (1898-1979) had already mocked this way of thinking in the 1960s (One-Dimensional Man, 1964) and basically saw in it a certain kind of education for stupidity, which unfortunately dominates almost all Western philosophical faculties, at which philosophers of this direction are preferentially employed. Obviously, the universities are to train on a large scale people, and especially teachers, who can teach in schools with this kind of logic.

Because of his training in logic, Spencer-Brown was given a job programming circuits, and he quickly noticed:

»The logic questions in university degree papers were childishly easy compared with the questions I had to answer, and answer rightly, in engineering. We had to devise machinery which not only involved translation into logic sentences with as many as two hundred variables and a thousand logical constants--AND's, OR's, IMPLIES, etc.--not only had to do this, but also had to do them in a way that would be as simple as possible to make them economically possible to construct--and furthermore, since in many cases lives depended upon our getting it right, we had to be sure that we did get it right.« (AUM, Session One)

As a result, although he and the other developers could generally rely on their logical thinking, when they started programming they very quickly gave up designing and checking their programs systematically according to the rules of learned logic; they were satisfied when they saw that the circuits they had programmed worked, without being able to explain or prove it logically. In my experience, this is how most programmers work, and yet it leaves a feeling of unease that there must be another kind of logic, one that comes closer to the practice of programming and is more helpful. (Who has not, as a programmer, struggled with methods such as the logically structured program flowchart and dropped them at some point, because the effort of replicating program changes in methods of this kind is greater than programming and testing? With complex programs, logical methods of this kind are more complicated than the program text itself; in practice, a simple pseudo code has become established for documentation, which represents the algorithms in an easy-to-read language.)

(2) At the same time, logic is far too complex.

At the beginning of every logic and mathematics lecture, a lot of effort is put into introducing special symbols, although everyone understands them intuitively: dedicated symbols for constants and variables, logical operators like ∧, ∨, ¬ and quantifiers like ∀ and ∃. The various professors and textbooks outdo each other in introducing the necessary conceptual apparatus as abstractly as possible. One example is the lecture Finite Model Theory by Geschke (online in the German original):

»1.1. Structures. A vocabulary σ is a finite set consisting of relation symbols P, Q, R, …, function symbols f, g, h, … and constant symbols c, d, … Each relation and function symbol carries a natural number, its arity (Stelligkeit).
   Fix a vocabulary σ. A structure for σ (a σ-structure) is a set A together with
   (S1) relations R^A ⊆ A^n for each n-ary relation symbol R ∈ σ,
   (S2) functions f^A : A^m → A for each m-ary function symbol f ∈ σ and
   (S3) constants c^A ∈ A for each constant symbol c ∈ σ.
   Often one identifies a structure 𝔄 with its underlying set A, so one writes A instead of 𝔄.« (Geschke, 1, my translation)

Anyone who has studied mathematics and logic understands why a lecture is introduced in this way, and yet everyone has the feeling that something is wrong with this way of approaching things. In the end, the attempt is made to derive everything from a single doctrine. That does not succeed. Instead, the view seems to have prevailed today that set theory, model theory and the theory of formal languages stand more or less equally at the beginning. Set theory cannot do without a formal language, formal language uses set theory, and each theory can be understood as a model, which in turn requires a formal language and set-theoretical operations, as can be seen in this example. The result is the impression that it has not been possible to trace the different doctrines back to their simplest elements, and that the great abstraction and the complex technical apparatus are basically an escape from this question.
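However abstract the wording, the content of the quoted definition is simple. A small sketch (my own hypothetical encoding, not from Geschke's lecture) represents a σ-structure as plain data: here a finite universe with one order relation, one successor function and one constant.

```python
# Hypothetical sketch (my illustration) of a sigma-structure as plain data.
# Vocabulary sigma: one binary relation '<', one unary function 'succ',
# one constant 'zero'. The structure interprets them over a finite universe.

universe = {0, 1, 2, 3, 4, 5}

structure = {
    # (S1) an n-ary relation is a set of n-tuples over the universe
    "<": {(a, b) for a in universe for b in universe if a < b},
    # (S2) an m-ary function maps m-tuples into the universe
    "succ": {(a,): (a + 1) % 6 for a in universe},
    # (S3) a constant is simply a distinguished element of the universe
    "zero": 0,
}

assert (2, 4) in structure["<"]       # 2 < 4 holds in the structure
assert structure["succ"][(5,)] == 0   # the successor wraps around the finite universe
assert structure["zero"] in universe
```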

For Spencer-Brown, this whole tendency is on the wrong path. His first basic decision is therefore to make a substantial reduction and thus open up an area that can be described as proto-logic or proto-mathematics (primary logic, primary arithmetic). He understands his approach as »perfect continence« (LoF, 1), a moderation and self-control in clarifying the logical foundations.

– Logic and Time

Classical logic is independent of time. Only statements that are timeless in themselves are made and linked with it. A fact or an observation always remains, even in sentences that refer to a certain point in time. A sentence like »On 24th February 2017 the first crocuses of the year 2017 blossomed in Bensheim« is still valid even if the crocuses have long since faded again. It is valid even if it proves to be a deception, because this sentence in itself is a fact of my thinking. Wittgenstein grounded the Tractatus logico-philosophicus on this in 1921: »1 The world is everything that is the case. […] 1.13 The facts in logical space are the world.« For him, the world is the totality of all protocol propositions. Spencer-Brown witnessed in 1950-51 how Wittgenstein wanted to go beyond this approach. It is too little to base logic on propositions of this kind and their connections.

Now logic has not stood still. In 1954 Paul Lorenzen (1915-1994) developed an operative modal logic with which the differences between possibility, reality and necessity are to be logically captured and formalized. Its technical apparatus is adapted as far as possible to traditional logic. This direction has taken a certain upswing in connection with quantum logic. – Yet even modal logic still lacks the consideration of time. At the same time, in the 1950s, attempts were made to solve this with a temporal logic. If one takes a look at the results of both modal logic and temporal logic, one will understand why this is not what Spencer-Brown was looking for. He took a completely different approach, starting from the practical experience that in programming the equals sign is understood context-dependently, as an assignment or as a declaration.

Elena Esposito has pointed out this aspect (Esposito in Baecker, Kalkül der Form, 108-110). In programming, it is neither a contradiction nor a paradox to write:

(4a)    i = i + 1
(4b)    a = f(a)

In the programming line (4a), the character i does not stand for a variable as in a mathematical equation, for which an arbitrary number can be substituted that then has to be kept throughout the course of the calculation. Instead, the character i stands for a storage location whose content can change during the execution of the program. In both assignments, the left-hand side results later than the arithmetic operation on the right-hand side. The assignment ›i = i + 1‹ is one of the most frequently used programming lines: in a loop it counts, in the course of a temporal repetition, how often the loop has been passed through. – The statement ›a = f(a)‹ is likewise a temporal operation: it reads the current value stored at location a, recalculates it with a function f, and then assigns the result back to storage location a. The a on the left-hand side therefore usually has a different value from the a on the right-hand side. If, for example, in an asset management system the value of an asset is held at storage location a and recalculated with the function f, the content of storage location a changes as a result: the »old a« stands on the right-hand side, and the »new a« calculated with the function f stands on the left-hand side. This difficulty of understanding arises because programming uses an abbreviation: the character a stands both for the storage location and for the value stored at this location.
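In executable form, both assignments can be sketched as follows (my illustration; the function f here is a hypothetical revaluation standing in for the asset-management example):

```python
# Illustrative sketch of the two temporal assignments (4a) and (4b).

# (4a) i = i + 1: i names a storage location; the right-hand side reads
# its old content, and the left-hand side receives the new content afterwards.
i = 0
for _ in range(3):
    i = i + 1          # old i on the right, new i on the left
print(i)  # 3 -- the loop was passed through three times

# (4b) a = f(a): the current value at location a is read, recalculated
# with f, and the result is written back to the same location.
def f(a: float) -> float:
    """Hypothetical revaluation, e.g. crediting a fixed interest amount."""
    return a + 2.0

a = 100.0
a = f(a)               # the 'old a' stands on the right, the 'new a' on the left
print(a)  # 102.0
```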

To explain what happens in programming and to find a suitable logic for it is, in my opinion, one of Spencer-Brown's strongest motives. That his work has so far been almost completely ignored by computer science only shows to what extent computer science is incapable of making something like a transcendental turn out of its own doing, as Kant did within philosophy, and which, as I read him, Spencer-Brown sought for programming. Especially for a mathematician and computer scientist, Spencer-Brown's book is not easy to read. Although he is more familiar than others with formal considerations of this kind, he must free himself from entrenched prejudices. He must – as Spencer-Brown said – unlearn a lot, and approach the unlearning step by step, in order not to get into too much confusion at the very first step.

Spencer-Brown answered a question from John Lilly (1915-2001; neurophysiologist, dolphin research, study of drugs such as LSD and their effects, influence on the New Age):

»LILLY: Have you formulated or recommended an order of unlearning?
SPENCER BROWN: I can't remember having done so. I think that, having considered the question, the order of unlearning is different for each person, because what we unlearn first is what we learned last. I guess that's the order of unlearning. If you dig too deep too soon you will have a catastrophe; because if you unlearn something really important, in the sense of deeply imported in you, without first unlearning the more superficial importation, then you undermine the whole structure of your personality, which will collapse. Therefore, you proceed by stages, the last learned is the first unlearned, and this way you could proceed safely.« (AUM, first session)

– The influence of the observer

Kant's transcendental turn is a new kind of self-observation with which thinking examines its own actions. Kant was aware that he was following a turn that had already occurred in astronomy with the Copernican revolution, when it had become clear how the world view depends on the respective position of the observer. The world presents itself differently when it is no longer seen in thought from the earth but from the sun. Materially everything remains the same, but the movements of the sun, the planets and the moon are mathematically much easier to bring into equations of motion. They reveal patterns (Kepler's laws) which cannot be seen from the earth with the naked eye.

With Einstein, the speed at which an observer moves plays a role in how he perceives other movements. In quantum mechanics it is feared that nothing can be measured as it is, because it is changed by the measurement, i.e. the measuring process interferes with what is measured.

The observer is of even greater importance for psychology and sociology, and it is not without reason that Spencer-Brown's teachings have been taken up there. Kant considered a scientific psychology to be impossible. For him it was questionable whether chemistry could ever become a science with mathematical methods, but for psychology he considered that completely impossible. Every thinking being will notice when it is observed and react accordingly. Therefore, the observer can never be sure whether the other behaves as he does only because he has attuned himself to the observer. Psychoanalysis has partially taken this into account, since it is aware of the phenomenon of transference (Übertragung) recognized by Freud, in which a patient unnoticed adapts to the analyst's expectations, and the analyst unconsciously behaves as the patient expects of him (countertransference, Gegenübertragung).

»But not even as a systematic art of analysis, or experimental doctrine, can it (psychology, t.) ever approach chemistry, because in it the manifold of internal observation is only separated in thought, but cannot be kept separate and be connected again at pleasure; still less is another thinking subject amenable to investigations of this kind, and even the observation itself, alters and distorts the state of the object observed. It can never therefore be anything more than an historical, and as such, as far as possible systematic natural doctrine of the internal sense, i.e. a natural description of the soul, but not a science of the soul, nor even a psychological experimental doctrine.« (Kant The Metaphysical Foundations of Natural Science, Preface, MAN, AA 04:471.22-32, translated by Ernest Belfort Bax)

Has this been confirmed for Spencer-Brown in his own psychotherapeutic experience? Contrary to Kant's conviction, can patterns be recognized in the behavior of observer and observed in psychological practice that go beyond the respective personality? I do not know whether and how intensively Spencer-Brown was acquainted with the teachings of Husserl, Heidegger, Sartre and others who have investigated this question. Luhmann and Scheier understand the logic of Spencer-Brown from this context.

Here the question of re-entry takes on a new meaning. In psychological and sociological practice, re-entry is not simply a possibly purely mechanical resonance or feedback that, as in self-referential systems, takes place almost automatically; rather, the observer receives feedback from the observed on his behavior and reacts to it.

– Process and result

When Spencer-Brown emphasizes pragmatic action from his professional experience, he questions the traditional relationship between process and result. Since traditional logic knows only timeless statements that are always valid and never change, it has not examined the logic of processes. For Aristotle, logic and physics were clearly separated. He systematically examined the timeless statements in his two works on the Analytics (Analytica protera and Analytiká hystera), whereas in physics, as the doctrine of ephemeral things and of beings subject to coming to be and passing away (the title of another text by Aristotle, Peri geneseôs kai phthoras), time, movement and change occur. In his Physics he introduced with energeia and entelecheia new concepts that apply only to the course of processes, consciously leave the framework of the Analytics and come close to a new logical understanding. (They have since become basic concepts in physics and biology as energy and entelechy. Heidegger described them as existentials (Existentialien), but considered it impossible to find a formalization comparable to traditional logic.)

Spencer-Brown wants to base logic on concepts in which process and result are not yet clearly separated. He shares this concern with Hegel, who systematically used such concepts. 'Beginning': this can be both the process of beginning and the result, that which was begun. Similarly determination, development, measurement etc.: a measurement can be the process of measuring as well as the value that was measured; determination can be the process of determining as well as the result that something is determined thus and not otherwise.

Spencer-Brown founds his logic on similar terms with distinction and indication. These can denote the process of naming and distinguishing as well as the result, when a distinction is made and a name is given: the process of distinguishing something and, in the result, the sign (mark) and the value that the distinction gives to the something.

If one begins with concepts in which the distinction between process and result is deliberately kept open, then there must be a clearly recognizable and comprehensible way within logic to introduce time and movement and to separate this openness into its two components. I see this motive both in the distinction between primary arithmetic and primary algebra and in the introduction of re-entry and the time given with re-entry.

– The signs and their medium

If it is already unusual for a mathematician to accept, on the one hand, the changed meaning of variables in the temporal course of assignment in programming and in the logic of Spencer-Brown and, on the other hand, the timelessly conceived mathematical equations, it requires an even more radical rethinking to regard the relationship between sign and medium as a reciprocal process. It is almost absurd for a mathematician to assume that the medium in which the mathematical signs are inscribed could influence the signs and their operations. Basically, every mathematician sees the signs of his formulas completely independently of the medium in which they are represented, preferably as objects in an entirely immaterial, purely intellectual space. If he needs a blackboard, a sheet of paper or a screen to write down his formulas, these are only tools without influence on the content of calculation and proof. It is possible for a formula to become blurred on a poor background and therefore be misread, with misleading results. But this is nothing more than a disruptive factor that can ideally be excluded. The statements of mathematical propositions hold in principle independently of the medium in which they are written.

This attitude was shaken in 1948 by the work of Claude Shannon (1916-2001) on the mathematical foundations of information theory. Shannon was, like Spencer-Brown, a mathematician and electrical engineer. In his study of data transmission, he demonstrated how every medium generates background noise that interferes with the transmitted signs. To this day, mathematics has not perceived, or has not wanted to perceive, the elementary consequences of this for mathematics and logic. To this day, mathematics is regarded as a doctrine that is independent of the medium in which it is written and through which it is transmitted. Nobody can imagine that the medium could have an influence on the signs and their statements. Mathematics is regarded as a doctrine that is developed in a basically motionless mind.

For Spencer-Brown, this relativizes itself. The design of circuits added a fundamentally new experience for him that goes far beyond mere programming. At first glance, schematics are nothing more than a graphic, descriptive language of formulas whose logical status should resemble the signs of mathematics and ultimately correspond to them. But for him, the circuits that he designed and worked with showed that each circuit contains its own dynamics that support the statements and results designed with it. It is possible to describe in words and formulas which input a circuit processes and to which output it leads, but it is not possible to explain fully how these results come about. Obviously, the circuit contains a kind of self-organization that influences the result by itself. Whoever designs a circuit has a clear target in mind and can design and realize the circuit, but he cannot completely predict what will happen in it. On the contrary, he relies on the circuit itself to stabilise, giving the result the expected certainty. Even if, on closer examination, many of the processes that have led to the desired result in the circuit can be explained, a residue always remains. This property does not result from the graphic form and its formal elements, but from the medium in which the circuit is realized. Spencer-Brown compares it with chemistry: chemical formulas can never describe completely what happens in a chemical reaction. The real chemical process has its own dynamics that can always lead to surprises despite all precautions. From today's perspective, cell biology can be cited as another example: it has been shown that even the complete decoding of DNA does not permit a comprehensive prediction of the processes in a cell.
All these examples are texts or text-like forms (the schematic diagram, the chemical reaction equations, the DNA code) which are fundamentally incomplete and cannot completely represent the medium in which they are realized. Should this experience, in a kind of limit process, also be transferred to mathematics and logic, or at least be taken into account there?

This leads Spencer-Brown to the fundamental question of the relationship between the signs and the medium. The result is certainly not that the mathematics known today becomes »wrong« because it has overlooked its medium and the interaction with it. Rather, the result is that mathematics must be understood against the horizon of an overarching doctrine of medium and sign, from which the special status of mathematics, and the conditions under which it applies in the way we know it, can be understood.

For me, this is the most difficult motif of Spencer-Brown to understand and at the same time the one with the most serious consequences. If a logic is designed that self-reflectively understands its own medium, in which it is founded, inscribed and transmitted, then this can lead to the design of a Logic of medial Modernity that does justice to man's constitution today. Scheier, in Luhmanns Schatten, in particular pointed out the importance of the medium for understanding our culture. Spencer-Brown can help to work this out further.

– Neural Networks

While the paradoxes and antinomies of Russell and Gödel are intensively discussed in philosophy, the development of a new logic, which was based on an upswing in neurophysiological research and led to neural circuits, remained largely unnoticed. Only with spectacular successes such as the Go-playing program AlphaGo of 2015-16 did this change abruptly. Programs like these are the result of more than 100 years of development. I see Spencer-Brown in this tradition too.

Warren McCulloch (1898-1969) played a key role. He studied philosophy, psychology and medicine, worked primarily as a neurophysiologist and saw himself both in the tradition of American philosophy, which has always been practically oriented and pragmatically minded, and in the neurophysiology that emerged around 1900. Of American philosophy, he names Charles Sanders Peirce (1839-1914) and Josiah Willard Gibbs (1839-1903) in particular, then, building on them, the discussion during the 1940s with Norbert Wiener (1894-1964), Arturo Rosenblueth (1900-1970), John von Neumann (1903-1957) and Walter Pitts (1923-1969). Most of them certainly saw themselves more as specialists than philosophers, because they no longer expected any stimulating impulses from the philosophy of their time. Many of them deliberately remained outside the academic field or were not accepted there (e.g. Norbert Wiener). If, nevertheless, deep philosophical questions and conversations did arise and were discussed, it is mainly thanks to McCulloch.

For decades McCulloch had been searching, with ever new approaches, for a new logic that went beyond the tradition of Aristotle to Frege and Russell. In 1920 he wanted to extend the copula 'is', which appears in all classical statements of the pattern ›S is P‹, into three groups of verbs: (i) verbs that lead from an object to a subject and come from the past, while the copula 'is' occurs only in a timeless present; (ii) verbs that lead from a subject to an object and thus into the future; (iii) verbs that describe an ongoing state. But this attempt did not lead to success. This was followed by efforts to search for the smallest units of psychic thought, in a similar way as physics had done with electrons, photons and other physical elements: he called them psychons. They were to be connected differently than the traditional objects and relationships of logic, in order to represent psychic events and the diseases examined by psychiatry, and to find therapies. This, too, did not lead to the expected results. The turning point came when he became familiar with the work of neurophysiology and took an active part in it as a researcher. From the very beginning, he saw the possibility of giving Kant's transcendental logic a new ground with it (the network of neurons from the sense organs to the brain takes the place of transcendental apperception and explains how all sensory stimuli are pre-formed before thinking in ideas and concepts begins). At the end of his life he published an anthology of important works with the programmatic title Embodiments of Mind. From 1946 to 1953 he was a leading participant in the Macy conferences. Many ideas had arisen during the Second World War within the framework of the Office of Strategic Services (OSS) founded in 1941 by US President Roosevelt, in which Marcuse, Bateson, Sweezy and other representatives of the later New Left also participated.
After 1945, the OSS gave rise to the CIA on the one hand, and on the other, many participants sought to continue their work within the framework of a civilian application. They were convinced that they were at the beginning of a completely new development.

Already in 1931 McCulloch had heard about the new findings of Kurt Gödel (1906-1978) and from the beginning he dealt with the results of Alan Turing (1912-1954), which were published since 1937. That was the beginning of a completely new kind of logic, and it is undoubtedly McCulloch's special achievement to have linked the diversity of these currents together.

His breakthrough came with neurophysiology. It began with the physician Ramón y Cajal (1852-1934), who drew the first diagrams of sensory organs and the nerve pathways emanating from them, obtained from thorough studies of the nervous system and dissections of frogs and other animals. The physician, psychiatrist and psychoanalyst Lawrence Kubie (1896-1973) used their example in 1930-41 to describe circuits (closed reverberating circuits) for the first time in order to understand memory.

For me, two results are particularly important with regard to Spencer-Brown. Two principles were recognized in the example of the nerve tracts from the frog's eye to the frog's brain which have an indirect effect on the Laws of Form.

Principle of Additivity: Only when a sufficient number of neural pathways report an input is it passed on to the brain. Individual, isolated inputs, on the other hand, are ignored. If the selection does not work, the brain is flooded and overtaxed. This is one cause of epilepsy.

[Figures: McCulloch's circuits for additivity and memory]

Memory: In addition, a circulating loop is set up when a threshold is exceeded. In it, information runs in a circle, and each return recalls that there was originally a stimulus that triggered this cycle. The activated circle remembers that something has happened. It serves as additional input in the nervous system: it not only indicates that enough neural pathways have been stimulated by the same event, but also that this event has already taken place in the past. This is the technical basis and realization of a learning system.
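The two principles can be sketched in a few lines of Python. This is a minimal illustration, not McCulloch's original formalism; all names, thresholds and parameters are my own choices.

```python
# Sketch of the two principles: a threshold unit (additivity) and a
# self-exciting loop (memory). Illustrative only; not McCulloch's
# original notation.

def threshold_unit(inputs, threshold=3):
    """Additivity: fire only if enough pathways report an input."""
    return sum(inputs) >= threshold

class ReverberatingLoop:
    """Memory: once the threshold is exceeded, a closed loop keeps
    circulating the signal and re-presents it as extra input."""
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.active = False          # state of the circulating loop

    def step(self, inputs):
        # the loop's own activity counts as one additional input
        total = sum(inputs) + (1 if self.active else 0)
        fired = total >= self.threshold
        if fired:
            self.active = True       # the circle 'remembers' the event
        return fired

loop = ReverberatingLoop(threshold=3)
print(loop.step([1, 0, 1]))   # 2 inputs: below threshold -> False
print(loop.step([1, 1, 1]))   # 3 inputs: fires, loop activated -> True
print(loop.step([1, 1, 0]))   # only 2 inputs, but the loop adds 1 -> True
```

The third step shows the learning effect: an input pattern that was too weak before the event now suffices, because the loop itself reports that the event has already happened.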

In a similar way, Spencer-Brown speaks of the memory function and represents it through closed circuits. He sought a more abstract level that fits within the framework of logic. I do not know whether he explicitly referred to this tradition, but it is clear to me how he stands in it. He was no longer able to elaborate these ideas, but they appear in decisive places in the Laws of Form. In England he may have been more impressed by the Homeostat, introduced in 1948 by the English psychiatrist and biochemist William Ross Ashby (1903-1972), than by neurophysiology. Spencer-Brown incorporated elements of the new findings of neurophysiology into his remarks on re-entry and circuits in a strange way, without it becoming clear which intuition had guided him.

In 1945, in a short but fundamental article, The heterarchy of values determined by the topology of nervous nets, McCulloch pointed to fundamental philosophical aspects of the new circuits that arose in the context of the first ideas for neural networks. In a network there is no longer a controlling, hierarchical centre, but a heterarchy of concurrent and circularly interconnected developments. This seems to me to be the basic idea that Niklas Luhmann (1927-1998) took up in his work on systems theory – even if he makes only very marginal reference to McCulloch (as in Soziale Systeme, 565, and Wissenschaft der Gesellschaft, 365) – and why he hoped with Spencer-Brown to be able to give his systems theory a new logical basis that differs from traditional subject philosophies and their paradoxes.

– The Gordian knot

Is it possible to solve all these questions together as in a Gordian knot? That seems to me to be Spencer-Brown's motive.

Spencer-Brown's motto is a verse from Laozi's Tao Te Ching. Laozi begins his work with four verses, of which Spencer-Brown chose the third.

wu ming tian di zhi shi
Nothing/Without Name Heaven Earth from Beginning

»Since Chinese is an isolating language and not an inflectional language, the Chinese sentence has an ambiguity that is lost in Western languages through the choice of a word type. The ambiguity depends on the grammatical role of the first word 'wu': 'not, without, nothing'.« (SWH, 65, my translation)

The verse can therefore be translated in two ways:

– Without name is the beginning of heaven and earth
– 'Nothing' is the name of the beginning of heaven and earth.

All four verses are related:

The way that is really the way is different from the unchangeable way.
The names that are real names are other than unchangeable names.
Nameless/Nothing is the beginning of heaven and earth.
The name is the mother of ten thousand things. (quotes SWH, 65, my translation)

»Distinction is Perfect Continence«

Somewhere, Spencer-Brown's logic must begin with basic concepts that cannot be questioned further. He searches for them beneath ordinary mathematics and language. Before all speaking and calculating there are indication and distinction. With them he wants to establish a proto-logic and proto-mathematics (primary logic, primary mathematics), a calculus of indication and distinction.

»We take as given the idea of distinction and the idea of indication. We take, therefore, the form of distinction for the form.« (LoF, 1)

Without having made distinctions, neither language nor calculation is possible: in order to calculate and to form sentences, the characters used in calculating and writing must be differentiated from each other. In an equation like ›2 + 3 = 5‹, operands (here the numbers 2, 3 and 5) and operators (here + and = for addition and equality) are linked, and a statement is formed with them. It is similar in language: in a proposition like ›S is p‹, S and p are operands and the copula ‘is’ is an operator. The signs 2, 3, 5, = and + must be distinguishable from each other, and it must be assumed that they do not change during the calculation. If this were not ensured, the equation would be meaningless. Therefore it is natural to start with distinction as the proto-operation that precedes the operations of calculating and speaking, and to develop the usual logic and arithmetic from it step by step.

For Spencer-Brown there must be a sign that meets this level and precedes the known signs for operands (numbers and letters) and operators (like the copula ‘is’ and arithmetic symbols like +). He goes even a step further: he looks for a sign that precedes the differentiation into operands and operators and can serve as both operator and operand. For him, this elementary sign is the cross:

    [Figure: the cross, Spencer-Brown's mark]

This sign has a multiple meaning:
– Execution of an elementary operation (drawing this character, draw a distinction)
– Highlighting an interior (mark)
– Marking of an inner area (marked space) (asymmetry of inside and outside)
– Drawing a boundary (boundary with separate sides)
– Distinguishing a ground from the sign drawn in the ground (ground, medium)
– Calling the border the sign  ⃧   (indication, call)

In the latter meaning, it is self-referential: With its outward appearance, it clearly shows the difference between an inside and an outside, and at the same time this sign is the name for this difference. This gives value to the inside.

The amazing thing is that with this single character not only the operands but also the operators can be represented. Unlike traditional logic and mathematics, Spencer-Brown does not distinguish between signs for operands and operators, but implicitly defines the operations (the calculation and the formation of statements) by the mere arrangement of the operands: These are the two operations 'repetition' and 'nesting'. With them the further logic can be constructed completely.

    [Figures: recalling and recrossing]

All further signs are constructed step by step from these two signs and their two arrangement possibilities, just as in Euclidean geometry all constructions result from straight lines and circles.

»Axiom 1: The law of calling.
The value of a call made again is the value of the call.« (LoF, 1)

 ⃧   ⃧   =   ⃧    

Read: If something is mentioned twice by its name, the name does not change.

»Axiom 2: The law of crossing.
The value of a crossing made again is not the value of the crossing.« (LoF, 2)

[cross within cross]  =       .

Read: If the boundary is crossed twice, the initial state is restored. The repetition of the crossing has a different value than the simple crossing. This is because there is a reversal in between. With crossing the side is changed, with recrossing this action is undone.

The introduction of these two axioms may be more understandable if one remembers that Spencer-Brown started from circuits. (1) If a switch on a connection in a circuit is opened, the path is interrupted and changes from the 'on' state to the 'off' state. If several switches are opened, the 'off' state is retained: the result is the same whether one or several switches are opened. (2) If a switch is first opened and then closed again, the original state 'on' is restored.

[Figure: switch circuit]
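The two axioms suffice to evaluate every constant expression of the primary arithmetic. Here is a minimal sketch in Python; the encoding is my own (a space is the list of the crosses it contains, and each cross is in turn the list of its own contents; True stands for the marked, False for the unmarked state).

```python
# Evaluating primary-arithmetic expressions with the two axioms.
# Encoding (my own): a space = list of its crosses; a cross = list of
# its contents. True = marked state, False = unmarked state (ock).

def value(space):
    """A space is marked iff it contains at least one cross whose
    contents evaluate to the unmarked state (law of crossing);
    extra copies of a marked cross change nothing (law of calling)."""
    return any(not value(contents) for contents in space)

VOID  = []          # the empty space (ock): unmarked
CROSS = [[]]        # a single cross: marked

print(value(CROSS))        # True  (marked)
print(value([[], []]))     # True  -- calling: cross cross = cross
print(value([[[]]]))       # False -- crossing: a cross within a cross
                           #          restores the unmarked state
```

The second and third calls are exactly the two axioms: repeating a call leaves the value unchanged, while recrossing undoes the crossing and returns the empty space.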

Axiom 2 implicitly introduces the medium: if the recrossing is understood as reflection, by which in the literal sense a border crossing is reflected (thrown back), then Axiom 2 results in the empty space. It is what remains when something is undertaken and undone. What remains is no longer simply nothing: it is the space that has arisen through the double movement of becoming and passing away. The border was drawn first; that is becoming. Then the becoming was undone by a passing away. What remains is the ground on which this movement of becoming and passing away took place. – In other words, an observer may no longer see anything, but he can remember the movement of crossing and recrossing; a trace of it can remain in his memory. Compare Hegel's statements about the sublation of becoming:

»This result is a vanishedness, but it is not nothing; as such, it would be only a relapse into one of the already sublated determinations and not the result of nothing and of being. It is the unity of being and nothing that has become quiescent simplicity. But this quiescent simplicity is being, yet no longer for itself but as determination of the whole.« (HW 5.113, translated by George di Giovanni)

With the becoming and passing of the boundary in crossing and recrossing, time is implicitly posited from the beginning. It is already contained in the original prompt »draw a distinction«. Spencer-Brown is even more consistent than Hegel: he not only chooses, like Hegel, terms that simultaneously describe a movement and its result (distinction and indication can be understood both as the process of distinguishing and indicating and as the result achieved by that process), but with the cross he chooses a sign that simultaneously describes the operation (movement) and the operand.

Remark: In the two operations of recalling and recrossing I formally see the two movements of continuing and turnabout (reversal, Umkehrung) which Hegel distinguished (HW 5.446, 453) which are central for me for the understanding of absolute indifference as well as the transition from the doctrine of being to the doctrine of essence and further the concept of idea.

But how should the sign be named in German? Spencer-Brown introduces it as »crossing the boundary« and as »calling« (LoF, 1), and after the first drawing calls it a »mark« (LoF, 3). In German translations the English word 'cross' is sometimes retained, or a neutral expression like 'token', 'mark' or 'character' is chosen. The literal German translation 'Kreuz' (cross) is mostly avoided, because the outward appearance is not a cross ×, but an angle. However, an angle is typically imagined as opening to the right with an opening of less than 90°, like the angle sign in the HTML markup language: ∠. The word 'hook' is also possible. However, none of this captures Spencer-Brown's double meaning of a sign and a movement (cross, crossing). The German word 'Quere' (transverse) is also misleading, because a transverse is rather a sign like \ or / that runs transversely, and traversing often means another kind of movement, such as crossing an empty square or a river, or possibly a shortcut when a rectangle is traversed along the diagonal. Despite reservations, the name 'hook' is the most catchy for me in conversations and demonstrations on a blackboard, because it best matches the outer shape; therefore I use it in the German original. In this English translation I keep the expression 'cross' despite all reservations, because it is the most common expression in the literature on Spencer-Brown.

For the other sign, which consists only of a void, Spencer-Brown has long searched for a suitable name. Only in the 2nd edition of the German translation of 1999 did he introduce the term 'ock' and the sign Ɔ.

»Shortly after it was first published, I received a telephone call from a girl named Juliet who, while expressing her enthusiasm for what I had done, also expressed her frustration at not being able to speak the empty space over the telephone. [...] She was absolutely right, and I subsequently invented the word ock (from the Indo-European okw = the eye), symbolized by a rotated lower-case letter Ɔ, to indicate in a speakable form the universal recessive constant that all systems have in common. Ordinary number systems naturally have two ocks, zero for addition and the unit for multiplication. [...] But in this book, in which its true nature was made clear for the first time, it was important to leave the ock nameless and empty, because it was the next one I could reach to emphasize that it is nothing at all, not even empty. Its invention was more powerful than the invention of zero, and sent shock waves through the entire mathematical community, which have not yet subsided.« (Spencer-Brown, 1999, xvf, quoted by Rathgeb, 118, my translation)

In the secondary literature on Spencer-Brown this has not been taken into account to this day, with the exception of Rathgeb; but I will follow him, as it seems to me the easiest and best way to name the two basic signs both in symbols and in words.

Summary: Spencer-Brown explains the intention of these first steps:

»Definition
   Distinction is perfect continence.« (LoF, 1)

What is meant by the unusual word 'continence'? SWH comment: (a) It can mean abstinence, moderation and self-control (continent as the opposite of incontinent). In this sense continence ensures that something does not diverge and disperse, become frayed and lose its own identity. (b) And it can mean cohesion in the sense of the continuum, whereby for example in geometry the infinity of points on a line is held together. (SWH, 70) This seems to me to meet Spencer-Brown's concern very well.

»Distinction is Perfect Continence«

Spencer-Brown presented and discussed his understanding of arithmetic and algebra at the AUM conference in Session Two on March 19th, 1973.

– Arithmetic is demonstrated on physical objects. Already in kindergarten or primary school it is learned by examples like ›3 apples and 2 apples are 5 apples‹. Everyone can take the two smaller quantities of 2 and 3 apples, combine them and recount that there are now 5 apples. Examples of this kind are so intuitive and striking that hardly anyone notices what is implicitly assumed and learned as if incidentally: the individual apples remain independent and do not change through counting. If they reacted and merged with each other as in a chemical process, or transformed or divided into other objects, counting would be impossible. Spencer-Brown therefore formulates his own rules to ensure this (what he calls canons).

Only gradually does this become abstract arithmetic with numbers, in which no explicit reference is made to specific objects. The numbers themselves are not physical objects. At school this is implicitly learned, and it is still an undecided question whether numbers exist independently of counted objects, whether they are one of the properties of counted objects, or whether numbers exist only in the mind of rational beings such as humans, who have introduced numbers in the course of their development and can operate with them in thought. Philosophers and logicians have never reached agreement on this. I agree with Aristotle, for whom numbers consist only in thinking.

In what way can one speak of proto-numbers in proto-arithmetic, and how can they be illustrated on concrete objects in a similar way to counting apples? Spencer-Brown goes one level deeper and understands the two signs cross  ⃧   and ock Ɔ in a double meaning, in which they serve both as numbers and as operation signs. Ordinary arithmetic knows expressions like ›7 + 5‹ or ›120 − 13‹. For Spencer-Brown these are character strings, which are formed according to fixed rules from a certain character set, which in the decimal system consists of the digits 0, 1, 2, …, 9 and operation signs like + and −. In proto-arithmetic, he reduces the character set to the two characters cross  ⃧   and ock Ɔ, from which the arithmetic expressions are composed. Strictly speaking, they are neither proto-digits nor proto-numbers, but mathematical proto-characters, from which mathematical expressions are formed, in which they receive both the meaning of digits (and of the numbers composed of them according to positional notation) and of operation signs. In order not to complicate the way of expression too much, one can still speak of proto-numbers, even if this is literally wrong. Spencer-Brown says that proto-arithmetic calculates with them. From the two characters cross  ⃧   and ock Ɔ, more complex characters are composed according to the two rules of repetition and nesting. In an inexact sense, these are the proto-numbers. Spencer-Brown does not speak of arithmetic operations, but of transformations (»primitive equations«, »procedure«, »changes«; LoF, 10f) and introduces the sign ⇀ (LoF, 8). It means that the second expression is derived from the first by a transformation (LoF, 11).

The ordinary numbers are illustrated and understood as abstraction from counting objects. With the two signs cross  ⃧   and ock Ɔ, objects are not counted in the first step, but just differentiated. Distinguishing precedes counting. If objects are not distinguished (sortal), they cannot be counted. The cross  ⃧   describes that there is a difference from something to something else, and the ock Ɔ describes the background against which the boundary of something stands out from its exterior. The process of abstraction, however, remains similar: just as the understanding of numbers is gained by counting objects, the understanding of the two signs cross  ⃧   and ock Ɔ is gained by distinguishing objects from each other and against the respective background.

Just as conventional arithmetic introduces the natural numbers 0, 1, 2, 3, …, 9, and forms numbers and calculates with them, Spencer-Brown regards his primary arithmetic as a simplified and preceding arithmetic, which only knows the two signs cross  ⃧   and ock Ɔ. With these two signs we shall calculate in a comparable way, as we are familiar with ordinary arithmetic, in order to recognize deeper arithmetic connections, from which the ordinary arithmetic can be derived step by step.

– The arithmetic takes into account the individuality of the respective numbers. In number theory every single number is something special. What has been proven in arithmetic for a certain number cannot be easily transferred to the other numbers. Only algebra will study rules that apply to all numbers independent of their individual properties. This is also the case with proto-arithmetic, with which the special features of the two signs cross  ⃧   and ock Ɔ as well as the signs composed of them are examined.

Spencer-Brown wants to return to the original understanding of numbers, which is contained in arithmetic and precedes algebra. Whoever learns to calculate and use numbers will initially appreciate the peculiarities of each number. Each new number has something of its own. For example, children learn how the two and the multiples of two, the three and the multiples of three, the five and the multiples of five each have something special. Thus all multiples of five have as their last digit only 0 or 5 (5, 10, 15, 20, …), and for the multiples of 3 the digit sum is divisible by 3 (the digit sum is the sum of all digits; e.g. the digit sum of 75342 is ›7 + 5 + 3 + 4 + 2 = 21‹, and since 21 is divisible by 3, 75342 is divisible by 3 too). Another example is the number 6, which is identical with both the sum and the product of its proper divisors: ›6 = 1 · 2 · 3 = 1 + 2 + 3‹. Thus, specific properties can be recognized for each number. Everyone who is enthusiastic about mathematics is fascinated by playing with numbers and thinking about the peculiarities of the individual numbers (thaumazein, or ‘wondering’ from ‘wonder’) and has his favourite numbers as well as numbers that somehow seem strange to him. Even if an outsider may hardly be able to understand it, this way of playing with numbers shows me the same spontaneous joy and attitude as canon singing. An apt example comes from the Piaget school: I say a number and you say a number; whoever says the greater number wins. Both have to laugh when they understand the trick of why the second player always wins, and yet they repeat the same game over and over again, with continuing pleasure. For me, this is the first and simplest example of an arithmetic proof idea, with which the children, without knowing it, found a procedure for proving that there are infinitely many numbers.
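The number games mentioned above can be checked directly in a few lines of Python; the helper name `digit_sum` is my own.

```python
# Checking the divisibility rules mentioned above.

def digit_sum(n):
    """Sum of the decimal digits (German 'Quersumme')."""
    return sum(int(d) for d in str(n))

print(digit_sum(75342))            # 21
print(digit_sum(75342) % 3 == 0)   # True
print(75342 % 3 == 0)              # True: the rule agrees

# multiples of five end in 0 or 5
print(all(str(5 * k)[-1] in '05' for k in range(1, 100)))  # True

# 6 is both the sum and the product of its proper divisors 1, 2, 3
print(1 + 2 + 3 == 6 and 1 * 2 * 3 == 6)                   # True
```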

The result is rules like the so-called 'Eselsbrücken' (mnemonic rules), which apply to certain numbers and are not transferable to others. Everyone will have his or her favourite calculation methods, which he or she succeeds in using particularly well, comparable to favourite colours. Spencer-Brown wants to return to this original (naive, magical) way of handling numbers, because only from it will he be able to see the formal way in which numbers were operated with in the 20th century in a new light.

In Piaget's view, arithmetic belongs to the second phase of symbolic and pre-conceptual thought, which he wants to preserve in its peculiarity and uniqueness. The transition to algebra will therefore in many ways resemble the transition to operational arithmetic in phase III (at the age of 7 to 12 years).

– Within arithmetic there are propositions (theorems). Theorems for Spencer-Brown are ideas that are found in individual, unique elements. They show mathematics in the stricter (proper) sense. Algebra, on the other hand, is formulated for variables, independent of the individuality of the values substituted for them. Therefore proto-arithmetic for Spencer-Brown cannot formally derive theorems from the axioms, but can only grasp them intuitively and only in a second step prove them with formal methods and deduce consequences. The approach of algebra, on the other hand, can in principle be adopted by computers. In the 20th century, mathematicians like Russell equated mathematics with the algebraic way of operating. For Spencer-Brown, this is a great step backwards, which atrophied mathematics, mathematical skills and the joy of mathematics.

What is meant by theorems is best illustrated by the example given by Spencer-Brown: Euclid proved the theorem that there are infinitely many prime numbers. His proof was: If there were only finitely many prime numbers, there would be one largest prime number; call it N. If the product of all prime numbers from 2 to N is formed, the result is ›M = 2 · 3 · 5 · 7 · 11 · … · N‹. None of these prime numbers divides ›M + 1‹, so ›M + 1‹ is either itself a prime number or divisible by a prime number greater than N. In either case there is another prime number that is greater than N, and so on. This idea cannot be deduced as a consequence of the arithmetic rules, but had to be found. Two things are important for Spencer-Brown: the proof shows both an original idea of proof and a procedure with which the idea is presented. Most people will only be convinced by the procedure of why the theorem is correct, and understand the idea of proof from it. Such a proof idea results from the arithmetically typical way of calculating, which has a deep understanding of (not to say sympathy with) the numbers and their inner rhythms and infinitely diverse relationships and kinships. No computer can do that. Computers, on the other hand, provide a different kind of insight, which for Spencer-Brown belongs to algebra.
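Euclid's argument can be traced numerically. Note that M + 1 need not itself be prime; what matters is that none of the old primes divides it, so every prime factor of M + 1 is new. The helper function below is my own.

```python
# Tracing Euclid's argument. With the primes up to 13, M + 1 = 30031
# is not prime, but its smallest prime factor is new.

def smallest_prime_factor(n):
    """Trial division; for n > 1 the smallest factor found is prime."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

primes = [2, 3, 5, 7, 11, 13]
M = 1
for p in primes:
    M *= p                                        # M = 30030

print(M + 1)                                      # 30031
print(any((M + 1) % p == 0 for p in primes))      # False: no old prime divides it
print(smallest_prime_factor(M + 1))               # 59, a prime not in the list
```

So 30031 = 59 · 509: not prime, yet it still yields new primes beyond the assumed largest one, exactly as the proof requires.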

– Spencer-Brown had started from two-element Boolean algebra and sought the arithmetic which precedes it and whose algebra it is. Usually we learn arithmetic from natural and real numbers and then all the algebraic rules for calculating with them. George Boole (1815-1864) had found a simplified algebra in 1847, which calculates with the elementary logical operations AND and OR and whose variables in the simplest case stand only for the two numbers 0 and 1 (or, in Boolean algebra, for 'on' and 'off'). This was Spencer-Brown's starting point, and he wanted to go one step further: just as the arithmetic of natural and real numbers underlies the complex theories of number theory and analysis, it should be possible to find a simplified arithmetic for Boolean algebra which is formulated only for the numbers 0 and 1. As primary arithmetic (proto-arithmetic), it precedes the usual arithmetic just as much as Boolean algebra precedes ordinary algebra. In this way, he succeeded in developing an arithmetic whose character set is further restricted to the two characters cross  ⃧   and ock Ɔ, and which therefore precedes Boolean algebra.
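One common way of reading the primary algebra as two-element Boolean algebra can be sketched as follows. The particular mapping (marked = True, writing signs side by side = OR, enclosure in a cross = NOT) is one interpretation found in the literature, not Spencer-Brown's own wording, and the function names are mine.

```python
# Reading the primary algebra as two-element Boolean algebra
# (one common interpretation: marked = True, juxtaposition = OR,
# enclosure in a cross = NOT).

def cross(x):          # enclosure: negation
    return not x

def juxtapose(*xs):    # writing signs side by side: disjunction
    return any(xs)

# a AND b then appears as ((a)(b)): a cross around two crossed variables
def AND(a, b):
    return cross(juxtapose(cross(a), cross(b)))

for a in (False, True):
    for b in (False, True):
        assert AND(a, b) == (a and b)
print("((a)(b)) behaves as a AND b for all 0/1 values")
```

That conjunction must be built from negation and disjunction in this way is just De Morgan's law, written in the notation of crosses.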

– From Spencer-Brown's point of view, Gödel's groundbreaking statements can be read differently and anew: Kurt Gödel (1906-1978) showed in 1931 that, once a certain complexity of numbers is given, algebra is no longer able to say everything that is possible in the arithmetic that precedes it. Arithmetic, with the individuality of its signs, contains a surplus that cannot be completely grasped and captured by any algebra. Compared to arithmetic, algebra is fundamentally incomplete. However far algebra may be developed, arithmetic and the infinite stock of its individual numbers can provide new insights that go beyond the known algebra. – When Spencer-Brown speaks of completeness in Chapter 9, he means something else. Theorem 17 says: »The primary algebra is complete« (LoF, 50). This does not mean, however, that it contains everything that can be obtained in arithmetic discoveries, but only that every arithmetic discovery can also be formulated algebraically. »That is to say, if α = β can be proved as a theorem about the primary arithmetic, then it can be demonstrated as a consequence for all α, β in the primary algebra.« (LoF, 50) Spencer-Brown therefore deliberately speaks of a theorem and not of a consequence, i.e. of a statement that can be proved with the methods of arithmetic and not with those of algebra.

»So, to find the arithmetic of the algebra of logic, as it is called, is to find the constant of which the algebra is an exposition of the variables--no more, no less. Not, just to find the constants, because that would be, in terms of arithmetic of numbers, only to find the number. But to find out how they combine, and how they relate -- and that is the arithmetic. So in finding -- I think for the first time, I don't think it was found before, I haven't found it -- the arithmetic to the algebra of logic -- or better, since logic is not necessary to the algebra, in finding the arithmetic to Boolean algebra, all I did was to seek and find a) the constants, and b) how they perform.« (AUM, Session Two)

– An example of calculating and proving from proto-arithmetic

»Theorem 1. Form. The form of any finite cardinal number of crosses can be taken as the form of an expression.« (LoF, 12)

In this choice of words, the statement is hard to understand directly. It means that any form constructed from a finite number of crosses and ocks can be converted into either a single cross or a single ock. The statement can also be formulated differently: Any shape, however complicated, composed of crosses and ocks, can be considered as one whole form. This can be illustrated by an example that Spencer-Brown later cites for the 3rd theorem (LoF, 17):

[Figure: example expression from LoF, 17]

It shall be shown how this figure can be reduced to a cross or an ock. The idea of proof is to go to the deepest nesting level. There are only two possibilities: Either a cross repeats itself several times, or two crosses are nested within each other. If a cross repeats itself, it can be condensed into a single cross according to axiom 1. If two crosses are nested, they can be converted into an ock according to axiom 2 (cancellation). The example also gives an impression of how proto-arithmetic calculates. The crosses that can be contracted or deleted in the next step are highlighted in red. The character ⇀ introduced by Spencer-Brown is used for the conversions.

[Figures: the three reduction steps of the example from LoF, 17, each written as ›expression ⇀ simplified expression‹]

A comparison with networks shows how important this result is. If you have a confusing network and want to simplify it in a similar way, deadlocks can occur where any further simplification is blocked; see as an example the possibility of deadlocks in Petri nets. This is not possible in the calculus chosen by Spencer-Brown. Every complex sign can be converted into one of the two basic signs.
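The reduction procedure described above can be modelled in a few lines. The encoding of forms as nested tuples is my own assumption, not Spencer-Brown's notation: an expression is a tuple of crosses, each cross in turn a tuple holding its contents, and the empty tuple is the ock. The two axioms then become a single recursive evaluation that always terminates in one of the two basic values.

```python
def marked(expr):
    """Evaluate a form of the primary arithmetic.

    Axiom 1 (condensation): several marked crosses side by side count
    as a single mark -- hence any().  Axiom 2 (cancellation): a cross
    is marked exactly when its contents evaluate to unmarked.
    Returns True for the cross, False for the ock."""
    return any(not marked(cross) for cross in expr)

ock = ()               # the empty expression: unmarked
cross = ((),)          # a single cross: marked
cancelled = (((),),)   # a cross inside a cross: reduces to the ock
```

Because the recursion bottoms out at the deepest nesting level, exactly as in the proof idea sketched above, no deadlock can occur: every finite form evaluates to cross or ock.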

– Successor relation

The main difference between Spencer-Brown's proto-arithmetic and the usual arithmetic lies in the successor relation and in mathematical induction. All natural numbers n have the property that they have a successor n + 1. This means that the natural numbers can be arranged on the number line. Repeated execution of the successor relation leads to addition, and from this the further arithmetic operations result, and with them the extensions of the natural numbers by the negative and rational numbers and so on. The induction axiom says: If a statement A is valid for an initial element n₀ (beginning of induction, base case) and if it can be shown that A(n + 1) can be concluded from A(n) (induction step), then it is valid for all natural numbers.
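How addition arises from repeated execution of the successor relation can be sketched in a few lines (a minimal Peano-style sketch of my own, not taken from Laws of Form):

```python
def succ(n):
    """The successor relation: every natural number n has n + 1."""
    return n + 1

def add(m, n):
    """Addition as n-fold repetition of the successor relation,
    mirroring the inductive definition:
    m + 0 = m,  m + (n + 1) = succ(m + n)."""
    result = m
    for _ in range(n):
        result = succ(result)
    return result
```

The loop is exactly the induction step applied n times, starting from the base case m + 0 = m.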

However, Spencer-Brown does not even ask the question of the arrangement of all signs of proto-arithmetic. In a first attempt, the signs resulting from repetitions of the cross can be arranged like the strokes on a beer mat:

    |, ||, |||, …

     ⃧  ,  ⃧   ⃧  ,  ⃧   ⃧   ⃧  , …

But there is also the nesting of the crosses. While the multiplication and the higher arithmetic operations can be reduced step by step to the addition, it is not possible to derive the nesting from the repetition. Both are independent. Instead, it is reasonable to choose a lexicographical (canonical) order for the arrangement of all characters considered in proto-arithmetic, as for two independent letters a and b:

    a, b, aa, ab, ba, bb, aaa, aab, …

    [Figure: enumeration of the signs]

In this order, there is no simple successor relation that describes the relationship between a sign and its predecessor. Correspondingly, there is no simple linking rule with which a new sign can be formed from two signs. Bernie Lewin, however, in Enthusiastic Mathematics, has explained numerous further possibilities of arranging the two symbols of Spencer-Brown's calculus, which go back to the original ideas of Pythagoras and his school, and has shown how the operations of the usual arithmetic can be derived from them. His book also shows that arithmetic has an inexhaustible wealth not only of ideas of proof and theorems, but also of symbolic and graphic signs with which new ideas of proof can be created.
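The canonical arrangement of all finite words over two independent letters can be generated as follows (a sketch of my own; I take the shortlex convention, shorter words first and words of equal length in lexicographic order, as one of several possible canonical choices):

```python
from itertools import islice, product

def words(alphabet=("a", "b")):
    """Yield all finite words over the alphabet: shorter words first,
    words of equal length in lexicographic order (shortlex)."""
    length = 1
    while True:
        for letters in product(alphabet, repeat=length):
            yield "".join(letters)
        length += 1

first = list(islice(words(), 8))
# first == ['a', 'b', 'aa', 'ab', 'ba', 'bb', 'aaa', 'aab']
```

The generator makes the point of the paragraph tangible: the enumeration works, but no arithmetic operation on two words yields the next word of the list.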

– Opacity and transparency of arithmetic and algebra

The relationship between arithmetic and algebra raises far-reaching philosophical questions: Arithmetic is unique and opaque (dark, blurred, impervious to light), algebra is transparent and translucent. Within algebra, everything can be derived from the given basic rules, which Spencer-Brown calls initials. For him, the algebraic derivations are not proofs with their own ideas of proof, but just consequences which result formally from the initials and which can also be derived by a computer. In arithmetic, on the other hand, there is an infinite abundance of possibilities which can never be fully exhausted by a finite being like man and his art of arithmetic. There are always still unknown possibilities that can be discovered by mathematicians to come. This corresponds to the statement of Gödel's incompleteness theorems: Arithmetic will never be able to fully describe all the possibilities it offers. Thus it always contains something that remains hidden (opaque) to algebra.

Arithmetic is used to recognize value tables and the rules underlying them. The simplest case is the axiom of the natural numbers, according to which two adjacent natural numbers are separated by a distance of 1. On the other hand, it is still unclear whether an efficient formula for the prime numbers exists. Euclid proved that there are infinitely many prime numbers, but nobody knows how their distances evolve. One of the most researched and yet unproven mathematical conjectures comes from the mathematician Bernhard Riemann (1826-1866), who in 1859 postulated an important estimate for the distribution of the prime numbers based on imaginary numbers.

The prime numbers and their properties are the best example of what Spencer-Brown means by the opacity of arithmetic. Despite all the advanced methods of mathematics, no solution has been found. For Spencer-Brown, this is an indication that it will never be possible to survey the abundance of arithmetic completely. He condensed this into the inconspicuous statement: »Principle of transmission: With regard to the oscillation of a variable, the space outside the variable is either transparent or opaque.« (LoF, 48) The term »oscillation of a variable« refers, for example, to the value progression of the prime numbers.
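The irregularity of the prime distances can be made visible with a short sieve (a standard construction, added here only as an illustration of the opacity described above):

```python
def primes_up_to(limit):
    """Sieve of Eratosthenes: all primes up to the given limit."""
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [n for n, is_p in enumerate(sieve) if is_p]

ps = primes_up_to(50)
gaps = [b - a for a, b in zip(ps, ps[1:])]
# gaps == [1, 2, 2, 4, 2, 4, 2, 4, 6, 2, 6, 4, 2, 4]
```

Every gap can be computed, yet no formula for the sequence of gaps is known: the value progression remains opaque however far the table is extended.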

This understanding has radically changed the relationship between mathematics and philosophy. Since Descartes, modern philosophy has been convinced that a completely transparent method to see through all things must be possible. It is the task of philosophers to develop this method, and things were assumed to be transparent in principle, i.e., metaphorically speaking, not to resist the light of a superior method. Perhaps it is even inscribed in the history of nature that it is man's task to find such a method and to free things from their concealment. In a way, things are waiting to be released from their darkness by man.

The opposite position goes back to Heraclitus, from whom the aphorism is preserved: »Nature loves to conceal Herself« (Fragments, B 123; others translate: »Concealment accompanies Physis«). While since Descartes philosophers have expected from mathematics support in finding a method of mathematical rigour that opens everything up, it is Spencer-Brown who reverses the relationship. For him, the relationship between opacity and transparency is repeated within mathematics itself, which is why mathematics cannot simply be placed on the side of transparency; moreover, for him it was fatal and a radical paralysis of mathematical creativity to seek only what is transparent in and with mathematics. For him, the triumph of the mathematized sciences since the 19th century has been like a house of cards that immediately collapses when one looks at how mathematics works inside. In his view, mathematics lost its vitality with the dominance of the mere derivation of consequences in an algebra. The Principia Mathematica by Russell and Whitehead, published in 1910-13, is for him an example of this way of thinking.

Kant was the first to question Descartes' prospects of success with his doctrine of the thing in itself, but he left open the question of whether the unfathomability of the thing in itself can nevertheless be determined paradoxically. For him, mathematics stood on the side of transparent science and was the guarantor of reason. Whenever he wanted to describe the freedom of reason, he resorted to images from mathematics, e.g. in his idea of a sphere of the concept that can be surveyed from a certain location and is thus transparent (KrV, B 886). However, he recognized that this is only true if the sphere is not curved. If knowledge is arranged on a curved surface, there is no point from which it can be surveyed completely. The curvature of the surface prevents one from seeing what lies beyond the horizon. Therefore, there are good reasons to understand curvature as the elementary property of the opacity of things.

In the sense of Spencer-Brown, arithmetic is located on a curved surface, and with algebra one can never survey everything that exists on the curved surface. Nothing remains but to change one's position on the curved surface. Whoever moves on the curved surface may open up new areas that become visible, but with the same movement other areas withdraw again and disappear behind the horizon. Never can everything be surveyed at once.

After Kant, the paths of philosophy and the natural sciences diverged in two directions: (i) Some look for a new method from which no object can escape. Most important are Hegel with the dialectical method (HW 6.551) and all those who follow him. Or they hope to install mathematical functions that would guarantee full transparency. (ii) The others drew from the doctrine of the thing in itself the consequence that the principal limits of thinking are provable within thinking itself. Nietzsche hit the nail on the head when he spoke of the »superstition of logicians«: they believe they can recognize, with Descartes, from their own thinking that there is an 'I' who thinks, while for him it is just as conceivable »that a thought comes when 'it' wishes, and not when 'I' wish« (Nietzsche, Beyond Good and Evil, Part 1, Chapter 17). The ego can never survey the overabundance of all thoughts and must always reckon with the fact that new thoughts will appear of their own accord and become accessible to the ego.

(i) To this day, the natural sciences after Kant hope to find a mathematical measure for the curvature of the system and to be able to transform its opacity into transparency by recognizing and using the measure of the curvature. Whoever succeeds in changing his position so that he no longer looks at the curved surface but at its measures could in this way obtain a full overview of the measures. I see first ideas in this direction in Hegel's theory of real measures, but so far there have been no realizations.

Another approach was developed by the theory of self-referential systems. If, in a curved system, various observers succeed in matching their respective local views into a global image, then a global image would be possible for all of them together. This shifts the original question to a new level: Although the local horizons of all observers together cover the entire system, the question arises according to which rules their views can be coordinated with one another.

Seen from a higher perspective, Luhmann stands in the tradition of Hegel. He assumes that the system as a whole passes over into a self-description that is no longer bound to individual participants of the system, but results from the laws of movement (Hegel would say: reason) of the system as a whole. He expects this path to be opened with the help of the logic introduced by Spencer-Brown and in particular its self-reference given with re-entry (Luhmann, Gesellschaft der Gesellschaft, 877 and in numerous other places in Gesellschaft der Gesellschaft and Wissenschaft der Gesellschaft). It is planned to take a closer look at this in a separate section.

(ii) Conversely, Nietzsche's successors speak of the opacity of thought. For them it necessarily follows from the opacity of things that thinking also loses the transparency in which Descartes still believed: the ability of thinking to investigate itself as its own object. Contrary to Descartes, Heidegger sees in Being and Time »the identity of human life [...] not only in the transparency of purely rational self-representation and self-determination, but also in the unavailable opacity of its moods, for which the state of mind is supposed to be the ontological condition of possibility« (Volpi, 43, my translation). For Sartre, »the reflective consciousness reifies its je to a centre of compression (centre d'opacité), which seems to lead a life of its own in the reflected consciousness as 'the psyche' (Moi/Ego): consciousness (je) - Moi - body« (Scheier, Luhmanns Schatten, 69, with reference to Sartre, Transcendence of the Ego). And Adorno turns directly against Hegel:

»He [Hegel, t.] did not need language in the emphatic sense, because to him everything, even what is devoid of language and opaque, is supposed to be Spirit and the Spirit, the context. This supposition is beyond salvation. That which is resolvable, which is not in any previously-thought context, does indeed transcend its self-enclosed nature out of itself, as what is non-identical. It communicates with that from which the concept separated it. It is opaque only for the totality-claim of identity; it resists the latter's pressure. As such however it seeks expression. Through language it dispels the bane of its selfness.« (Adorno, Negative Dialectic, 162, translation by Dennis Redmond)

In Spencer-Brown, this conflict is taken up within mathematics. For Spencer-Brown, mathematics can only awaken to new life when it gives up the claim to complete transparency and the associated self-limitation to methods that satisfy this claim. And philosophy does not have to stop at either demanding or denying the opacity of things, but can, thanks to its own creativity and the creativity of other sciences such as mathematics, see this question from ever new angles. For me, it shows the deplorable condition in which not only mathematics but also philosophy finds itself today that philosophy does not even take note of Spencer-Brown's work except through the philosophically sceptical sociologist Luhmann.

Re-entry

The doctrine of re-entry is at the heart of the Laws of Form. It is also the only idea of Spencer-Brown that has gained a certain popularity. In a first approach, re-entry can be understood as something like resonance, recursion, feedback, self-reference as well as reflexivity of terms or of people communicating with each other who refer to each other. It has been taken up above all by the theory of self-referential systems as well as by applications in sociology and psychology, in particular by radical constructivism and system theories of various kinds up to systemic psychology.

These applications detach the idea of re-entry from the context of proto-arithmetic and proto-algebra in which it stands in Spencer-Brown. Without further concern for the difficult arithmetic and algebraic operations in the Laws of Form, they adopt the basic idea that Spencer-Brown considers a movement which detaches itself from a system and can return to it by re-entry. When it returns, it is not only picked up again by the system; the re-entry induces an inner movement there and must – in Lacan's parlance – be sutured. What happens here in detail is examined more closely in the context of the system under consideration, be it a family, a company organization or society as a whole. In all cases, something can be pushed aside and isolated until it decouples. This causes a crisis both in the system and in the isolated element, which can be resolved by re-entry. Since Spencer-Brown was very interested in questions of psychology, I assume that he always had examples and applications of this kind in mind. In Laws of Form, however, he does not refer to them, but limits himself to questions of logic and mathematics. Two things are therefore important to me:

(i) To which mathematical questions does Spencer-Brown give a completely new and original answer? For me, these are above all the transition from natural numbers to transfinite numbers and the algorithms with which irrational numbers are approximated arbitrarily closely but never reached. In both cases there are processes which differ fundamentally from the usual arithmetic operations. A re-entry for Spencer-Brown is possible only with infinite processes and not with ordinary, finite arithmetic operations.

(ii) What requirements must be met for a re-entry to be possible in Spencer-Brown's sense? A re-entry is only possible for him if a figure repeats itself infinitely often and uniformly, while arithmetic basically works only with constructions that contain a finite number of steps (9th canon; LoF, 54). The transition to infinity succeeds only if variables are considered in algebra that are independent of their respective content, and if, thanks to the transparency within algebra, there is the certainty that there are no side effects and internal deviations in the infinite number of repetitions. The individual repetitions are completely independent of each other. Re-entry shows the originality (and thus the relative autonomy) of algebra in relation to arithmetic. In proto-arithmetic there is as yet no re-entry. On the contrary: Re-entry leads to »loss of connexion with arithmetic« (LoF, 58).

With the process that leads to re-entry, a kind of inner musicality is introduced into the order from which it emerges. It is not only a scar or a crack that is sutured at the place of re-entry; rather the system, previously examined only in its spatial, timeless relationships, is set into vibration as a whole (begins to oscillate), from which time emerges and with it a new order (emergence).

With the transitions to transfinite and irrational numbers are connected not only the two concepts of infinity common in mathematics (Hegel calls them simple – often, but for me not correctly, translated as »bad« or »spurious« – and true infinity), but also the two elementary properties of numbers known since Aristotle: the transitivity of the numbers arranged on the number axis and the continuity of the real numbers. (The less-than relation is an example of transitivity: If x < y and y < z, then x < z also applies.) Transitivity ensures that counting can continue indefinitely, and thanks to continuity the limit value of approximation processes to irrational numbers exists. If Spencer-Brown's idea of re-entry hits the heart of mathematics and of the philosophy dealing with mathematics, this should have fundamental consequences for the basic understanding of mathematics. These can only become visible if the re-entry is not seen merely as a general property of systems with feedback, but, with Spencer-Brown, more precisely as a specific interrelation of the internal arithmetic and algebra given with these systems.

– Processes in ordinary mathematics and in Spencer-Brown

In my opinion, models for the re-entry are the construction of the first transfinite number ω by Cantor and the limit process that leads to the approximation of irrational numbers such as √2.

Transfinite numbers When counting 1, 2, 3, … n, n+1, … the same successor relation from n to n + 1 is repeated infinitely often. Beyond all countable natural numbers lies the transfinite number ω. It can never be reached by counting and has no predecessor, but it can be formally defined as the result of the rule of counting as a whole. There must be an overarching number class (a ground) into which the limit value can be included by re-entry.

Where is ω located, and how is ω included in the set of numbers? The mathematician Riemann had found a very descriptive idea for this: A sphere is placed on the number line so that its center is identical with the number zero and its surface intersects the number axis at the two points −1 and 1. A point on the sphere can then be assigned unambiguously to each number by drawing the connecting line between the north pole and the respective number; it intersects the sphere at exactly one further point. For numbers outside the sphere (in the following figure these are the numbers greater than 1 or less than −1) the intersection lies on the upper hemisphere, for numbers inside the sphere (the numbers in the interval [−1, 1]) on the lower hemisphere. The north pole itself corresponds to the number ω, which is infinitely distant on the number line and is included in the number class of all points on the sphere by this construction. The 0 is mapped onto the south pole.

[Figure: Riemann number sphere]


Source: Wikipedia, retrieved March 4th, 2017
Original by Jean-Christophe BENOIST, own work (thanks to Friedrich A. Lohmüller for his POV library), CC BY-SA 3.0, Link

The picture clearly shows how ω is infinitely far away on the number line, but finds its place on the number sphere at P(ω). How is this possible? The number sphere implicitly introduced new properties of numbers that are not yet given in the number line: The number sphere has a second, independent dimension (its height, which can also be interpreted as an imaginary axis or as the time-axis), it is symmetrical to rotations (cyclic), and it is curved. These are properties that go beyond the previously known properties of numbers (transitivity, continuity) and already suggest descriptively that an extension and a new definition of numbers are necessary if processes are to be allowed that lead to a re-entry. (It will be shown further on how this image and these properties lead to Spencer-Brown's innovative theory of observers.)
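The construction can be written out for the cross-section of the sphere described above: a unit circle centred at 0, projected from the north pole (0, 1). The parametrisation is my own, derived from that description; it is a sketch, not part of Spencer-Brown's or Riemann's text.

```python
def project(x):
    """Map a point x of the number line to the point where the line
    from the north pole (0, 1) to (x, 0) meets the unit circle
    centred at the origin (stereographic projection, 2D section)."""
    d = x * x + 1.0
    return (2.0 * x / d, (x * x - 1.0) / d)

# project(0) is the south pole (0, -1); project(1) is (1, 0), where
# the circle meets the axis; as x grows without bound, project(x)
# approaches the north pole (0, 1), the image of the infinitely
# distant point.
```

Every finite number receives exactly one point on the circle, and the north pole remains as the one point that no finite number reaches: the place reserved for ω.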

Irrational numbers For irrational numbers like √2 there is no finite or periodic decimal representation. √2 is approximately 1.414… and can only be determined more accurately step by step. With each step, the interval in which it must lie is defined more precisely. For example, the expression 1.414… says that it lies somewhere in the interval [1.414, 1.415]. And just as Cantor had to find new symbols such as ω for the transfinite numbers, a symbol of its own had to be agreed for the irrational numbers: the root sign √.
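The step-by-step narrowing of the interval can be carried out with exact integer arithmetic (a sketch of my own of the interval-nesting idea, not Spencer-Brown's procedure):

```python
from math import isqrt

def sqrt2_interval(k):
    """Return integers (n, n + 1) with n/10^k <= sqrt(2) < (n+1)/10^k,
    i.e. the interval fixed by the first k decimal digits of sqrt(2).
    Pure integer arithmetic, so every single step is exact."""
    target = 2 * 10 ** (2 * k)
    n = isqrt(target)          # largest n with n * n <= target
    return n, n + 1

# sqrt2_interval(3) yields (1414, 1415): sqrt(2) lies in [1.414, 1.415]
```

Each call with a larger k repeats the same formally identical step one decimal place deeper; no finite k ever yields √2 itself, only the process as a whole defines it.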

Spencer-Brown mentions the irrational numbers in the Notes when he speaks there of equations of second degree. He demonstrates this with the square function x², or in his calculus aa = a, and then carries it out for the elementary polynomial of second order. (LoF, 97f)

What happens at the two transitions to the transfinite and irrational numbers? It is no longer an ordinary arithmetic operation. An arithmetic operation like ›7 + (10 − 3) / 91‹ can be arbitrarily complicated. Ultimately, however, it is always a finite string consisting of operands and operators (in the simplest case the natural numbers as operands and the usual rules of calculation +, −, · and / as operators), which leads to a clearly defined rational number. With infinite incrementing and with the approximation algorithms for determining irrational numbers, on the other hand, an arithmetic operation is repeated (iterated) infinitely often. Only the repetition (iteration) creates a process. – Mathematics has so far not noticed – or does not want to notice – that the process of repetition implicitly involves time. To this day, mathematics insists that its formulas are basically timeless. It was Spencer-Brown who made the long overdue turn and consistently developed how time is implicitly contained in the process leading to re-entry. This will be explained in more detail below.

Arithmetic operations do not leave the respective number class. An operation returns a number and can be identified with it: It is the same to say ›12‹ or to say ›5 + 7‹. With processes, on the other hand, a new number is defined that was not yet contained in the number class previously considered. Neither is ω a natural number nor is √2 a rational number. With them a new concept of number emerges: Numbers are all elements with which one can calculate as with natural and rational numbers. What a number is and what belongs to the totality of numbers is opened up step by step. The idea of number is in itself infinite, and it can be assumed that ever new processes will be found to recognize new numbers with properties that were previously unknown.

Echelon without limit How can Spencer-Brown transfer the infinite processes of counting or of approximation to an irrational number into his calculus? In proto-arithmetic there is no successor relation as in the natural numbers, but a variety of possibilities to convert one sign into another. When Spencer-Brown speaks of calculation, he means conversion. Calculating with natural numbers is for him one example of transformations, which he generalizes in his logic. In order to find a process which corresponds to the infinite counting of the natural numbers, a transformation must be determined which, like counting, can be repeated infinitely often, whereby all individual steps are formally the same (self-similar). In proto-algebra, Spencer-Brown constructs a rule which, comparable to the continuously repeated (iterated) successor relation from n to n + 1, can be continued arbitrarily and for which, analogous to the successor relation, an infinitely far element can be defined and included in a larger number space by re-entry: He explains how a sequence of 5 single steps can be applied to the string e1 pre whereby the string doubles. (These are in his terminology the 5 single steps C5 iteration, C1 reflection, J2 distribution, C4 occultation, C1 reflection; LoF, 55.) If this sequence is executed, the original sign is appended again at the end, with the result (LoF, 56):

    e1 pre ⇀ e1 step2.

The symbol ⇀ indicates that both signs (forms) have the same value. The same sequence of 5 single steps can again be applied to the new sign (form), whereby it is extended again. In this way, a sequence of steps is achieved that corresponds to the successor relation and can be repeated infinitely. The result for Spencer-Brown is »an echelon without limit«, which is derived from the »Consequence C7 Echelon« proven for proto-algebra (LoF, 34).

»The next decisive step is to imagine, if possible (let us imagine, if we can, LoF, 55), that the process of this sequence of steps continues end- or timelessly. In space this would result in an echelon of a's and b's without limit. [...] Since the determination of the value of an expression is bound to the rule of finite demonstrability, no statement can be made about the value of an echelon without limits.« (SWH, 175, my translation)

With these steps not all signs (forms) of proto-algebra are enumerated and put in an order, but only a certain subset. It is to be assumed that there are infinitely many other possibilities of forming similar infinite sequences from the inexhaustible reservoir of forms. However, it is sufficient to have found an example as simple as possible and to use it to introduce re-entry into proto-algebra.
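The unbounded growth of the echelon can be mimicked with a toy bracket notation. The parentheses standing in for crosses and the single wrapping function are my own assumptions; the concrete sequence of steps C5, C1, J2, C4, C1 is not reproduced, only its net effect of wrapping the whole expression once more with a and b:

```python
def echelon_step(e):
    """One pass of the step sequence: the whole expression reappears,
    wrapped with a and b one level deeper.  '(' ... ')' stands in for
    a cross (an assumed notation, not Spencer-Brown's)."""
    return "((" + e + "a)b)"

e = "x"
for _ in range(3):
    e = echelon_step(e)
# e == "((((((xa)b)a)b)a)b)"
```

The self-similarity the construction relies on is visible in the strings themselves: stripping the outermost wrapping of the result gives back the expression one stage earlier, so the crossed part at even depth repeats the whole.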

As a result, Spencer-Brown writes down a diagram with which the re-entry is defined:

[Figure: Spencer-Brown's re-entry process]

E1 Re-entry
(LoF, 56)

The diagram E1 is, unusually, to be read block by block from right to left. Spencer-Brown considers in E1 the stepwise crossing from a to b to a to b. a and b stand as variables of proto-algebra for values that can be inserted from proto-arithmetic: In Spencer-Brown's calculus these are only the cross and the ock. The transition from a to b is repeated as often as desired. For the repetition Spencer-Brown introduces, first with the three dots … and then with f, two new symbols which until now were known neither in his proto-arithmetic nor in his proto-algebra. He takes over the three dots from the corresponding processes known from ordinary arithmetic, where, beyond n and n + 1, counting is continued without limit or an irrational number is approximated arbitrarily exactly.

The symbol f describes the result of a limit crossing. For Spencer-Brown, a can be regarded as an input that can be saturated with the two values possible from proto-arithmetic: the cross and the ock. For this input a rule is found which is repeated infinitely often. The result is an output f which cannot simply be re-entered into the same set of numbers from which the input a came, but which expands this set in a similar way as the transfinite numbers expand the set of natural numbers and the roots expand the set of rational numbers.

By f is meant both the function (the algorithm, the process) with which a limit value is approximated in infinitely many steps, and the result of the process (this limit value itself). Spencer-Brown here consistently carries through his idea of introducing symbols which, in a double meaning, can be both an operation and the result of that operation.

The summary of the infinite number of crossings from a to b to a to b etc. in f is bound to three conditions:

(a) »a clearly defined sequence of steps, so there is a clear construction instruction«,
(b) it »shall be continued continuously«,
(c) it contains a »pattern of self-equality« (SWH, 175, my translation).

»The key is to see that the crossed part of the expression at every even depth is identical with the whole expression, which can thus be regarded as re-entering its own inner space at any even depth.« (LoF, 56)

The symbol f recalls the functions f(x) introduced in ordinary mathematics. f is defined by the value progression of an infinite sequence of formally identical steps that approach a limit value. But unlike the usual function f(x), f is at the same time the limit value itself. Like the transfinite number ω or the root √2, f is a limit value which was not present in the previous character set. f is neither the cross nor the ock. With f the set of signs (forms) in proto-arithmetic is extended in a similar way as the character set of the natural numbers is extended by the transfinite numbers and that of the rational numbers by the roots.
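How a self-referential equation defines, as the limit of infinitely many formally identical steps, a number missing from the original domain can be seen in a classical illustration of my own choosing (not an example from Laws of Form): iterating x = 1 + 1/x, whose solution is the irrational golden ratio.

```python
def approximate_fixed_point(step, x, iterations):
    """Repeat one and the same step; the limit, not any single
    finite stage, is the value of the process."""
    for _ in range(iterations):
        x = step(x)
    return x

# x = 1 + 1/x has the irrational solution (1 + sqrt(5)) / 2; every
# finite stage is rational, only the process as a whole is not
phi = approximate_fixed_point(lambda x: 1.0 + 1.0 / x, 1.0, 50)
```

Here the double meaning of f is tangible: the rule ›1 + 1/x‹ is the process, and the fixed point the rule approaches is the value, which lies outside the rational numbers from which the iteration starts.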

Spencer-Brown formulates this negatively: there is no well-defined place for the re-entry. Within the given calculus it is indeterminate where it can be taken up (»indeterminacy«, LoF, 57). In this context, Spencer-Brown describes the transition from a to f in greater generality as the transition from an expression e to an expression e', which is to be included by re-entry:

»We have just introduced to e' a degree of indeterminacy in respect of its value which is not (as it was in the case of indeterminacy introduced merely by cause of using independent variables) necessarily resolved by fixing the value of each independent variable.« (LoF, 57)

There is no value for f in the initial set into which it could be included. Therefore, there are only two possibilities: either the initial set is extended to accommodate the re-entry of f (following the example of ω and √2), or f wanders around and cannot find a defined place. This indefinite wandering will lead Spencer-Brown to the oscillation with which he introduces the concept of time. (This is how I understand his path from re-entry to time. In a final step, he will extend the path from a form a to a limit f into circuits that convert an input a into an output f.)

It is therefore misleading to interpret the re-entry as if a process somehow emerged from a base set, separated itself from it, and finally delivered a value that leads back into the base set and coincides with a value already present there. Instead, the re-entry of f extends the already known set of signs (forms). The old rules of the base set remain valid, however, and it must be ensured that one can calculate with f as with all other signs. For Spencer-Brown this is guaranteed in his calculus: »It is evident that J1 and J2 hold for all equations, whatever their degree.« (LoF, 57)

More accurate is the comparison of f with the imaginary numbers. To understand this, two steps are necessary. In a first step, the set of all arguments is to be extended into a set in which all value courses of the functions operating on the base set can be represented. Based on this, the second step is to understand how to get from the set of all arguments a to the set of all functions f operating on them, and how to return from there to the set of arguments by a re-entry.

(i) Transition from arguments to value courses: On one axis all signs (forms) of proto-arithmetic are arranged, and on the axis perpendicular to it all signs (forms) into which they can be transformed, just as in ordinary mathematics all arguments are plotted on the x-axis and all function values y = f(x) on the y-axis. This results in a two-dimensional tableau in which the value courses can be entered.

(ii) The value courses can be considered for themselves. Each value course f is a mathematical object in its own right, and the set of all value courses f can be examined. The set of value courses f is independent of the set of arguments x on which the functions f operate. (Scheier speaks of a difference between the functions and their arguments [Scheier, Luhmanns Schatten, 67].) I understand Spencer-Brown's fundamental thought to mean that there is a way by which the set of arguments opens out into the set of functions perpendicular to it. This is a movement which leads from the set of arguments into the set of functions and, through a re-entry, from there back into the set of arguments. This is only possible if the arguments can be regarded as rudimentary functions (by understanding each argument a as an identity function a = a), and if there is an operation in the set of all functions which leads back to the set of arguments. If I understand it correctly, Spencer-Brown sees a model for this in the imaginary numbers. Just as the rules of complex algebra lead with the imaginary numbers to a rotation, which goes from the real numbers to the imaginary numbers and from there back to the real numbers, there is for him a way from the arguments a to the functions f and, through a re-entry, back into a new argument. This thought is only hinted at by Spencer-Brown, and it seems to me a fruitful task to formalise it more rigorously. It is a completely new view of the mathematical theory of function spaces. A further thought could be to embed Hilbert space into another space which preserves the properties of Hilbert space, but is based on the proto-arithmetic introduced by Spencer-Brown and, with the re-entry, makes new properties intelligible which are not given in the usual Hilbert space.
Since Hilbert space is also the mathematical tool of quantum theory, a new view of quantum theory could result, which might resolve its known aporias in a similar way as Spencer-Brown resolved Russell's paradox.
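The rotation invoked here can be shown concretely with Python's built-in complex numbers (my own illustration of the analogy, not a formalisation of Spencer-Brown's calculus): multiplying by the imaginary unit carries a real number onto the imaginary axis, and a second multiplication carries it back onto the real axis.

```python
# Rotation by the imaginary unit i (written 1j in Python): each
# multiplication by 1j rotates a number by 90 degrees in the complex plane.
z = 2.0
z1 = z * 1j   # purely imaginary: 2j  (the value has left the real axis)
z2 = z1 * 1j  # back on the real axis, negated: -2
z4 = z2 * 1j * 1j  # four crossings restore the original value: 2
```

Two crossings return to the real numbers (with a change of sign), four restore the starting point; this is the fixed rule that makes the return from the "imaginary" space well defined.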

For Spencer-Brown, Russell's paradox exists only at the level of the set of all arguments and is unsolvable there. It becomes solvable for him only with the transition to the imaginary numbers, and he gave the basic idea of how this is possible in the already quoted preface to the American edition of Laws of Form. The arguments a and the functions f may stand in a relationship similar to that of the real and the complex numbers. As shown, the transition into the imaginary space of all functions is not arbitrary, but bound to fixed rules. Thanks to these rules, a return to the space of arguments is possible, generating new arguments there; in this way the paradoxes can be avoided which Russell and Whitehead sought to exclude with a brute-force method, so to speak: the prohibition of negative self-referential, impredicative judgements (the vicious circle principle). Re-entry takes the critical place of negative self-reference and should therefore enable the often demanded formalization of the dialectical method, if that method is defined by its negative self-reference. (There have been many attempts to find a formalization for Hegel's method of Aufhebung [sublate, abolish, preserve, transcend]. In my opinion this is possible on the path opened by Spencer-Brown, with which he went far beyond the ideas of Gotthard Günther, who pursued a similar goal.)

It is certainly still a long way to go to work out these ideas. Before this can be started, further ideas from Spencer-Brown need to be understood in order to overview his intentions in their full extent. These are his comments on the introduction of time and the circuits he designed.

– Waves and their time (Time)

With the 3 dots ‘…’ and the variable f Spencer-Brown obviously orients himself on symbols known from ordinary mathematics. The peculiarity of his calculus, however, lies in the fact that instead of the usual numbers and operation signs he knows only the two signs  ⃧   and Ɔ for cross and ock, which he uses in the double meaning of operand and operator. Thanks to this simplification he succeeds in showing properties of limit crossings (Grenzübergänge) which go far beyond the known mathematics. This leads to a shift in perspective in two steps:

– For him, the smallest units of the limit crossing are no longer the points along which the underlying process runs until it determines the limit value as a new point – as is known from the example of Achilles approaching the turtle in front of him point by point –, but the shape of the limit course as a whole. For him, that is the wave. What leads him to this? If the limit value f wanders around at the moment of re-entry and finds no fixed place, this can be understood as a swinging motion that oscillates continuously between cross and ock. The result is a wave, where the wave crest stands for the cross and the wave trough for the ock. There are no other signs in his proto-arithmetic. Limit crossings can either move directly to a certain point or oscillate constantly between the two available values. Spencer-Brown anticipates what was later explained in more detail in catastrophe theory with its bifurcations and attractors, and in a sense describes their innermost logic.

– When, with the image of the wave, he departs from the traditional symbols of the limit crossing and takes a new view of its course, he is able to nest limit crossings for the first time. This does not mean that limit crossings are considered uniformly in several dimensions, as with multiple integrals, but that within a limit crossing a contrary limit crossing of second or higher order can occur (he speaks of equations of second degree). From the interrelationship of two limit crossings, time arises for him.

The wave representation and the time contained in it are introduced in a complex diagram which is to be read and interpreted line by line:

spencer brown time

Time as a Key
Source: LoF, 63, Figure 3

In the first line the echelon without limit is drawn with the variables a and b. Its result is the output f. In the following lines, special values are substituted for a and b. Here m and n do not stand for new variables: m (marked state) refers to the cross, n (unmarked state) to the ock. The two constants m and n were first introduced for the rule of dominance (LoF, 15) and then taken up again in LoF, 56. The accumulation of names and symbols for cross and ock is confusing at first glance. Although Spencer-Brown has good reasons to emphasize certain aspects with these different names and representations, it would be easier to stick as far as possible to the two original signs, cross  ⃧   and ock Ɔ.

In the second line, the ock (n, unmarked state) is substituted for both a and b. The continuous crossing of the infinite echelon e₁ starts with the unmarked state (wave below), changes to the marked state (wave above) and so on, until after an odd number of steps the unmarked state n is reached again.

In the third line the cross (marked state, m) for a and the ock (unmarked state, n) for b are inserted. In this case the wave starts at the top with the marked state and ends after an odd number of steps back at the top with the marked state.

The decisive step takes place in the fourth to sixth line: Waves can overlap each other. Spencer-Brown vividly speaks of a »dominant pulse« and a »short pulse« (LoF, 63, 64). The dominant pulse describes a fundamental which is preserved, and the short pulse runs recognizably through the fundamental. Lines 4 to 6 show how the run through can be read at the three points in time t₁, t₂, t₃.
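The relation of the two pulses can be sketched as two square waves of different frequency (my own illustration; the XOR overlay used to combine them is an assumption, not Spencer-Brown's construction):

```python
def square(period, length):
    """Square wave as a list of states: the first half of each period
    is a crest (True, cross), the second half a trough (False, ock)."""
    return [(t % period) < period // 2 for t in range(length)]

dominant = square(8, 16)  # slow fundamental: the dominant pulse
short = square(2, 16)     # fast pulse running through it: the short pulse
# One possible overlay: the short pulse runs recognizably through the
# fundamental, which is preserved underneath it.
combined = [d != s for d, s in zip(dominant, short)]
```

Only the faster, countable short pulse provides a measure against which the dominant pulse can be read off, which is the point Lau makes about the "first time" below.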

With the nesting of contrary limit crossings, time is created within logic. For him, the first limit crossing is spatial (as depicted in the spatial figure of infinite staggering), while the second is temporal.

»Since we do not wish, if we can avoid it, to leave the form, the state we envisage is not in space but in time.« (LoF, 58)

Without the distinction between dominant pulse and short pulse, time would remain incomplete. It would remain at a level that Felix Lau aptly describes:

»In his review of the Laws of Form, Heinz von Foerster euphorically pointed to the discovery or development of time from the axioms. (Lau, 93) [...] Oscillation is not accompanied by a measure that is indispensable for our common concept of time. This 'first time' has no duration, no measure; it is merely the change, the back and forth between the states. Oscillation does not yet have a speed for which units of space and time are needed.« (Lau, 94, my translation)

Note: Only by means of the short pulse can a measure of time be described. For me, dominant pulse and short pulse can be understood as the background noise of a system (system dynamics, dominant pulse) and movement within the system (short pulse). Spencer-Brown, however, does not consider their relationship further. In extension of his work, a common inner measure could be found for both pulses, based on mechanics and the square. (See, on the concept of the square and the square of velocity, the two articles Force, Impulse, Energy and Dynamic Numbers.)

This corresponds to the classical idea of time and is brought to the most general possible level of logic by Spencer-Brown: Time and motion can only be described if a single object moves against a background that is either resting or moving uniformly, such as the motion of the moon against the background of the starry sky.

Spencer-Brown is aware that his symbols ‘…’ and ‘f’ are misleading for re-entry. He therefore forms his own symbol for the re-entry with great suggestive power:

spencer brown reentry     spencer brown modulator

The symbol shows the movement that runs back into itself. Spencer-Brown was convinced that with this discovery he had cut the Gordian knot he was looking for. He found a uniform formalization of limit crossings, which can also be nested within each other. With them he developed the imaginary numbers and, within logic, time in its distinction from space. And he is able to integrate complex oscillating figures into circuit diagrams, which are already very close to the kind of programming he was doing in technical applications.

Remark: In another vivid picture he understood the re-entry as tunneling. This is already shown by the symbol when the new character undercuts the two variables a and b and returns to the beginning as if through a tunnel. A further commentary explains how a deeper understanding of reason and medium can be found with the tunnel and the tunneling.

– Inertia and self-organization of circuits

In a final step, Spencer-Brown extends the algorithms to circuits. This step does not come out of the blue, nor is it merely a surprising reminder of his time as a circuit designer; it consistently continues the consideration of the relationship between arguments a and the functions f operating on them. In his calculus, he is concerned only with those functions f which, following the model of the successor relation, uniformly repeat a transformation until they lead to a limit value (in the language of Heinz von Foerster, their eigenvalue). After showing how time can be introduced within logic from the oscillating movement at the moment of the re-entry of f, he generalizes this in a final step by looking at networks that transform an input a into an output f. The transformation should again take place uniformly (self-similarly). Therefore he speaks of a modulation of a wave into a new wave. He can refer both to technical devices which modulate sounds, for example in sound engineering, and to a basic idea of mathematics, since continuous functions can be approximated arbitrarily closely by wave functions (Stone–Weierstrass theorem, whose original version was proved in 1885 by Karl Weierstrass [1815-1897]).
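The approximation of a function by superposed waves can be sketched with the Fourier partial sums of a square wave (my own illustration of the Weierstrass idea; the function name is hypothetical):

```python
import math

def square_wave_partial(t, n_terms):
    """Partial Fourier sum of a square wave: superposing n_terms odd
    harmonics approximates the jump between crest (+1) and trough (-1)."""
    return (4 / math.pi) * sum(
        math.sin((2 * k - 1) * t) / (2 * k - 1)
        for k in range(1, n_terms + 1)
    )

# Near the middle of a crest the sum approaches +1 as terms are added;
# near the middle of a trough it approaches -1.
```

Each partial sum is itself a superposition of simple waves, so even the jump between the two discrete states cross and ock can be approached through waves alone.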

Even if this is not immediately visible, this is the most radical step of his logic: with it he not only leaves the field of classical logic, but also that of human language and its symbolism in general, and relies decisively on the interaction of logical ideas with the inherent dynamics of the technical systems designed with their help. The circuits he sketches deliver the desired results only in their realization, thanks to the physical inertia and self-organization given with them. This, however, can only be seen indirectly from the circuit diagrams he presents. Circuits are certainly designed and constructed more precisely today than in Spencer-Brown's time, and it would be wrong to assume that his designs reflect the current state of the art. But it would be equally wrong to question his approach fundamentally from today's standpoint. With his formalized schematic diagrams, he reaches a level that shows, within logic, the basics from which both the electrical circuits and the neural networks emerge which are used by neurophysiology and the learning systems of artificial intelligence. Although I am not aware of any direct references from Spencer-Brown to McCulloch and other authors of cybernetics, neurophysiology and neural networks, who published their most important works between 1930 and 1960, in my view Spencer-Brown is unmistakably to be seen in the new horizon they opened up. His logic stands to them in a relationship similar to that in which Leibniz and Newton founded the differential calculus on the experiences of classical physics, or in which Aristotle, more aware than almost any other philosopher of his historical position, stood against the background of Greek natural philosophy. With the schematic diagrams Spencer-Brown leaves the one-dimensional reasoning that dominated until Frege, and with the circuits finds an approach to understanding the inner structures in the transition from ground to form.

With the circuits, he designs a kind of experimental logic that combines its ideas with the material and the substances it processes, much as the chemists and the early engineers and programmers of the first circuits did, and as can be seen today in the design of artificially intelligent systems and their self-learning processes. Initially, logic relied with its first technical designs and their realization on as yet unknown material properties, which were only clarified in more detail in the course of further investigation and experience. With the help of logic, technical ideas are developed whose course and outcome can initially only be predicted with uncertainty. It no longer holds, as it does for Kant, that »the object conforms to the nature of our faculty of intuition« (Kant, KrV, B XVI, trans. J.M.D. Meiklejohn); rather, both are of equal importance and mutually condition each other. Anyone who, like Spencer-Brown, designs a circuit cannot be sure at the beginning whether and how it will work. It is known from chemical research that most discoveries came unexpectedly and surprisingly, and Spencer-Brown sees the design of circuits in the same way: he carefully searches for the basic logic of this approach, which feels its way forward into an as yet unknown area. It is not simply a matter of groping forward, but of initiating processes whose outcome is uncertain, and the researcher's new findings depend on how attentive he is to unexpected courses of the events he initiates. This brings logic into a completely new framework and, in my opinion, was the original aim of Spencer-Brown's own experience in circuit design. For me, it is of the same fundamental importance as Frege's Begriffsschrift, with which the linguistic turn was founded.
When the Laws of Form reach the design of circuits, Spencer-Brown no longer seeks an understanding of how the language of a researcher's monologizing thought can be described grammatically or syntactically, but rather of the language that develops in the process of the researcher's interaction with his object. The schematic diagrams he designs contain a degree of freedom that gives sufficient leeway to the actual process of the technical systems realized with them. Anyone who has done programming or worked with engineers will at some point be surprised by the experience that their own programs and technical systems are used completely differently and show properties that the developer did not originally expect.

The symbols and diagrams introduced by Spencer-Brown are so unusual that it takes some effort and concentration to understand his concern. Those who are afraid of all graphic representations that have to do with mathematics and technology must overcome a hurdle similar to that which was necessary when the spontaneous narrative flow of people was formalised for the first time in grammatical rules and logical inferential forms.

Circuits are used to look at technical systems that may be in different states. Each new input can put the circuit into a new state. Just as proto-arithmetic has infinitely many forms that can never be completely surveyed, no one can completely predict what input a machine will be provided with and what state this input will put it in. With the transitions of a circuit from one state to another, there is a third, independent axis in addition to space and time (Hegel speaks of a nodal line, Knotenlinie). During the formation of a new state, two things happen: on the one hand, there is a certain inertia to maintain the old state as far as possible. This also applies to the observer, who orients himself as far as possible by what remains and by which changes occur. On the other hand, self-organization (self-optimization, balancing) can take place: in the new state, the largest possible coherent sub-areas are formed, which are internally consistent and connected to each other by contradiction-free paths. Spencer-Brown only casually hints at the introduction of a new dimension and does not investigate its consequences further: »We are now in difficulties through attempting to write in two dimensions what is clearly represented in three dimensions.« (LoF, 66)

Remark: The relatedness of a circuit to its states can be compared to the relatedness of a substrate to its states. Standard example is the substrate water with the states ice, liquid and gas. With each change of state a new order is created. The transition from one state to another is understood as emergence. The respective order can be explained by a self-organization of the substrate in the new state. I can imagine that it should be possible to continue Spencer-Brown's explanations in this direction and to give a logical basis to the theory of self-organization and emergence.

How this is meant, however, is by no means self-evident. As an example, Spencer-Brown chooses a circuit with which a wave is modulated. The modulation changes an incoming wave (the input a) into two outgoing waves (the outputs f and p), which each have only half the frequency but are shifted in phase against each other. In order to describe the task better, I have not only taken over the names of input and output but – following Kauffman's further explanations – named all markers (switching elements) with letters. In contrast to Spencer-Brown, the arrow for the output p is also added: just as the switching element e leads to the output f, so the switching element j leads to the second output p.

spencer brown schaltkreis

E4 Schematic diagram of a reductor (modulation)
The reductor reduces (modulates) the frequency of a wave a by half and generates with f and p two variants of a wave of half the frequency.
Source: LoF, 67 with own addition of the names for the markers and the arrow to the second output p

The circuit contains one input a, two outputs f and p, as well as the 8 switching elements (markers) b, c, d, e, g, h, i, j. Each marker inverts the incoming signal into a predefined value: in Spencer-Brown's calculus this can only be a cross or an ock, whereby in this circuit diagram the cross is represented as a wave crest and the ock as a wave trough. If a wave crest encounters a marker that converts into a wave trough, the conversion can take place. If, on the other hand, a wave crest encounters a marker that converts into a wave crest, the conversion is not possible and the path is blocked at this point by the marker.
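This blocking rule can be written down directly (a minimal sketch; the function name is my own, and True/False stand for crest/cross and trough/ock):

```python
def pass_marker(signal, marker_value):
    """A marker converts the incoming signal into its predefined value.
    The conversion only takes place if it actually inverts the signal;
    a signal meeting a marker of the same value blocks the path."""
    if signal != marker_value:
        return marker_value
    return None  # blocked: a crest cannot be 'converted' into a crest

# A crest (True) passes a trough-marker (False) and leaves as a trough;
# a crest meeting a crest-marker is blocked (None).
```

The rule is what makes the available paths state-dependent: which markers can be passed changes with every stroke, as the walkthrough below shows.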

The task of the circuit is clearly visible and intuitive: an incoming wave a is to be converted into an outgoing wave f of half the frequency. But what happens at the individual switching elements? Here spontaneous intuition is misleading: the complete incoming wave is not transformed step by step at each switching element into another wave, as the circuit diagram suggests, until the output f (or p) is reached. Rather, to understand the schematic, a hidden third dimension must be recognized. Spencer-Brown explained this later in an unpublished text. Above all, however, Louis Kauffman succeeded in clarifying the matter.

The incoming wave a does not enter the circuit all at once, but step by step. In this example it is composed of 4 elements (4 elementary inputs): wave crest – wave trough – wave crest – wave trough. Each element of the incoming wave enters the circuit individually. Spencer-Brown therefore speaks of 4 strokes (impulses) (R, 10). For each individual stroke, we have to consider how it is converted at the individual switching elements and into which state it puts the circuit. Once this idea is understood, the following picture shows what happens at the first stroke. For this, the first section of the waves drawn in the diagram must be evaluated. In the graphical representation I follow Kauffman 2006. For the individual strokes he represents the wave crest (cross) by a full circle and the wave trough (ock) by an empty circle. The result is very vivid, because it shows how each switching element converts full circles into empty circles and empty circles into full circles.
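The net behaviour of the reductor, seen only at input and output, can be sketched as a toggle (a behavioural sketch of the input–output relation, not Spencer-Brown's gate-level construction; the function name is my own):

```python
def halve_frequency(strokes):
    """Behavioural sketch of the reductor: the output state flips on
    each rising edge (trough -> crest) of the input, which halves
    the frequency of the wave."""
    out = []
    state = False  # initial output state, chosen so that the first
    prev = False   # stroke yields a crest, as in figure E4-1 below
    for s in strokes:
        if s and not prev:  # a rising edge arrives
            state = not state
        out.append(state)
        prev = s
    return out

# crest - trough - crest - trough becomes crest - crest - trough - trough:
# halve_frequency([True, False, True, False]) -> [True, True, False, False]
```

The internal walkthrough in the following figures shows how the circuit of markers achieves exactly this input–output relation, stroke by stroke.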

spencer brown schaltkreis stroke 1 z

E4-1 Possible paths after the first stroke

This complex diagram is to be explained step by step:

– With continuous green lines, the two simplest possible paths are highlighted that lead from the input to the two outputs for the first stroke. These are the paths:

a → b → c → d → e → f
This path was probably the starting point from which Spencer-Brown constructed the schematic.

a → g → j → p
This path leads directly to the second output.

The two markers h and i are not used yet. They are only necessary during the following steps.

– In addition, further paths are possible, highlighted in green with very fine dashes. For example, it is also possible to reach the output f in another way:

a → g → j → h → e → f

– And feedback loops are possible, marked with green dashes.

d → e → d ... and i → j → i …

Loops of this kind constantly revolve around themselves and never reach the output. Spencer-Brown calls them memory, because in their circular course the same value always returns anew. To reach the two outputs f and p, these loops are not necessary. I suspect that Spencer-Brown attached them in a manner similar to the homeostat introduced by Ross Ashby in 1948, to which memory units are attached in order to stabilize the system. The homeostat was intensively discussed in the 1950s and seen as a guide to the design of novel systems.

e → b
This jump back enables further loops. It is just as unnecessary as the memory units. While the memory units are reminiscent of Ashby's homeostat, the jump backs recall the feedback loops of cybernetic systems. Spencer-Brown's schematic thus contains all the ideas that revolutionized scientific research at the time. He gives no indication as to whether and how homeostatic or cybernetic systems can be systematically derived from his schematic, but he has obviously prepared this possibility.

In summary: there are many possible paths and internal loops for the first stroke. The actual paths taken are found according to the principle of the shortest path. For me, the example of the first stroke already provides two essential insights:

(i) Spencer-Brown continues the basic idea of Frege's Begriffsschrift graphically. He does not limit logic to one-dimensional chains of inference, but designs two-dimensional chains of inference with bifurcations and alternatives.

(ii) He succeeded in transferring the principle of the shortest path known from classical physics into a finite (discrete) representation and in breaking it down to the level of elementary logic. Classical physics considered continuous movements of mass particles moving through time and space. It can be imagined that the reductor, following this principle, automatically finds the shortest path, which can be called a self-organization.

The paths possible for the first stroke are no longer possible for the second stroke. Example: at the switching elements a, b, c and d, empty circles would follow one another, which is excluded. Instead, the following paths are found for the second stroke:

spencer brown schaltkreis stroke 2 z

E4-2 The two ways after the second stroke

In order to achieve the desired output at f, the new element h must be added to the switching elements used in the first stroke. Since a full circle has already been reached at g, it must first be converted into an empty circle at h before the full circle desired at the output f is reached from h to e.

This is how the two paths result:

a → g → h → e → f
a → g → j → p

Further paths are not possible for the second stroke, because the path from a to b is blocked and only the path from a to g can be used. The jump back from e to b does not help either, because from there all possible paths are blocked at some point. The second stroke is therefore a critical section in this circuit.

The second stroke shows the third essential insight:

(iii) It is not possible to go the same way for all strokes. Instead, each stroke changes the state of the circuit diagram. Each new stroke must find its own way.

This becomes even clearer with the third stroke. It might be expected that the third stroke simply repeats the first, because with a a wave crest enters the circuit again. If this were the case, however, the target would be missed, because then a wave crest (full circle) would again be generated at f. Instead of generating a wave of half the frequency, a wave would be generated that consists of nothing but wave crests.

spencer brown schaltkreis stroke 3 z

E4-3 The two ways after the third stroke

Since in this state it is not possible to go from b to c or from g to j, the element i is added. After j, the two paths branch to the two desired outputs.

a → b → i → j → h → e → f
a → b → i → j → p

The presentation of further possible paths is omitted for the third and fourth strokes.

For the fourth stroke, the simplest path can again be selected for the first output at f. For the output at p, a branch to b must be made. This results in the two paths:

spencer brown schaltkreis stroke 4 z

E4-4 The two ways after the fourth stroke

These two paths are already known from the first and third strokes.

Spencer-Brown calls a detour such as the one from b via i to c, or from g via j to h, the introduction of an imaginary axis. The direct connection must be abandoned, and a new (imaginary) axis must be opened in order to draw the roundabout way. The return to the original chain of inference can be explained by the rotation of the imaginary numbers.

Changes of state and self-organizations of this kind are not possible in ordinary mathematics. They must build up over a short period of time, whereas conventional mathematics assumes that everything takes place within a single instantaneous moment. Mathematics knows no equations that contain errors and resolve themselves in the course of temporal processes. Spencer-Brown's logic, on the other hand, introduces with the circuits a medium with these capabilities. It interacts with the medium and relies on its inertia and self-organisation to contribute to the solution of the task. This is a fundamentally new approach to scientific work, which prompted Spencer-Brown to shift from Western to Buddhist thinking.

»Mathematically, since there is no time in mathematical expressions, everything happens all at once, the short signal from i to j is never there, since there is no time for it to exist. The propagation-time of the signal through the leads and markers has not just become very small: in mathematics it has vanished completely, and this [is] why we say the signal is ‘imaginary’. But like the other imaginary i in numerical algebra, which has similar 'magical' characteristics, the fact that it exists in the mathematical circuit, not as a reality, but as a threat, should j even consider doing the wrong thing, is sufficient to keep j honest. The threat never has to be carried out in reality, in fact mathematically it cannot be. But its mere presence is sufficient. It is what we mean, in chess, when we say 'the threat is more powerful than the execution'.« (R, 9)

Is there an explanation for this peculiar behavior? The schematic shows only the result, but gives no explanation. Apparently it was an empirical observation: the circuits designed and realized by Spencer-Brown in 1961 behaved in a way he could not explain. Kauffman has investigated the question further in several papers (and has even succeeded in designing new circuits with the same behaviour, entering into a kind of friendly competition with Spencer-Brown over who can construct the simplest circuit with the desired properties). For him, too, however, it remains »a black box B with input line z and output line d with this behaviour« (Kauffman 1995, 92), whereby, in contrast to Spencer-Brown, he designates the input with z instead of a. For him, these states are caused by time delays due to the technical conditions of the circuit, which, however, are not explained further.

»As a model for action we assume that each change in z is held fixed long enough for the automaton to accomplish its transition to the next state. In terms of applications this means that the model assumes delays associated with each inverter. There are no delays associated with the connecting lines in the graph. This method of distributing the delays is a mathematical abstraction, but it is sufficiently realistic so that these circuits can actually work at the hardware level. In any given instantiation the delays are given up to the variation in the components. If the automaton is mathematically determinate (as in this example), then it will behave in the same way for any choice of actual delays – so long as the input varies more slowly than the time needed for internal balancing.« (Kauffman 1995, 91f)
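Kauffman's model — a delay at every inverter, none on the connecting lines — can be made concrete in a small simulation (my own sketch, not Kauffman's code). A ring of inverters updated synchronously shows both behaviours at once: a ring with an even number of inverters admits a stable alternating state, while a ring with an odd number has no stable state at all and must oscillate.

```python
def step(ring):
    """One synchronous clock tick: every inverter (one unit delay each)
    outputs the negation of its predecessor in the ring."""
    n = len(ring)
    return [1 - ring[(i - 1) % n] for i in range(n)]

def run(ring, ticks):
    """Collect the sequence of states over a number of ticks."""
    states = [tuple(ring)]
    for _ in range(ticks):
        ring = step(ring)
        states.append(tuple(ring))
    return states

even_ring = run([0, 1, 0, 1], 4)  # the alternating state is a fixed point
odd_ring = run([0, 0, 0], 4)      # no fixed point exists: the state keeps flipping
print(even_ring)
print(odd_ring)
```

The odd ring is the simplest case in which the mathematically contradictory condition is sustained by the circuit as an oscillation in time.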

Spencer-Brown argues in a similar way in his later, unpublished 1992 paper, repeating the basic idea of how crosses on switching elements are converted into ocks by transistors, and why a circuit can gain a »life of its own«:

»In hardware the mark behaves like an idealized transistor 'nor' gate: If any of its inputs is connected to an EMF (say 1 or mark), then its output is grounded (say 0 or ock).
    Some engineering designs with these units will 'work' even if their mathematics is incorrect, because of delays in signal travel-time etc. These are bad designs: they might suddenly fail if the material are improved.
    The beauty of the present mathematics is this: if it is correct, anything designed from it in transistor units will certainly work.« (R, 5)

It is easy to understand why the success of these circuits was received with disbelief. Spencer-Brown reports on the reaction of his colleagues when he first realized a circuit of this kind in 1961: »My colleagues were so shocked that they wouldn't look, even though we constructed it in transistor hardware and watched it counting on the cathode-ray tube. 'There must be something wrong with it.'« (R, 7) Ultimately, here too, the behavior comes down to the technical properties of the circuit itself.

It is left to the skill of the engineer to test and exploit the behavior of the technology. It cannot be »programmed out«; it appears in the circuits as conflicts that balance themselves out in reality. The art of constructing a circuit (»the art of making a reductor« and its »ingenuity«, R, 10, 13) lies in designing the circuit as a whole so that both cases can be implemented and the technology can play its part. Spencer-Brown compares this work with that of chemists, who also test different chemical processes and ultimately cannot say exactly why the process they find succeeds. It is not possible to fully describe and understand the multitude of processes and substances that may act as catalysts. (When I talked to researchers in the chemical industry, this was confirmed to me. The reality of chemical research and its success is very different from what an outsider would expect.)

»As in a chemical formula, it gives the number of elements (markers and leads) without saying how they are actually connected, so if they can be connected a different way in the same formula we call it an isomer« (R, 21)
    »The same is true for reductors, as mathematical entities. Like numbers, they don't actually exist in any space we call Euclidean. Yet unless we draw them in some way, even though no way we can draw them is relevant to what they actually are, we cannot use them, nor can we even imagine them. Even the specification of the reductor is not, mathematically, what it really is: it is just a more succinct way of drawing it, getting rid of all the irrelevant ways it can be diagrammed as a circuit.« (R, 21)
    »Mathematics, like musics, is meaningless unless it is performed.« (R, 23)

The reference to music seems striking to me: just as a composer and a performer can never completely calculate what sound an instrument will produce, but try it out and play with the special, often unexpected sound effects that result from its material and construction, so too do the researchers. To me it is a great achievement not simply to juxtapose the creative process with logical repetition and routine, in the course of which it gradually becomes clear why which states occur (today, after a circuit has proved itself many times, it is undoubtedly no longer designed into the indefinite), but to show a way within logic of thinking about the interaction of the developer with his instruments and materials, illustrated paradigmatically by examples such as the circuits described by Spencer-Brown.

In summary, the innovation of this logic for me lies in three points:

– Introduction of the imaginary axis

Spencer-Brown particularly emphasizes this point. The imaginary axis means that the circuit leaves linear succession behind. A distinction is made between different paths, some of which run in parallel and some of which are connected and fed back into each other. This requires a new axis beyond that of linear inference, which Spencer-Brown compares with the introduction of imaginary numbers, although he nowhere calculates with imaginary numbers.

– Inertia and Self-Organization of the Circuit

That is the even more important consequence for me. If a state in the circuit is balanced after a stroke, there seems to be a tendency to maintain this state during the next stroke. However, conflicts cannot be avoided. This leads to a kind of self-organization: the circuit decomposes into parts that are free of conflict, and only certain transitions are permissible. They result in a harmony which Spencer-Brown called musicality.

– Interaction of the developer and the works of art created by him

Note (musicality and technique): What Spencer-Brown introduced into logic and circuitry was, surprisingly, introduced into music at the same time as an avant-garde technique: loops. Their models were mechanical instruments such as musical clocks. This technique is what makes minimalism so appealing; in the pieces of Steve Reich (* 1936) in particular it is directly audible to me, and I see it as a musical counterpart to the Laws of Form. For example, the piece Music for Pieces of Wood, published in 1973, is available with a very clear visualization on YouTube. Anyone who finds it difficult to understand Spencer-Brown's ideas should start with this video sequence. There you can immediately see and hear how further rhythms are built up on a canonical basic rhythm, until step by step an individual sound impression emerges. While at the beginning the individual rhythmic lines can be clearly distinguished by ear, at a certain point this jumps into a general impression of a harmonious acoustic whole, in which even the steadily repeated basic rhythm seems to vibrate.

If you want to deepen this topic using the works of Kauffman, you should note: while for the graphic representation it is very vivid to depict the cross and the ock as a full circle and an empty circle, for a more mathematical notation in list form it is easier to set the cross as 1 and the ock as 0. Then the respective state of the circuit can be described by the values 0 and 1 assumed at the respective markers. For this, a number tuple of the input a and the 8 switching elements is formed (where Kauffman ignores the two outputs f and p):

{a, b, c, d, e, g, h, i, j}

After each stroke the concrete values are entered. If 1 is set for the full circle (cross, marked) and 0 for the empty circle (ock, unmarked), the result for the path to the output at f is:

           a  b  c  d  e  g  h  i  j   Path from a to f            Path from a to j [p]
Stroke 1   1  0  1  0  1  0  0  0  1   a → b → c → d → e → f       a → g → j → p
Stroke 2   0  0  0  0  1  1  0  1  0   a → g → h → e → f           a → g → j → p
Stroke 3   1  0  0  1  0  0  1  1  0   a → b → i → j → h → e → f   a → b → i → j → p
Stroke 4   0  1  0  1  0  0  0  0  1   a → b → c → d → e → f       a → b → i → j → p

The values at a, e and j are set in bold to make input and output clear, and it is easy to see how the frequency of the wave arriving at a is halved at marker e and at marker j before the outputs at f and p respectively. The circuit is represented this way in Kauffman 2006; I have added the paths to the two outputs. Spencer-Brown also originally wrote the circuit in a similar way (see R, 7, Figure 1.1).
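The input–output behaviour visible in the table — the output toggling on each rising edge of a, so that the frequency is halved — can be modelled as a black box in a few lines of Python (my own sketch of the observable behaviour only; the internal wiring of Spencer-Brown's network is not reproduced here):

```python
def divide_by_two(inputs, start=1):
    """Black-box model of the counter: the output toggles on every
    rising edge (0 -> 1) of the input, so its frequency is halved."""
    out, prev = start, inputs[0]
    outputs = [out]
    for x in inputs[1:]:
        if prev == 0 and x == 1:  # rising edge of the input wave
            out = 1 - out
        outputs.append(out)
        prev = x
    return outputs

# The input wave a over the four strokes reproduces column e of the table:
print(divide_by_two([1, 0, 1, 0]))  # → [1, 1, 0, 0]
```

Chaining two such stages, as in the schematic, halves the frequency again — the principle of every binary counter.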

Already in 1995 Kauffman emphasizes: the symbols introduced for the markers have a double meaning, as elements of a process and at the same time as its results. Each marker is an element in the schematic, and at the same time the marker shows the value into which the sign arriving at it is to be converted, or whether a blockage occurs.

»In using the interlock algebra, one regards the link diagram as a circuit whose parts (the arcs in the diagram) are both carriers of circuit values and operators that process these values. This duality is the core of the interrelationship with topology. In actual applications of digital circuitry, there is usually a sharp distinction between circuit elements as operators and circuit elements as carriers of signals.« (Kauffman 1995, 88)

Chemistry The schematic can be transferred to chemical process chains. Input a describes the external resources that must be fed into the process, output f the outgoing materials that the process is to produce. Each marker describes a chemical reaction occurring at that point. All in all, the schematics describe chemical process chains and their possible branching patterns.

In the terminology Hegel took over from chemistry, each state can be understood as an elective affinity (Wahlverwandtschaft), in which chemically akin reactions are linked with each other. The transition from one state to the next can be called, with Hegel, a nodal line.

Organization The schematic can stand for organization models. Input and output describe the external resources and the product produced by the organization. Each marker is literally a position in the organization that assumes a certain task and interacts with the other positions.

Within the organization, there are sub-processes that are processed sequentially, as well as branches that can lead to equivalent products. In organizational terms, the task can be set to optimize the paths for the individual strokes in such a way that, as far as possible, no position lies on the critical path several times in succession, but can regenerate itself.

There is no controlling unit in the process, but self-organization, which takes place according to certain principles, e.g. the principle of the shortest path. Instead of a central control system, a decentralized one can be assumed, in which each unit searches for local paths in its own environment until the individual local paths originating from the units connect with each other and result in a system solution.
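Such a decentralized search can be sketched as a distributed relaxation: every node repeatedly improves its distance estimate using only information from its direct neighbours, without any central controller, until the local estimates have connected into a global solution (a generic Bellman-Ford-style sketch; the graph and its weights below are illustrative and not Spencer-Brown's circuit):

```python
def decentralized_shortest_paths(neighbors, source):
    """Each node repeatedly improves its distance estimate using only
    information from its direct neighbours, until no local update
    occurs anywhere (Bellman-Ford-style relaxation)."""
    INF = float("inf")
    dist = {v: INF for v in neighbors}
    dist[source] = 0
    changed = True
    while changed:
        changed = False
        for v, edges in neighbors.items():
            for w, cost in edges:
                if dist[v] + cost < dist[w]:  # a purely local improvement
                    dist[w] = dist[v] + cost
                    changed = True
    return dist

# illustrative graph loosely named after the markers of the schematic
graph = {
    "a": [("b", 1), ("g", 2)],
    "b": [("c", 1), ("i", 2)],
    "c": [("d", 1)],
    "d": [("e", 1)],
    "g": [("h", 1)],
    "h": [("e", 1)],
    "i": [("j", 1)],
    "j": [("e", 2)],
    "e": [("f", 1)],
    "f": [],
}
print(decentralized_shortest_paths(graph, "a"))
```

No node ever sees the whole graph; the system solution emerges from the repetition of local steps.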

It is assumed that the process has organized itself over the four steps. But it is possible to ask an external consultant to analyze the process for possible weaknesses and improve the spontaneous result of the self-organization. This can go as far as modifying the network plan or even creating additional virtual jobs that allow ways that were previously closed.

Neural networks and cybernetics The overall solution can be programmed recursively: starting from a, every possible path is examined recursively. If it runs into a cul-de-sac, the branch into the cul-de-sac is marked and the next possible path is tried, until the first continuous solution is found. This is similar to the recursive program that searches, for a knight on a chessboard, for a path on which it visits every square exactly once.
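The recursive search with marking of dead ends can be written down directly; below is a minimal backtracking sketch for the knight's-tour example just mentioned (my own illustration; a 5×5 board and Warnsdorff's ordering keep the search fast):

```python
def knights_tour(n=5, start=(0, 0)):
    """Recursive backtracking: extend the path by a knight's move; if a
    branch runs into a cul-de-sac, unmark it and try the next branch.
    Warnsdorff's heuristic (fewest onward moves first) speeds up the search."""
    moves = [(1, 2), (2, 1), (2, -1), (1, -2),
             (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
    path = [start]
    visited = {start}

    def candidates(square):
        x, y = square
        return [(x + dx, y + dy) for dx, dy in moves
                if 0 <= x + dx < n and 0 <= y + dy < n
                and (x + dx, y + dy) not in visited]

    def extend():
        if len(path) == n * n:
            return True
        for nxt in sorted(candidates(path[-1]),
                          key=lambda s: len(candidates(s))):
            visited.add(nxt)
            path.append(nxt)
            if extend():
                return True
            visited.remove(nxt)  # cul-de-sac: unmark and backtrack
            path.pop()
        return False

    return path if extend() else None

print(knights_tour(5))
```

The marking and unmarking of squares is exactly the marking of cul-de-sacs described in the text.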

The significance of the two memory units introduced by Spencer-Brown, as well as of the feedback loops, remains to be examined more closely; they make an application to cybernetics and neural networks possible.

Physics and Cosmology Spencer-Brown describes bypasses that lead him onto the imaginary axis. It is also conceivable to allow tunnels and wormholes with which distant locations can be directly connected under precisely defined conditions. Re-entries are a possible example of wormholes: if a circular path is repeated infinitely often in the schematic with the help of feedback in the network, this iteration can generate, by re-entry, a direct connection between two opposing points of the circle. (This is how the Greek mathematicians introduced the roots. The diagonal of the square cannot be calculated directly with the help of rational numbers, but thanks to the fixed rule by which the length of the diagonal can be approximated ever more precisely, a new number √2 can be created and introduced by re-entry. In the picture of the circuit diagrams considered here, this is a wormhole or a tunneling.)
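The »fixed rule« by which the diagonal is approximated ever more precisely can be illustrated with Heron's (Babylonian) iteration — my own example of such a rule, not taken from Spencer-Brown. The rule itself is finite, and √2 appears as its fixed point: the new number is, so to speak, introduced by the re-entry of the rule into its own result.

```python
def diagonal(steps):
    """Heron's (Babylonian) iteration: the fixed point of
    x -> (x + 2/x) / 2 is the square root of 2, the length of the
    diagonal of the unit square."""
    x = 1.0
    for _ in range(steps):
        x = (x + 2.0 / x) / 2.0
    return x

print(diagonal(6))  # ≈ 1.41421356...
```

Each application of the rule re-enters the previous result; the irrational number exists only as the limit of this re-entry.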

Observer

»An observer, since he distinguishes the space he occupies, is also a mark.« (LoF, 76)

Spencer-Brown wants to follow the Copernican turn consistently into logic and to design a logic within which the position of the observer and his relativity can be represented. In order to illustrate this better, he chooses new signs: instead of a cross, a circle stands for the boundary of something. Repetition is represented by two circles lying side by side, nesting by circles lying inside each other. In Kauffman's presentation Laws of Form and the Logic of Non-Duality, p. 31, the two axioms take this shape:

[Figure: Spencer-Brown's axioms drawn with circles]

At first this does not yield any new knowledge, but the representation with circles shows more clearly that different observers see the same nestings and repetitions differently. What looks like a nesting to one observer may look like a repetition to another at a different position. If, for example, the equator of the earth is chosen as one circle, and an arbitrary circle lying entirely in the southern hemisphere as a second, then for an observer looking from the northern hemisphere this second circle lies within the equator. For an observer in the southern hemisphere, standing between the equator and the second circle, the two circles lie side by side (see LoF, 102f). In the following pictures the observer's own position is marked with a blue square, and the position at which he sees the other observer with a small square:

[Figure: the two circles as seen from different observer positions]

2 circles from different perspectives
left: An observer in the northern hemisphere sees the two circles in each other and the second observer in the space between the circles.
right: An observer in the southern hemisphere sees the two circles next to each other and the first observer in the larger circle.

Every representation of logical elements and their relationships and operations depends on the perspective of the observer, who at the same time forms an image of his own position within logic. Spencer-Brown calls this kind of observer-dependent representation »experiments« (LoF, 70-75).

The inclusion of an observer leads into a new infinite loop. In the first step, an observer draws an image of how he sees the system from his position. In a second step, the observer enters his own location, i.e. the place where he stands within the map. Everyone knows this from hiking maps posted in the landscape, which mark with a highlighted dot the location of the map itself within the terrain. In a third step, one can ask from which location a map containing that location could be drawn. It must lie beyond the observer and his environment. And for this location, in turn, a map can be drawn from a location even further away, e.g. a location further to the left.

The same becomes clear in structural arrangements. The patient places a representative of himself in the room and surrounds him with other figures from his social system. The patient now sees himself, as it were, in the system surrounding him. The therapist, in turn, sees from the outside how the patient reacts to his self-image and interacts with it, when the persons in the room begin to express themselves and the patient shifts or changes their positions. A supervisor can observe the therapist's work, and so on. – In debates it is a popular game for the disputants alternately to try to step out of the clinch and judge the other from a meta-level. This provokes a corresponding reaction, until a cycle without end arises. – In meetings, the process is sometimes disrupted when someone raises both arms and asks procedural questions. Then someone else can come forward and question the admissibility of procedural questions, and so on.

[Figures: observer location; procedural discussion]

Therefore, a final step is needed to demonstrate that the results are valid regardless of the location of the observers. There must be transformation rules that translate one position into another. What transformations are there between the different representations, and what are their properties? For Einstein this was the question of the Lorentz transformations; in economics it is the question of currency conversions; and for me there is a related question about truth transformations in Wittgenstein.

Equally important is the question of the lowest system, which Spencer-Brown called (s₋₁). Here, too, a reference to the psychotherapeutic applications must suffice for the moment. How is it to be explained that the persons involved in a structural arrangement can feel and say how the person they represent feels and what changes in the system they wish for (e.g. that other persons should be closer to or further away from them, or that there are further persons who are missing and should be included in the arrangement), although they have received no further information about the persons set up? Apparently on the lowest level – in this example the physicality (body) of the persons involved – there is a level of perception which in its openness and infinity goes beyond all conscious information. For me, this is an example of the small perceptions meant by Leibniz.

Luhmann, on the other hand, seems to be primarily concerned with the fact that logic and truth only emerge from the communication between observers. True is no longer, in the classical sense, what corresponds to the thing, but what is established in the communication of the observers.

»With the re-entry we generally mean the reintroduction of a distinction into the realm that allows it to be distinguished. An example: According to Niklas Luhmann's systems theory, the scientific system is differentiated on the basis of the distinction true/not-true. If one develops a theory of science that observes the use of this distinction with the distinction true/not-true, a re-entry is carried out with the theory of science. With regard to the system of science, one finds oneself in such a theory of science on a meta-level, since one can now ask the question about the truth of the operation of science - the truth of the distinction true/not-true. By being the same distinction that is applied to itself, a situation arises 'in which the distinction is simultaneously the same (as the particular distinction of the operations of this system) and another (as the observed distinction)'. (Claudio Baraldi, Giancarlo Corsi, Elena Esposito GLU: Glossary of Niklas Luhmann's Theory of Social Systems, Frankfurt am Main 1997: 152).« (Lau, 55f, my translation)

Sigla

AUM = The G. Spencer-Brown – AUM-Conference 1973, Esalen 1973; Link

HW = Georg Wilhelm Friedrich Hegel: Werke in 20 Bänden. Auf der Grundlage der Werke von 1832-1845 neu ediert. Red. E. Moldenhauer und K. M. Michel. Frankfurt/M. 1969-1971; Link

KrV = Immanuel Kant: Kritik der reinen Vernunft (1781, 1787)

LoF = George Spencer-Brown: Laws of Form, New York 1972 (Julian Press) [1969]; Link

R = George Spencer-Brown: An Introduction to Reductors, unpublished typescript 1992

SWH = Tatjana Schönwälder-Kuntze, Katrin Wille, Thomas Hölscher: George Spencer Brown, Wiesbaden 2009 [2004]

Literature

Gregory Bateson: Ökologie des Geistes, Frankfurt am Main 1981 [1972]

Dirk Baecker (Hg.): Kalkül der Form, Frankfurt am Main 2016 [1993]

Dirk Baecker: George Spencer-Brown wird 90; catjects

Ulrich Blau: Die Logik der Unbestimmtheiten und Paradoxien, Heidelberg 2008

Thomas Collmer: Hegels Dialektik der Negativität, Gießen 2002

Louis H. Kauffman und Francisco J. Varela: Form Dynamics
in: Journal of Social and Biological Structures 1980 3, S. 171-206

Louis H. Kauffman 1995: Knot Logic
in: ders. (Hg.): Knots and Applications, Singapore 1995, 1-110

Louis H. Kauffman 2006: Laws of Form – An Exploration in Mathematics and Foundations, Rough Draft

Louis H. Kauffman 2009: Laws of Form and the Logic of Non-Duality, San Rafael 2009; Link

Felix Lau: Die Form der Paradoxie, Heidelberg 2012 [2005]

Bernie Lewin: Enthusiastic Mathematics, Melbourne 2018

Niklas Luhmann: Beobachten
in: Luhmann: Die Wissenschaft der Gesellschaft, Frankfurt 1992 [1990], S. 68-122

Niklas Luhmann: Die neuzeitlichen Wissenschaften und die Phänomenologie, Wien 1996

Niklas Luhmann: Die Gesellschaft der Gesellschaft, Frankfurt am Main 1998 [1997]

Warren McCulloch: Verkörperungen des Geistes, Wien, New York 2000 [1965]

Warren McCulloch: The heterarchy of values determined by the topology of nervous nets
in: The Bulletin of Mathematical Biophysics 7(1945), 89-93; deutsche Übersetzung bei Vordenker

Martin Rathgeb: George Spencer Browns Laws of Form zwischen Mathematik und Philosophie, Siegen 2016; Link

Claus-Artur Scheier: Ästhetik der Simulation, Hamburg 2000

Claus-Artur Scheier: Luhmanns Schatten, Hamburg 2016

The Telegraph: George Spencer-Brown, polymath who wrote the landmark maths book Laws of Form – obituary; Telegraph vom 13.9.2016

Franco Volpi: Der Status der existenzialen Analytik (§§ 9-13)
in: Thomas Rentsch (Hg.): Martin Heidegger - Sein und Zeit, Berlin 2001, S. 29-50

 


© tydecks.info 2017-2019