# Divination Methods and Programming Languages

A few years back, I made a post about a theory of divination, where methods of divination can range from the purely intuitive (e.g. clairvoyance) to the purely technical (e.g. meteorological forecasting as seen on the Weather Channel).  Most forms of divination fall somewhere in between, combining some aspect of intuition with some aspect of technique or technology (e.g. Tarot, runes, geomancy).  Anyway, in that post, I brought up a few points that I think all people involved in divination should bear in mind, but also a bit about how divination methods are like programming languages.  Being educated as a computer scientist and laboring as a software engineer, I’m prone to using metaphors about the things I’m most knowledgeable in, but I think the metaphor can be expanded to show how I view divination methods and what they can overall achieve for us.

So, how are methods of divination like programming languages?  Well, what is a programming language?  It’s a system of symbols and a grammar that are used as input to a computer to make it do something.  Punching numbers and symbols into a calculator, for instance, can be considered a very simple form of programming language: you tell the computer to add these two numbers, divide the result by this other number, save it to memory, start a new calculation, involve the value stored in memory, and display the output.  Most programming languages (PLs, for short) are much more complicated than this, but the idea is the same: you’re giving the computer a set of instructions that maybe take some input, do something, and maybe give some output.  Computers of any and all kinds exist to interpret some sort of PL, whether it’s just pure binary telling it to turn some set of flashing lights on or off, or whether it’s something elaborate and arcane to simulate intelligence; computers are essentially machines that take in PLs to do things.  The study of PLs is, in effect, the study of cause and effect: tell the computer to do something, and the computer will do exactly that.  If the computer fails to do the thing, then either the commands given were incorrect (the computer understood them, but you didn’t give it the right commands) or invalid (the computer couldn’t understand what you told it to do).
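To make the calculator example concrete, here is a minimal sketch in Python of that idea: a “program” is just a list of instructions, the machine dutifully carries them out one by one, and a single memory register stands in for a calculator’s M+ key.  The instruction names and their encoding here are my own invention, not any real calculator’s.

```python
# A toy "calculator language": a program is a list of (opcode, argument)
# pairs, executed in order against a display value and a memory register.

def run(program):
    """Execute a list of (opcode, argument) calculator instructions."""
    value = 0.0   # the running display value
    memory = 0.0  # the single memory register
    for op, arg in program:
        if op == "enter":
            value = arg
        elif op == "add":
            value += arg
        elif op == "div":
            value /= arg
        elif op == "store":   # save the display to memory
            memory = value
        elif op == "clear":   # start a new calculation
            value = 0.0
        elif op == "recall":  # involve the stored value
            value += memory
        else:
            # the "invalid" case: the machine can't understand the command
            raise ValueError(f"invalid instruction: {op}")
    return value  # display the output

# (2 + 4) / 3 = 2.0, stored to memory; then a new calculation: 10 + recalled 2.0
prog = [("enter", 2), ("add", 4), ("div", 3), ("store", None),
        ("clear", None), ("enter", 10), ("recall", None)]
print(run(prog))  # 12.0
```

Feed it a different list of instructions and it does something different; feed it an instruction it doesn’t know, and it fails with an error, exactly the “invalid” case above.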

In computer science, there’s a thing called Turing completeness.  If we consider an idealized abstract computer stripped down to its most basic parts (a universal Turing machine), it can compute anything that is, well, computable; by definition, a universal Turing machine can simulate any computable algorithm, any computable programming language, and any computer.  Any computer you see or interact with, including your smartphone or laptop or video game console, is (memory limits aside) a concrete implementation of a Turing machine.  Turing completeness is a property that applies to computers and, by extension, PLs: if a concrete computer or programming language (let’s call it A) can simulate a universal Turing machine, then, because a universal Turing machine can simulate any other type of computation or computational method, the computer or programming language A can simulate any other computer or programming language.
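For a rough sense of what such a machine looks like (and this is only a sketch of a single, fixed machine, not a full universal one, which needs a bit more machinery), here is a tiny single-tape Turing machine simulator in Python.  The rule-table format and the bit-flipping example machine are my own toy constructions:

```python
# A minimal single-tape Turing machine simulator: a finite table of rules
# drives a read/write head over a tape until the machine reaches a halt state.

def run_tm(rules, tape):
    """rules maps (state, symbol) -> (write, move, next_state); halts on 'H'."""
    tape = list(tape)
    pos, state = 0, "start"
    while state != "H":
        symbol = tape[pos] if pos < len(tape) else "_"  # '_' is the blank symbol
        write, move, state = rules[(state, symbol)]
        if pos < len(tape):
            tape[pos] = write
        else:
            tape.append(write)
        pos += 1 if move == "R" else -1
    return "".join(tape)

# Example machine: flip every bit, then halt when the blank is reached.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "H"),
}
print(run_tm(flip, "10110"))  # 01001_
```

Everything is just states, symbols, and a lookup table, and yet machines of exactly this kind are enough to compute anything computable.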

What this boils down to is saying that any Turing-complete programming language can do anything that any other Turing-complete language can do: C is functionally equivalent to ML, which is functionally equivalent to Lua, which is functionally equivalent to lambda calculus.  What this does not say, however, is that any given Turing-complete PL is as easy to use as any other Turing-complete PL.  Thus, what is easy to do in C may be problematic in Lisp, and outright unwieldy and frightening in some other language.  It’s not impossible, just different; each PL is a different tool, and different tools are good for different ends.  It’s totally possible to fix pipe plumbing issues with a hammer, but it’s easier with a wrench; it’s totally possible to build a house with a wrench, but it’s easier with a hammer.
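The “same power, different ease” point can be seen even within one language.  Both snippets below compute a factorial in Python; the second is written in a lambda-calculus style, restricted to anonymous single-argument functions and using a Z combinator (the strict-evaluation cousin of the Y combinator) for recursion.  They are computationally equivalent, but one is far friendlier to read:

```python
# Plain, idiomatic Python: a factorial with a loop.
def fact_loop(n):
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

# The same function, lambda-calculus style: recursion is achieved not by
# naming the function but by the Z fixed-point combinator.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))
fact_lambda = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))

print(fact_loop(6))    # 720
print(fact_lambda(6))  # 720
```

Both are running on the same Turing-complete substrate and give the same answers; the difference is purely one of how congenial the tool is to the task and to the person using it.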

However, this metaphor of divination methods and PLs can show other things, too.  A geomancy student of mine recently came to me with an interesting question about a detail of a technique that I don’t personally use, but is documented in an old manuscript.  I don’t put any faith in that technique, so I won’t describe it here, but he wanted to know why I didn’t use it, and how we might find out more about it.  He asked me whether I’ve ever asked geomancy about itself before, like to do a reading to confirm or deny certain techniques.  I…honestly can’t see the point of doing so, but to explain why, it’s time to go back to computer science.

In addition to Turing completeness, there’s this other notion in mathematics that applies to computer science and PLs: Gödel’s incompleteness theorems.  It’s a little heady and obtuse, but here’s the gist: say you have some system of describing information, like arithmetic or physics.  This system has a logic that allows certain things to be proved true (“if P, then Q; P, therefore Q”), and can disprove things that are false (“if P, then Q; not Q, therefore not P”).  Given any such system, you might want it to be the best possible system, one that can prove everything that is true while simultaneously disproving anything that is false.  However, there’s an issue with that: for any system powerful enough to express basic arithmetic, you can have either consistency or completeness, but not both.

• Consistency is showing that your logic is always sound; you never end up proving something that is false.  Thus, we can only prove true things.  However, this is too restrictive; if you have perfect consistency, you end up with things that are true that you cannot prove.  Your logic, if consistent, can never be complete.
• Completeness is showing that your logic is always full; everything that is true can be proved true.  The problem with this, however, is that it’s too permissive: sure, everything that is true can be proved, but there are also false things that end up being proved, even though they’re contradictions.  Your logic, if complete, can never be consistent.

When it comes to logical systems, of which there are many, we tend to strive for consistency over completeness.  While we’d love a system where everything that could be true is shown as true, we also lose faith in it if we have no means to differentiate the true stuff from the false stuff.  Thus, we sacrifice the totality of completeness in favor of the rigor of consistency.  After all, if such a system were inconsistent, you’d never be sure whether 2 + 2 = 4 or 2 + 2 = 3, whether a computer would work one second or start an AI uprising the next, or whether browsing your favorite porn site would actually give you porn or videocall your mother on Skype.  Instead, with a consistent system, we can rest assured that 2 + 2 can never equal 3, that a computer will behave exactly as told, and that porn websites will only give you porn and not an awkward conversation with your mom.  However, the cost of this is that there will be things that are true but that can’t be proven true using that system you like.  Unfortunate, but we can make do.

As it turns out, Gödel’s second incompleteness theorem applies to any such system described in terms of itself; you cannot prove (which is a stronger, logical thing to do than simply giving examples) that a given computer, PL, or system of mathematics is consistent by using that selfsame system.  If you attempt to do so and end up with such a proof, you have actually proved a contradiction, and thus your system of logic has an inconsistency within it.  In order to prove something about the system itself, then, you need something more expressive than that system.  For instance, to describe actions, you need sounds; to describe sounds, you need language; and to describe language, you need thought.  Each of these is less expressive than the next, and while you can describe things of lesser expressiveness, you cannot fully describe a thing in terms of itself.  So, if I have this thing that is true and you can’t prove it to be true using that system you like, then you need something more powerful than that system you like.

So how do you learn more about techniques for a divination method?  Well, as above, if you have a particular system of knowledge and you want to describe it, you need something more powerful than that system.  What’s more powerful than, say, geomancy?  Something more inclusive and expressive than geomancy; like, say, human language.  If you have a question about geomantic techniques, you can’t really go to geomancy to ask about it; you go to a teacher, a mentor, an ancestor, a discussion group to figure it out by means of logic, rationality, and “looking out above” the system itself.  You have to inspect the system from the outside in order to see how it works inside, and generally, we need something to show us where to look.  That something is usually someone.

Programming languages are not, of course, divination methods.  Yes, dear reader who happens to know more about mathematics and the philosophy thereof than I do, I know I’m uncomfortably mixing different types of concepts in this post; divination methods are not instructions, nor are programming languages able to predict the future, barring some new innovation in quantum computing.  The point stands, though, and the concepts introduced in this post hold well and are generalizable enough for my ends here.  There are enough parallels between the two to give me a working theory of how divination works, and also of the limits of divination.  Just as with the relationship between regular expressions and context-free grammars, where the latter is strictly more expressive and powerful than the former, we need something more expressive and powerful than a divination system to learn how to divine with it.  Humans, for instance, fill that role quite nicely; all divination can do is “simulate” human situations, but it cannot simulate every possible situation uniquely.  There are human situations that cannot be accurately simulated by divination.  Divination, too, is inherently incomplete if we want to place certain faith in our techniques; if we allow, on the other hand, for divination to be complete, then we have to scrap the techniques, which become inconsistent, and be more intuitive instead.  In that case, sure, you might be able to get insight on techniques, but it’s not by means of the techniques of the divination system itself; you’ve sidestepped that matter completely.
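That regular-expression-versus-grammar relationship can be made concrete.  The classic example is balanced parentheses: a regular expression can only handle nesting up to some depth fixed in advance, while a context-free recognizer (sketched below with a simple counter, which suffices for this one language) handles any depth.  The particular pattern here is my own, deliberately built to handle nesting at most two deep:

```python
import re

# A regular expression that matches balanced parentheses nested at most
# two deep, e.g. "()", "(())", "(()())" -- but nothing deeper.
depth_two = re.compile(r"^(\((\(\))*\))*$")

# A recognizer for arbitrarily deep balanced parentheses: strictly more
# expressive than any single regular expression for this language.
def balanced(s):
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:
                return False  # a closer with no matching opener
        else:
            return False  # any other character isn't part of the language
    return depth == 0

deep = "(" * 50 + ")" * 50  # parentheses nested 50 deep
print(bool(depth_two.match("(())")))  # True: within the fixed depth
print(bool(depth_two.match(deep)))    # False: the regex gives up
print(balanced(deep))                 # True: the counter handles any depth
```

No matter how the regular expression is enlarged, some deeper nesting will always defeat it; to recognize the whole language, you have to step up a level in the expressiveness hierarchy, which is exactly the move the divination argument above turns on.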

# Hail, Alan Turing, Hero!

As part of my new grammatomantic lunar calendar rituals, I’m setting aside three days each lunar month for the veneration of the dead in my life.  The first day is given to my Ancestors of Kin, those from whom I am descended by blood.  The final day is given to the Ancestors of the Great, culture heroes and other Mighty Dead who shaped the world we all live in.  The second day, however (associated with the letter Qoppa, and held this lunar month on June 16), I give to my Ancestors of Work, famous people whom I look up to for the things I do in my life.  They’re like my family ancestors, but with ties of labor and field rather than blood and kin, a family linked together by the things we do rather than who we are.  As a magician, I put people like Pythagoras, Orpheus, Cornelius Agrippa, Crowley, and the like in there, but magic isn’t the only thing I do.  My day-job professional and academic career is based in computer science, and today, on the 60th anniversary of his death, I’d like to recognize Alan Turing, one of the greatest computer scientists the world has ever had.

Born on June 23, 1912, Alan Turing came from an Irish, English, and Scottish family, and had a natural inclination towards mathematics from a young age.  This didn’t serve him too well in public schools at the time, when education focused more on classics than what we’d consider hard sciences today; still, at only 16 he was not just reading but expanding on the work of none other than Albert Einstein, so the dude was pretty cool at the things he was good at.  His work really shone through in the early development of computer science, working on one of the most famous problems of mathematics, the Entscheidungsproblem, or “Decision Problem”, the solution to which was that there is no solution at all.  Not only did this surprise some of the most famous mathematicians of the time, but it has been a central topic in computer science, taught from the very start of the curriculum, ever since.

Not only was he a brilliant computer scientist and mathematician, but Turing also served the British government, especially during World War II.  With his extensive knowledge of mathematics and science, Turing became one of the foremost codebreakers and leaders at Bletchley Park in deciphering enemy ciphers.  Not only did he produce general means to break German codes, while other methods used at the time were fragile and relied on too many assumptions, he also provided efficient means of breaking various specific types of code, helping to critically fight the German war machine (in several senses).  After WWII, he furthered the field of computer science as well as that of artificial intelligence, and pursued several advances in chemistry.

This was a man who chatted with the Austrian philosopher Ludwig Wittgenstein, occasionally ran the 40 miles from his office to London, was inducted into the Most Excellent Order of the British Empire by King George VI, and basically invented modern computer science in an accessible manner; despite all that, the world at that time effectively condemned him: he was gay.  After his house was robbed by an acquaintance of a lover of his and he noted that fact to the police, he was charged with indecency, since homosexuality was still illegal at that time in Britain.  Charged with this non-crime, he pleaded guilty (despite having no guilt nor shame for being gay, as he damn well shouldn’t’ve) and was given the choice of imprisonment or probation with chemical castration; he chose the latter, which allowed him to continue working, but it rendered him impotent and caused gynaecomastia.  This, combined with reparative treatment to “cure” his homosexuality (which we know nowadays from the “ex-gay” movement never works and only causes further harm), did nothing good.

Adding insult to injury, he lost his security clearance and was barred from continuing cryptographic research with the government (even though he pretty much won WWII for them), and was even barred from entering the United States.  He died on June 7, 1954, at only 41 years old, two years after his conviction and the beginning of hormone treatment.  An investigation reported that he committed suicide by cyanide poisoning; a half-eaten apple was found near his body, which is thought (but was never confirmed) to be how he took the poison.  Rumor has it that this is where the original rainbow-colored partially-bitten apple logo came from for the Apple computer company, but that’s not the official story.

Today, I honor Alan Turing especially as a hero in my life.  An incredible amount of the technology I use and work I produce is indebted to him, not only because he helped develop the computer, but also because he helped turn the tides of war that could’ve endlessly shaped the world some 70 years ago.  His brilliance shines as a light for me, as a computer scientist but also as a human being.  Being a gay man myself, my heart breaks every time I recall how the world back then treated him for being the same way, and I pray that neither I nor anyone else has to undergo that sort of blatant bigotry and persecution.  Like Turing himself, though, I bear no guilt nor shame for who I am, and I take only joy in the work I do.  I’ll likely never run 40 miles nor ever care to, but hey, more power to Turing for doing that, too.

Ave, Alan Turing.  May your memory never be forgotten, and may your name and spirit always live on.  Guide our minds to know what can be known, and guide our hearts to love whom we will love, both without fear and without scorn.  Help us and be with us in our work, and may we thank you every time information flows through the fruits of your labors to us.