A few years back, I made a post about a theory of divination, where methods of divination range from the purely intuitive (e.g. clairvoyance) to the purely technical (e.g. meteorological forecasting as seen on the Weather Channel). Most forms of divination fall somewhere in between, combining some aspect of intuition with some aspect of technique or technology (e.g. Tarot, runes, geomancy). Anyway, in that post, I brought up a few points that I think all people involved in divination should bear in mind, including a bit about how divination methods are like programming languages. Being educated as a computer scientist and laboring as a software engineer, I’m prone to using metaphors about the things I’m most knowledgeable in, but I think the metaphor can be expanded to show how I view divination methods and what they can achieve for us overall.
So, how are methods of divination like programming languages? Well, what is a programming language? It’s a system of symbols and a grammar that are used as input to a computer to make it do something. Punching numbers and symbols into a calculator, for instance, can be considered a very simple form of programming language: you tell the computer to add these two numbers, divide by this other number, save the result to memory, start a new calculation, involve the value stored in memory, and display the output. Most programming languages (PLs, for short) are much more complicated than this, but the idea is the same: you’re giving the computer a set of instructions that maybe take some input, do something, and maybe give some output. Computers of any and all kinds exist to interpret some sort of PL, whether it’s pure binary telling it to turn some set of flashing lights on or off, or whether it’s something elaborate and arcane to simulate intelligence; computers are essentially machines that take in PLs to do other things. The study of PLs is, in effect, the study of cause and effect: tell the computer to do something, and the computer will do exactly that. If the computer fails to do the thing, then either the commands given were incorrect (the computer understood them, but they weren’t the right commands for the job) or invalid (the computer couldn’t understand what you told it to do).
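To make the calculator example concrete, here is a minimal sketch of such a “language”: a tiny interpreter that reads a list of instructions, keeps one memory register, and produces output. All instruction names here are illustrative, not any real calculator’s protocol.

```python
def run(program):
    """Interpret a list of (operation, argument) instructions."""
    value = 0.0
    memory = 0.0
    for op, arg in program:
        if op == "set":
            value = arg
        elif op == "add":
            value += arg
        elif op == "div":
            value /= arg
        elif op == "store":   # save the current value to memory
            memory = value
        elif op == "recall":  # involve the stored value again
            value += memory
        else:
            # an *invalid* command: the machine can't understand it
            raise ValueError(f"invalid instruction: {op}")
    return value

# "Add these two numbers, divide by this other number, save it to
# memory, start a new calculation, involve the stored value":
result = run([
    ("set", 6), ("add", 4),   # 6 + 4 = 10
    ("div", 5),               # 10 / 5 = 2
    ("store", None),          # memory = 2
    ("set", 40),              # new calculation
    ("recall", None),         # 40 + 2 = 42
])
print(result)  # 42.0
```

Note the two failure modes from above: pass a well-formed but wrong instruction and you get a wrong answer; pass an instruction the interpreter doesn’t know and it refuses outright.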
In computer science, there’s a notion called Turing completeness. If we consider an idealized abstract computer stripped down to its most basic parts (a universal Turing machine), it can compute anything that is, well, computable; by definition, a universal Turing machine can simulate any computable algorithm, any programming language, and any computer. Any computer you see or interact with, including your smartphone or laptop or video game console, is (memory limits aside) a concrete implementation of a Turing machine. Turing completeness is a property that applies to computers and, by extension, PLs: if a concrete computer or programming language (let’s call it A) can simulate a universal Turing machine, then, because a universal Turing machine can simulate any other computer or method of computation, A can simulate any other computer or programming language as well.
What this boils down to is that any Turing-complete programming language can do anything that any other Turing-complete language can do: C is functionally equivalent to ML, which is functionally equivalent to Lua, which is functionally equivalent to lambda calculus. What this does not say, however, is that any given Turing-complete PL is as easy to use as any other Turing-complete PL. Thus, what is easy to do in C may be awkward in Lisp, and outright unwieldy and frightening in some other language. It may not be impossible, just different; each PL is a different tool, and different tools are good for different ends. It is totally possible to fix pipe plumbing issues with a hammer, but it’s easier with a wrench; it’s totally possible to build a house with a wrench, but it’s easier with a hammer.
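A small illustration of that equivalence-without-sameness: the same computation written in an imperative, C-style loop; in a recursive, Lisp-style definition; and as a fold, the idiom functional languages favor. All three are interchangeable in power, but each feels most natural in a different family of languages.

```python
from functools import reduce

def factorial_loop(n):
    # imperative style: mutate an accumulator in a loop
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

def factorial_recursive(n):
    # recursive style: define the function in terms of itself
    return 1 if n <= 1 else n * factorial_recursive(n - 1)

def factorial_fold(n):
    # functional style: fold multiplication over a sequence
    return reduce(lambda acc, i: acc * i, range(2, n + 1), 1)

assert factorial_loop(10) == factorial_recursive(10) == factorial_fold(10) == 3628800
```

Same answer every time; the difference is purely in which shape of thought the tool makes easy.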
This is what brings me to divination methods. I claim that, barring the direct influences of gods or cultural notions thereof, any divination method can answer the same questions that any other divination method can. Call it a divinatory Turing-completeness if you will; if a divination method can account for and describe some set of circumstances, situations, events, and results, then other divination methods can, as well. This is why you can go to a geomancer, a Tarot reader, a bone reader, a clairvoyant, or other types of readers and still walk away satisfied with good information despite the radical differences in style and method. That said, each method is better at different types of queries or better at different types of answer deliveries than others. Geomancy, for instance, excels at binary queries (“yes” or “no”), while Tarot is good for descriptions and feelings. Geomancy answers exactly the question you ask, while Tarot answers the question you should be asking. Geomancy gives you the answer up front and the details later, while Tarot gives you the details first and leaves the overall answer to be judged from them. I’m not trying to shill for geomancy, I’m just giving examples of how geomancy does divination differently than Tarot; after all, I can answer with geomancy anything a Tarot reader can, but I may phrase certain queries differently, or develop an answer differently. The overall result is the same, when all is said and done.
However, this metaphor of divination methods and PLs can show other things, too. A geomancy student of mine recently came to me with an interesting question about a detail of a technique that I don’t personally use, but is documented in an old manuscript. I don’t put any faith in that technique, so I won’t describe it here, but he wanted to know why I didn’t use it, and how we might find out more about it. He asked me whether I’ve ever asked geomancy about itself before, like to do a reading to confirm or deny certain techniques. I…honestly can’t see the point of doing so, but to explain why, it’s time to go back to computer science.
In addition to Turing completeness, there’s this other notion in mathematics that applies to computer science and PLs: Gödel’s incompleteness theorems. It’s a little heady and obtuse, but here’s the gist: say you have some system of describing information, like arithmetic or physics. This system has a logic that allows certain things to be proved true (“if P, then Q; P, therefore Q”) and certain things to be disproved (“if P, then Q; not Q, therefore not P”). Given any such system (at least, any system expressive enough to describe basic arithmetic), you might want it to be the best possible system, one that can prove everything that is true while simultaneously disproving anything that is false. However, there’s an issue with that: you can have consistency or completeness, but not both.
- Consistency is showing that your logic is always sound: you never end up proving something that is false, so only true things can be proved. However, this comes at a cost; if you insist on perfect consistency, you end up with things that are true but that you cannot prove. Your logic, if consistent, can never be complete.
- Completeness is showing that your logic is always full: everything that is true can be proved true. The problem with this, however, is that it’s too permissive; sure, everything that is true can be proved true, but some things that are false end up provable as well, contradictions included. Your logic, if complete, can never be consistent.
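A toy illustration of what soundness buys you, at the level of simple propositional logic (Gödel’s theorems concern far richer systems, but the flavor carries over). We can mechanically check an inference rule against every truth assignment: a *sound* rule never yields a false conclusion from true premises, while an unsound rule is exactly the kind that lets falsehoods get “proved.”

```python
from itertools import product

def implies(p, q):
    return (not p) or q

def sound(premises, conclusion):
    """A rule is sound if, for every truth assignment where all the
    premises hold, the conclusion holds too."""
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

# Modus ponens: from "if P then Q" and "P", conclude "Q". Sound.
modus_ponens = sound([lambda p, q: implies(p, q), lambda p, q: p],
                     lambda p, q: q)

# Affirming the consequent: from "if P then Q" and "Q", conclude "P".
# Unsound: a system that permitted it could prove false things.
affirming = sound([lambda p, q: implies(p, q), lambda p, q: q],
                  lambda p, q: p)

print(modus_ponens, affirming)  # True False
```

Propositional logic is simple enough to be both sound and complete; Gödel’s trade-off only bites once a system is rich enough to talk about arithmetic, and hence about itself.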
When it comes to logical systems, of which there are many, we tend to strive for consistency over completeness. While we’d love a system where everything that could be true is shown as true, we also lose faith in it if we have no means to differentiate the true stuff from the false stuff. Thus, we sacrifice the totality of completeness in favor of the rigor of consistency. After all, if such a system were inconsistent, you’d never be sure whether 2 + 2 equals 4 or 3, whether a computer would work one second or start an AI uprising the next, or whether browsing your favorite porn site would actually give you porn or videocall your mother on Skype. Instead, with a consistent system, we can rest assured that 2 + 2 can never equal 3, that a computer will behave exactly as told, and that porn websites will only give you porn and not an awkward conversation with your mom. However, the cost of this is that I have this thing that is true, but it can’t be proven to be true using that system you like. Unfortunate, but we can make do.
As it turns out, Gödel’s incompleteness theorems apply to any sufficiently expressive system described in terms of itself; you cannot prove (which is a stronger, logical thing to do than simply giving examples) that a given computer, PL, or system of mathematics is consistent by using that selfsame system. If you attempt to do so and end up with such a proof, you end up proving a contradiction; thus, your system of logic has an inconsistency within that system of logic. In order to prove something about the system itself, then, you need something more expressive than that system. For instance, to describe actions, you need sounds; to describe sounds, you need language; and to describe language, you need thought. Each of these is less expressive than the next, and while a system can describe things less expressive than itself, it cannot fully describe itself in its own terms. So, if I have this thing that is true and you can’t prove it to be true using that system you like, then you need something more powerful than that system you like.
Okay, that’s enough heady stuff. How does this apply to divination methods, again? My student wanted to know why I didn’t ask geomancy about itself; the answer is that geomancy can’t answer about itself in terms of itself. Like programming languages’ problem from Gödel, I don’t think a system of knowledge—any system, whether it’s Peano arithmetic or lambda calculus or geomancy—can accurately answer questions about its own internal mechanisms and algorithms. Moreover, whatever is divinable by one divination method is divinable by any of them, and whatever is not divinable by one isn’t divinable by any of them. So if we can’t ask how a divination method works by means of any particular divination method (Tarot with Tarot, geomancy with geomancy, Tarot with geomancy, geomancy with Tarot), then the question of how divinatory methods work cannot be divined at all.
So how do you learn more about techniques for a divination method? Well, as above, if you have a particular system of knowledge and you want to describe it, you need something more powerful than that system. What’s more powerful than, say, geomancy? Something more inclusive and expressive than geomancy; like, say, human language. If you have a question about geomantic techniques, you can’t really go to geomancy to ask about it; you go to a teacher, a mentor, an ancestor, a discussion group to figure it out by means of logic, rationality, and “looking out above” the system itself. You have to inspect the system from the outside in order to see how it works inside, and generally, we need something to show us where to look. That something is usually someone.
Programming languages are not, of course, divination methods. Yes, dear reader who happens to know more about mathematics and the philosophy thereof than I do, I know I’m uncomfortably mixing different types of concepts in this post; divination methods are not instructions, nor are programming languages able to predict the future, barring some new innovation in quantum computing. Still, the point stands, and the concepts introduced in this post hold well and are generalizable enough for my ends here. There are enough parallels between the two to give me a working theory of how divination works, and also of the limits of divination. Just as with the relationship between regular expressions and context-free grammars, where the latter are strictly more expressive and powerful than the former, we need something more expressive and powerful than a divination system to learn how to divine with it. Humans, for instance, fill that role quite nicely; all divination can do is “simulate” human situations, but it cannot simulate every possible situation uniquely. There are human situations that cannot be accurately simulated by divination. Divination, too, is inherently incomplete if we want to place certain faith in our techniques; if, on the other hand, we allow divination to be complete, then we have to scrap the techniques (which then become inconsistent) and be more intuitive instead. In that case, sure, you might be able to get insight on techniques, but it’s not by means of the techniques of the divination system itself; you sidestepped that matter completely.
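The regex-versus-grammar gap mentioned above can be shown in a few lines. Classic regular expressions cannot recognize arbitrarily nested balanced parentheses; a context-free-style procedure (here, a depth counter) can. The particular pattern below is just one illustrative example of a regex that tops out at a fixed nesting depth.

```python
import re

def balanced(s):
    """Context-free-style check: track nesting depth, any depth allowed."""
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:
                return False
    return depth == 0

# A regular expression can only hard-code nesting up to some fixed
# depth; this one handles strings like "()" and "(())" but not "((()))":
depth2 = re.compile(r"^(\((\(\))*\))*$")

print(balanced("((()))"))            # True: any depth is fine
print(bool(depth2.match("(())")))    # True: within the pattern's depth
print(bool(depth2.match("((()))")))  # False: one level too deep
```

The counting procedure sits strictly above the regex in expressive power, just as a human reader sits above any one divination system.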
The primary distinction that I make between a programming language and divination is that the former is the means to solve a (set of) problem(s), whilst the latter is to answer a (set of) question(s) that may or may not be used as part of problem solving.
Totally correct, too! A program is essentially a mathematical proof of a hypothesis (“if P then Q; P, therefore Q”), while a reading is a description used to make a judgment (“if you want Q, you should do P, which can trigger Q”).
That is a good way to summarise it.
Would you consider a ritual/working to be like applying an algorithm? Some scale better than others.
Depends on how you define “algorithm”. If you allow for distinctly mystical, non-logical entities and forces to follow their own internal logic, then absolutely, a ritual can be considered a type of algorithm. Thinking of it like this, however, can easily lead some people to think that magic is no more than a “push button, receive bacon” kind of deal, or as Jason Miller calls it, the “there’s a god for that” app-mentality of magic. Magic doesn’t always behave according to our expectations, but then, maybe we’re not aware of enough of the things that determine whether a given ritual will work in a particular way, if at all.
Come to think of it, many traditions do have rituals by rote, as it were, but each competent system also has a means of error-checking and contingency resolution. Like, in Santeria, you check up with the ancestors first to see if they need anything or if anything needs handling before the main ritual, then you do the main ritual for one or more orisha, and then you check with the orisha to see what they need. If there’s something extra needed, or if something goes wrong, it’s taken care of, or arranged for it to be taken care of. In this sense, competent traditions have rituals with a kind of exception handling or edge case handling to achieve a particular end. And, if need be, the process is aborted if something goes completely haywire and handled by some external ritual process; in such a case, the ritual algorithm wouldn’t have been a good fit for the given inputs to achieve a particular set of outputs. Interesting.
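That ritual-with-exception-handling structure maps neatly onto actual exception-handling code. Here is a playful sketch of the flow described above; every function name and behavior is purely illustrative, not a description of any real tradition’s practice.

```python
class RitualError(Exception):
    """Something went completely haywire mid-process."""

def check_ancestors():
    # preliminary check: anything needing handling before the main work
    return []                        # assume nothing outstanding

def handle(need):
    print(f"handling beforehand: {need}")

def main_ritual(orisha):
    return f"offering made to {orisha}"

def check_orisha(orisha):
    return None                      # assume nothing extra requested

def arrange(extra):
    print(f"arranging follow-up: {extra}")

def perform(orisha):
    try:
        for need in check_ancestors():   # pre-ritual error checking
            handle(need)
        result = main_ritual(orisha)     # the main working
        extra = check_orisha(orisha)     # post-ritual check
        if extra:
            arrange(extra)               # contingency resolution
        return result
    except RitualError:
        # abort and hand off to an external ritual process
        return "aborted; external handling required"

print(perform("Ogun"))  # offering made to Ogun
```

The `try`/`except` block is the contingency layer: the happy path runs by rote, and the exceptional path routes the problem to something outside the main procedure.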
I suppose it’s certainly not wrong to consider ritual as implemented algorithm, but I’d be wary of saying that too loudly, since our awareness of the cosmos is significantly more limited than we care to admit and some people may want to take this at face value. For a mathematical proof where all the axioms and hypotheses are in front of you on paper, everything that matters in your little mini-cosmos is made explicit and there are no side effects; we’re nearly never so lucky when handling spiritual forces.
Interesting. I would qualify the algorithm == ritual with the caveat that it includes all the knowns and unknowns of our reality in the data set.
The algorithm/ritual is targeted to work within a specific time/space, and as Jason states, the practitioner should be specific about the expected exit criteria. Problems arise from a lack of specificity and/or from it manifesting in time/space with unanticipated effects.
Exactly; as with any algorithm, there is a notion of “state” that we, as finite beings, have only a limited awareness of. Without knowing fully the state of the cosmos we’re in (we can’t even do that with a strictly physical system on a small scale!), we cannot perfectly predict the exact outcome of a ritual (or, for that matter, any action we take in any way). We can certainly use probabilistic methods to figure out the general outcome, often with a tolerably small enough range, but we can’t be totally and perfectly specific.