One of the most interesting aspects of Janus notation is trivalent logic. The only natural language with trivalent logic, as far as I know, is Aymará, a relative of Quechua spoken on the altiplano of Bolivia (see www.aymara.org) whose trivalent logic was elucidated by Iván Guzmán de Rojas. However, Aymará only has trivalent modal logic, while Janus logic is completely trivalent.
This chapter will present trivalent logic in more depth than you need to use it, because it's interesting. In preparation, we'll recap ternary numbers and bivalent logic for you in the first three sections.
Many people are familiar with binary, or base-2, numbers, which are the basis for both digital computing and classical (Boolean or Aristotelian) propositional logic. In binary notation, only the digits 0 and 1 are used, and each successive place represents another power of two. Binary is also the justification for octal (base 8) and hexadecimal (base 16) arithmetic, since each octal or hexadecimal digit represents a group of three or four binary digits.
This section will introduce you to another notation: ternary numbers. It's called ternary because there are three digits and each successive place represents another power of three. But instead of using the digits 0, 1, and 2 (which I would call trinary, or base 3), ternary uses the digits 0 and 1 plus the minus sign −, which represents −1. Since the biggest benefits of this approach result from the symmetry of 1 and −1, this notation is also called balanced ternary. Just as a single binary digit is called a bit, a single ternary digit should be called a tert (not trit or tit).
In ternary, the numbers zero and one are represented by 0 and 1, as in binary and decimal (base 10) notation. It first becomes interesting at the number two, which is represented in ternary as 1−. The digit 1 is in the 3s place, representing the value 3, and the digit − is in the 1s place, representing the value −1. Adding 3 and −1 together gives us 2. Likewise, three is written 10, and four is written 11. Here are the first few numbers in ternary (in bold):
0  0  1−−−  14  1001  28 
1  1  1−−0  15  101−  29 
1−  2  1−−1  16  1010  30 
10  3  1−0−  17  1011  31 
11  4  1−00  18  11−−  32 
1−−  5  1−01  19  11−0  33 
1−0  6  1−1−  20  11−1  34 
1−1  7  1−10  21  110−  35 
10−  8  1−11  22  1100  36 
100  9  10−−  23  1101  37 
101  10  10−0  24  111−  38 
11−  11  10−1  25  1110  39 
110  12  100−  26  1111  40 
111  13  1000  27  1−−−−  41 
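This table can be generated mechanically. Here is a sketch in Python (the function name is mine, and an ASCII hyphen stands in for the − digit): the trick is that a base-3 remainder of 2 becomes the digit −, with a carry into the next place.

```python
def to_balanced_ternary(n):
    """Convert an integer to balanced ternary, using '-' for the digit -1."""
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        r = n % 3          # remainder in {0, 1, 2}
        if r == 2:         # a 2 becomes the digit -1, carrying 1 upward
            digits.append("-")
            n = (n + 1) // 3
        else:
            digits.append(str(r))
            n = n // 3
    return "".join(reversed(digits))

print(to_balanced_ternary(8))   # 10-
print(to_balanced_ternary(41))  # 1----
```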
In decimal notation, negative numbers are preceded by a minus sign, which is kind of an 11th digit, since you can't write all the integers without it. (It's too bad that we don't use a leading 0 to represent negative numbers instead, since the two uses are disjoint.) In binary computers, negative integers are represented in two's-complement notation, which requires an upper limit on the size of integers. If 16-bit integers are used, the highest bit, which would normally represent 32,768 (2 to the 15th power), instead represents −32,768! This makes the 16-bit binary representation of −1 a series of sixteen 1s: 1111111111111111, which is to be interpreted as 32767 − 32768. Arcane!
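You can watch this happen in any language with fixed-width integers; a quick Python sketch, masking to 16 bits to simulate such a register:

```python
# Masking with 0xFFFF keeps only the low 16 bits, like a 16-bit register.
sixteen_bit = (-1) & 0xFFFF
print(format(sixteen_bit, "016b"))  # 1111111111111111
# Reading the top bit as -32768 instead of +32768 recovers -1:
print(32767 - 32768)  # -1
```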
In contrast, −1 in ternary notation is simply −, while −2 is −1 and −3 is −0. In general in ternary notation, negative numbers start with −, and the negation of any number just replaces 1s with −s and vice versa. The number eight is 10−, and negative eight is −01. As you saw in Janus notation for numbers, a balanced notation makes negative integers much cleaner.
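Negation is thus a pure digit swap, as a short sketch shows (again with an ASCII hyphen for the − digit; the function name is mine):

```python
def negate_ternary(s):
    """Negate a balanced-ternary numeral by swapping 1s and -s."""
    swap = {"1": "-", "-": "1", "0": "0"}
    return "".join(swap[d] for d in s)

print(negate_ternary("10-"))  # -01: eight becomes negative eight
```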
We won't go much deeper into ternary numbers here, but we will display the addition, subtraction, and multiplication tables (with A on the left, positive numbers in blue, negative numbers in red, and zero in green):




And here are some examples:
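Such sums and products can be checked mechanically by converting numerals to integers and back; a sketch in Python (function names are mine, using an ASCII hyphen for the − digit):

```python
def from_ternary(s):
    """Integer value of a balanced-ternary numeral ('-' is the digit -1)."""
    value = 0
    for d in s:
        value = 3 * value + (-1 if d == "-" else int(d))
    return value

def to_ternary(n):
    """Balanced-ternary numeral for an integer."""
    digits = ""
    while n != 0:
        r = (n + 1) % 3 - 1  # balanced remainder in {-1, 0, 1}
        digits = ("-" if r == -1 else str(r)) + digits
        n = (n - r) // 3
    return digits or "0"

# 5 + 7 = 12, i.e. 1-- + 1-1 = 110
print(to_ternary(from_ternary("1--") + from_ternary("1-1")))  # 110
# 2 x 4 = 8, i.e. 1- x 11 = 10-
print(to_ternary(from_ternary("1-") * from_ternary("11")))    # 10-
```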



While many of you are familiar with binary numbers, most would benefit from a quick recap of classical bivalent propositional logic before proceeding to the trivalent case. This logic is also called Boolean logic after the logician George Boole, although many thinkers from Aristotle to Bertrand Russell made major contributions. The word bivalent means two-valued: in this system, propositions are either true or false.
We aren't going to concern ourselves with the formal aspects of logic, the notions of proof or the development of theorems from axioms. Instead, we're going to offer a whirlwind tour of bivalent notation to prepare you for what follows.
The relationship of bivalent logic to binary arithmetic is one of homology, meaning that entities and relationships in one field often correspond to similar ones in the other. For example, consider the bivalent operations of disjunction ∨ and conjunction ∧, the logical operations corresponding to our words or (in its inclusive sense, where both choices might be true) and and.
logical disjunction

logical conjunction

The way to interpret the table on the left is that if proposition A is false and proposition B is false, then proposition A∨B (A or B) is false; otherwise, it's true. In other words, if either of two propositions is true, then their disjunction is, too. Likewise, the interpretation of the table on the right is that if proposition A is true and proposition B is true, then proposition A∧B (A and B) is true; otherwise, it's false. In other words, if either of two propositions is false, then their conjunction is, too.
There are actually two homologies with binary arithmetic. The first matches the two operations above with the binary max and min operations, which return the larger and smaller of two numbers, respectively. The number one is assigned to the truth value true, and zero is assigned to the truth value false.
binary maximum

binary minimum

The other homology with arithmetic matches the two logical operations with binary addition and multiplication, but involves the signs of the numbers, not the numbers themselves. In this case, all the positive numbers are assigned to the truth value true.
binary addition

binary multiplication
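Both homologies can be checked exhaustively. A small Python sketch (with 1 for true and 0 for false in the first homology; the second, matching the text, assigns positive numbers to true, and the sum-sign correspondence holds for nonnegative inputs):

```python
# First homology: 1 is true, 0 is false; disjunction ~ max, conjunction ~ min.
for a in (0, 1):
    for b in (0, 1):
        assert (max(a, b) == 1) == (bool(a) or bool(b))   # disjunction ~ maximum
        assert (min(a, b) == 1) == (bool(a) and bool(b))  # conjunction ~ minimum

# Second homology: any positive number counts as true (nonnegative inputs).
for a in (0, 1, 2):
    for b in (0, 1, 2):
        assert (a + b > 0) == (a > 0 or b > 0)   # disjunction ~ sign of the sum
        assert (a * b > 0) == (a > 0 and b > 0)  # conjunction ~ sign of the product

print("both homologies verified")
```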

There is even a third leg to this homology: set theory. In elementary set theory, we are concerned with whether an element e is in set A, symbolized as e∈A, or not, symbolized as e∉A. The set of all elements which are in either of two sets is called their union ∪, while the set of all elements which are in both of two sets is called their intersection ∩. Here are their tables:
union of sets

intersection of sets
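These relationships are easy to confirm with Python's built-in sets (the example sets and elements here are mine):

```python
A = {1, 2, 3}
B = {3, 4}

# Membership in the union matches disjunction; in the intersection, conjunction.
for e in (1, 3, 4, 5):
    assert (e in A | B) == (e in A or e in B)   # union ~ disjunction
    assert (e in A & B) == (e in A and e in B)  # intersection ~ conjunction
```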

These two operators, disjunction/addition/union and conjunction/multiplication/intersection, are called binary or two-place operators because they depend on two arguments, or inputs. They're also called connectives because they connect two values. An operator that depends on only one input is called unary or one-place, and an operator that doesn't depend on any inputs is called a constant or zero-place operator.
There are five other two-place operators in bivalent logic that are interesting to us, plus a single one-place operator. Here are their truth tables:
logical negation

logical subtraction

logical alternation
 
logical implication

logical equality

logical inequality

Negation changes 1s to 0s and vice versa. Surprisingly, its homologue in binary arithmetic is not negation, but complementation: 1−A, which is also its homologue in set theory. We'll be talking much more about negation below.
Subtraction is homologous with the same operations in arithmetic and set theory. A−B is synonymous with A∧¬B, just as it is with A×¬B and A∩¬B. (Note that in ordinary arithmetic, A−B means A+(−B).)
Alternation is also called exclusive-or or XOR. It is the equivalent of the usual English meaning of the word or, which excludes having both. If you say "Would you like red wine or white?", you are usually offering a choice, not both. In bivalent logic, this is synonymous with A∨B − A∧B.
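In Python, for example, the `^` operator on booleans behaves as exclusive-or, and a quick check confirms that it is disjunction minus conjunction:

```python
for A in (True, False):
    for B in (True, False):
        # A xor B holds exactly when the disjunction holds but the conjunction doesn't
        assert (A ^ B) == ((A or B) and not (A and B))
```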
Implication, also called the conditional, is the operation most fraught with profound meaning, which unfortunately we can't explore here. Its homologues are arithmetic less-than-or-equal-to ≤ and set inclusion ⊆. In bivalent logic, this is synonymous with ¬A∨B: either A is false or B is true. I didn't bother showing the reversed version <= / ≥ / ⊇, which just points in the other direction.
Equality, properly called equivalence, coimplication, or the biconditional, is a synonym for implication in both directions: A<=>B means A<=B and A=>B. The homologues are equality = in both arithmetic and set theory.
Inequality is the negation of equality, homologous with inequality ≠. Note that its truth table is identical with that of alternation, but I have shown them both since they differ in the trivalent case.
There's one more degenerate unary operator, Assertion, which is the Identity operator: the assertion of a true proposition is true, and the assertion of a false proposition is false. I mention it to be complete, and also because it's interesting to know that asserting a proposition is equivalent to asserting that it's true.
These truth tables can also be expressed in columnar form, where each entry has its own row, but we don't need to do that here. They can also be expressed as rules of inference, along the lines of "If A is true and B is true, then A∧B is true". Rules of inference might also work backwards, like "If A∧B is true, then A is true (and B is true)". Or they could do both, like "If A is true and A→B is true, then B is true". This last one is called modus ponens, and is the fundamental rule of inference.
Using either truth tables or rules, we could construct proofs of propositions, where each step derives logically from previous steps. Often, these proofs start with certain assumptions and try to deduce the consequences, but sometimes there are no assumptions, and so the proposition is universally true.
One universal truth in classical logic is ¬(A ∧ ¬A), which is called the law of noncontradiction. It states that a proposition can't be true and false at the same time: that would be a contradiction.
Another universal truth is A ∨ ¬A, called the law of the excluded middle or the law of bivalence, which asserts that every proposition is either true or false: there's no other choice. It could be expressed as A ⊕ ¬A (using ⊕ for alternation), since it's never true that both A and ¬A are true at the same time. In fact, if you assume B and are able to derive a contradiction such as A∧¬A, there must be something wrong with B; this type of reasoning is called reductio ad absurdum.
Oddly enough, this type of thing isn't what logicians do! Instead, they abstract a level or two, and make assertions about whole logical systems at once. The bivalent logic presented in this section is one such system, and in fact is used as a basis for many systems, such as predicate logic, which introduces quantifiers like for all ∀ and there exists ∃, and modal logic, which introduces modal operators like □ Necessary and ◇ Possible.
In the 1930s, a logician named Kurt Gödel proved that any classical logical system powerful enough to express arithmetic would be either inconsistent (meaning it could prove contradictions) or incomplete (meaning it would leave some true propositions unprovable). This result, which is called the Incompleteness Theorem, was quite a blow to philosophy, as it seems to state that some things must always remain unknowable.
Among the alternatives that have been explored are bivalent logics that reject the law of bivalence. At least one of these systems (Fitch) can be proven to be both consistent and complete, and it's powerful enough to serve as the basis for arithmetic. It's not that there's any alternative to a statement being either true or false (the system is still bivalent), but that bivalence isn't a law, or axiom, and thus reductio ad absurdum doesn't work.
That's too bad, because if you had a system where reductio ad absurdum worked, and you showed (using the Incompleteness Theorem) that the law of bivalence led to inconsistency, then you would have proven that the universe isn't bivalent!
The logic above can be extended to cover cases when we don't know whether a proposition is true or not, for example because it refers to the future. This is called modal logic, and the two traditional modal operators are Necessity and Possibility, represented by □ and ◇, respectively. We also use the word Impossible as a shorthand for "not Possible". By definition, if a proposition is Necessary, then it must also be Possible.
For example, □"Barça will win the Champions League" means it is necessary that Barça win the Champions League, or Barça will necessarily win the Champions League, or simply Barça must win the Champions League. ◇"Barça will win the Champions League" means it is possible that Barça will win the Champions League, or Barça will possibly win the Champions League, or simply Barça might win the Champions League.
There is a set of relationships between these two operators, mediated by negation: for example, □p = ¬◇¬p (p is necessary exactly when its negation isn't possible) and ◇p = ¬□¬p (p is possible exactly when its negation isn't necessary).
For instance, the first one says that if p must be true, then ¬p can't be true (it's not possible that p be false).
There is a strong homology here with quantification, and in fact modal logic can be seen as quantification over a set of possible future worlds, or possible unknown facts:
So Necessary, Possible and Impossible correspond to English All/Every/Each, Some/A and No/None/Not Any, and □ and ◇ correspond to ∀ and ∃.
I'm going to take the bivalent case one step further, so you'll recognize it when you see it in the trivalent case.
The proposition p, whose truth value is unknown to us, can be described as Necessary, Possible or Impossible. If it's Necessary, it's also Possible, so I'm going to introduce a new term, Potential, to mean "Possible but not Necessary". Then a proposition must be Necessary, Potential or Impossible, but only one of the three. Likewise, the negation of a proposition could be Necessary, Potential or Impossible, but only one of the three.
On the face of it, that gives us nine combinations. But given the laws of bivalence and noncontradiction, it turns out that there are only three viable combinations: p Necessary and ¬p Impossible, p Potential and ¬p Potential, or p Impossible and ¬p Necessary.
I'm going to call these three combinations Modalities, and give them the following names:
In Janus logic, there are three truth values: true, false, and wrong. True means the same in Janus as it does in classical logic, but false means something different. In classical logic, false means "not true", but in Janus "A is false" means "the negation of A is true".
That's a subtle difference, but consider the proposition "The King of France is Japanese". That statement is clearly not true, so in classical logic it's false. But its negation, "The King of France is not Japanese", is also not true, since there is no King of France (not since 1848). So in Shwa, both the original statement and its negation are wrong. That's what wrong means: that neither the statement nor its negation are true.
[To be fair, some classical logicians would say that "The King of France is Japanese" is not a proposition, since it has no referent. Others would say that the statement is false, and that its negation is "It's not true that the King of France is Japanese", which is true. Yet others would say that it means "There is a King of France, and he's Japanese", which is false and has the negation "There is no King of France or he's not Japanese", which is true. Still others would say that it means that anybody who is the King of France is also Japanese, in other words that being the King of France implies being Japanese, and since the premise of the conditional is always false (there is no King of France), the proposition as it reads is true, and so is its apparent negation!]
If a proposition is Wrong, that doesn't mean it's meaningless, like "Colorless green ideas sleep furiously". The sentence "The King of France is Japanese" isn't meaningless; there just happens not to be a King of France right now. This is a case of a missing referent, but not all Wrong propositions lack referents. For instance, "Shakespeare left Alaska by airplane" isn't missing any referents: both Shakespeare and Alaska existed, and so do airplanes. But Shakespeare never went to Alaska, so he could never have left it by airplane or any other way. You could say the proposition is false, but it's a funny kind of false, since its negation, "Shakespeare didn't leave Alaska by airplane", is also false. That's called presupposition failure.
But the best examples of wrong statements are in the middle between true and false. Imagine that it's not really raining, but it's drizzling, so it seems wrong to say "It's not raining". Or the proposition that zero is a natural number, or that i ≥ −i (where i represents √−1): they're neither true nor false. Finally, you can use Wrong to respond to a query where neither yes nor no seems truthful, for instance if I ask you whether the stock market went up after the Great Crash of 1929. Well, yes it did, but first it fell, and it remained below its previous levels for many years afterwards. It doesn't really matter how a Wrong statement is untrue, as long as its negation is also untrue.
But a proposition isn't Wrong just because you don't know whether it's true or not, for instance because it's in the future. Both future events and simple ignorance are examples of modal statements, which we'll discuss below.
Bivalent logic is deeply embedded in English, which makes it difficult to express trivalent statements. To compensate, I'll use the English words "false", "negation", and "not" only to indicate falseness, and the words "wrong", "objection", and "neither" to indicate wrongness, as in "It's neither raining (nor not raining)".
By the way, the three truth values of Janus ternary logic (True, False and Wrong) are homologous with the three digits of ternary numbers (1, −, and 0), and also with the three signs (plus, minus, and zero) of real arithmetic. Because of that, from now on I'll put Wrong before False, so the normal order will be True, Wrong, False, with Wrong in the middle.
Now that you know what Wrong means, and how it's different from False, let's consider how trivalent logic works.
The most important operators are ¬ and ~: ¬p means "the negation of p", and ~p means "the objection of p". Here are their truth tables:
Proposition  Negation  Objection  Assertion 

p  ¬p  ~p  p 
true  false  wrong  true 
wrong  wrong  true  wrong 
false  true  wrong  false 
I added a third unary operator, Assertion, to the end of the chart. It's not very important, except to note that saying something is equivalent to saying it's true. For example, "Roses are red" is equivalent to "It's true that roses are red".
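Using the homology with the signs of arithmetic mentioned above (true = 1, wrong = 0, false = −1), these three unary operators can be sketched in a few lines of Python:

```python
TRUE, WRONG, FALSE = 1, 0, -1  # homologous with the signs +, 0, -

def negation(p):
    """Negation swaps true and false, and leaves wrong unchanged."""
    return -p

def objection(p):
    """Objection is true exactly when p is wrong; otherwise it is wrong."""
    return 1 - abs(p)

def assertion(p):
    """The identity operator: asserting p is asserting that p is true."""
    return p

# Reproduce the truth table above
assert negation(TRUE) == FALSE and negation(WRONG) == WRONG and negation(FALSE) == TRUE
assert objection(TRUE) == WRONG and objection(WRONG) == TRUE and objection(FALSE) == WRONG
assert assertion(WRONG) == WRONG
```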
Let's also discuss some two-place connectives. The first two are straightforward extensions of the bivalent forms:
trivalent disjunction

trivalent conjunction

These two connectives are homologous with ternary maximum and minimum, respectively.
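Under the numeric coding of the truth values as signs (true = 1, wrong = 0, false = −1), these two connectives are simply maximum and minimum; a minimal sketch:

```python
TRUE, WRONG, FALSE = 1, 0, -1

def disjunction(a, b):
    """Trivalent A ∨ B, homologous with the ternary maximum."""
    return max(a, b)

def conjunction(a, b):
    """Trivalent A ∧ B, homologous with the ternary minimum."""
    return min(a, b)

assert disjunction(TRUE, WRONG) == TRUE    # true ∨ wrong = true
assert disjunction(WRONG, FALSE) == WRONG  # wrong ∨ false = wrong
assert conjunction(TRUE, WRONG) == WRONG   # true ∧ wrong = wrong
assert conjunction(WRONG, FALSE) == FALSE  # wrong ∧ false = false
```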
The implication connective is homologous with ≤, and coimplication with =. As in the bivalent case, coimplication is the homologue of equality. In other words, if A<=B and A=>B, then A<=>B.
trivalent implication

trivalent coimplication

As you were warned, trivalent inequality is not equivalent to alternation.
trivalent alternation

trivalent inequality

The trivalent alternation operator I show here has an advantage over the bivalent one. Bivalent alternation cannot be chained the way conjunction and disjunction can: A ⊕ B ⊕ C will be true if all three are true, not just one (no matter how you associate it). But trivalent alternation can be chained: A ⊕ B ⊕ C will be true if and only if just one of the three is true, and it will only be false if all of them are false.
There are two other connectives in trivalent logic that have no bivalent homologues, although they do in ternary arithmetic: addition (ignoring carries) and multiplication:
trivalent addition

trivalent multiplication
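Assuming the stated homology with single-digit balanced-ternary arithmetic (and the coding true = 1, wrong = 0, false = −1), these two connectives can be sketched as follows; the addition keeps only the low digit of the balanced-ternary sum, as the text describes:

```python
TRUE, WRONG, FALSE = 1, 0, -1

def tert_add(a, b):
    """Single-tert balanced-ternary addition, ignoring the carry."""
    return (a + b + 1) % 3 - 1

def tert_mul(a, b):
    """Single-tert balanced-ternary multiplication (never produces a carry)."""
    return a * b

assert tert_add(TRUE, TRUE) == FALSE   # 1 + 1 = 2, written 1-, whose low digit is -
assert tert_add(FALSE, FALSE) == TRUE  # -2 is written -1 in ternary; low digit is 1
assert tert_add(TRUE, FALSE) == WRONG  # 1 + -1 = 0
assert tert_mul(FALSE, FALSE) == TRUE  # -1 x -1 = 1
```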

As in the bivalent logic described above, Janus logic also has a law of trivalence, which states that a proposition must be either true or false or wrong, and a law of noncontradiction which states that it can be only one of the three. And we can derive the Janus equivalent of De Morgan's laws: ¬(A∨B) = ¬A∧¬B and ¬(A∧B) = ¬A∨¬B.
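A brute-force check over all nine value pairs confirms that the De Morgan equivalences ¬(A∨B) = ¬A∧¬B and ¬(A∧B) = ¬A∨¬B hold trivalently (using the numeric coding true = 1, wrong = 0, false = −1, with negation as a sign flip and ∨/∧ as max/min):

```python
values = (1, 0, -1)  # true, wrong, false; negation is sign flip
for a in values:
    for b in values:
        assert -max(a, b) == min(-a, -b)  # ¬(A∨B) = ¬A∧¬B
        assert -min(a, b) == max(-a, -b)  # ¬(A∧B) = ¬A∨¬B
print("De Morgan holds in the trivalent case")
```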
As I mentioned above, wrong has nothing to do with the question of whether you know a truth value. Propositions are wrong because they are neither true nor false, not because you don't know whether they're true or false. Instead, there is a whole set of modalities to specify precisely how much you know about a proposition.
In the section above on bivalent modal logic, I introduced two modal operators, □ and ◇. We use the same two in trivalent logic, with the same meanings. However, the relationships linking them via negation are weaker. In particular, if you can eliminate one of the truth values, you can't assume it's the other, as you can in the bivalent case.
In fact, there are a total of seven possible modalities:
For example, consider the sentence "Santa Claus likes milk and cookies". Well, if he exists, it's true, but we don't know whether he really exists (I've seen him many times, but I'm still skeptical). If he doesn't exist, the sentence is wrong. Since we don't know which it is, the sentence is Certainly Not False. But if it turns out that he does exist but he doesn't like milk and cookies, then the sentence was false, and it was also false to say that it was Certainly Not False, but it wasn't wrong to say so!
You may be wondering what the benefit is of all this complication. First of all, it brings sentences like "I did not have sex with that woman" into the purview of logic, as opposed to dismissing them as aberrant.
More interestingly, trivalent logic can actually draw sure conclusions from unsure premises. In bivalent modal logic, there is no middle ground between knowing nothing about the truth of a proposition and knowing everything about it. But in trivalent modal logic, we can know something about the truth of a proposition.
© 2002–2016 Shwa · shwa@shwa.org · 03oct16