Difference between revisions of "Directory:Jon Awbrey/Papers/Cactus Language"

MyWikiBiz, Author Your Legacy — Thursday December 05, 2024
 
{{DISPLAYTITLE:Cactus Language}}
'''Author: [[User:Jon Awbrey|Jon Awbrey]]'''
<div class="nonumtoc">__TOC__</div>
 
  
 
==The Cactus Patch==
 
<p>Thus, what looks to us like a sphere of scientific knowledge more accurately should be represented as the inside of a highly irregular and spiky object, like a pincushion or porcupine, with very sharp extensions in certain directions, and virtually no knowledge in immediately adjacent areas.  If our intellectual gaze could shift slightly, it would alter each quill's direction, and suddenly our entire reality would change.</p>
 
|-
| align="right" | &mdash; Herbert J. Bernstein, &ldquo;Idols of Modern Science&rdquo;, [HJB, 38]
|}
  
 
In the usual way of proceeding on formal grounds, meaning is added by giving each grammatical sentence, or each syntactically distinguished string, an interpretation as a logically meaningful sentence, in effect, equipping or providing each abstractly well-formed sentence with a logical proposition for it to denote.  A semantic interpretation of the cactus language is carried out in Subsection 1.3.10.12.
  
===The Cactus Language : Syntax===
  
 
{| align="center" cellpadding="0" cellspacing="0" width="90%"
|}
 
As a temporary notation, let the relationship between a particular sign <math>s\!</math> and a particular object <math>o\!</math>, namely, the fact that <math>s\!</math> denotes <math>o\!</math> or the fact that <math>o\!</math> is denoted by <math>s\!</math>, be symbolized in one of the following two ways:
  
{| align="center" cellpadding="8" width="90%"
|
<math>\begin{array}{lccc}
1. & s & \rightarrow & o \\
\\
2. & o & \leftarrow  & s \\
\end{array}</math>
|}
 
Now consider the following paradigm:
  
{| align="center" cellpadding="8" width="90%"
|
<math>\begin{array}{llccc}
\end{array}</math>
|}
  
{| align="center" cellpadding="8" width="90%"
|
<math>\begin{array}{llccc}
\end{array}</math>
|}
  
When I say that the sign "blank" denotes the sign "&nbsp;", it means that the string of characters inside the first pair of quotation marks can be used as another name for the string of characters inside the second pair of quotes.  In other words, "blank" is a higher order sign whose object is "&nbsp;", and the string of five characters inside the first pair of quotation marks is a sign at a higher level of signification than the string of one character inside the second pair of quotation marks.  This relationship can be abbreviated in either one of the following ways:
 
  
{| align="center" cellpadding="8" width="90%"
|
<math>\begin{array}{lll}
^{\backprime\backprime}\operatorname{~}^{\prime\prime} &
\leftarrow &
^{\backprime\backprime}\operatorname{blank}^{\prime\prime} \\
\\
^{\backprime\backprime}\operatorname{blank}^{\prime\prime} &
\rightarrow &
^{\backprime\backprime}\operatorname{~}^{\prime\prime} \\
\end{array}</math>
|}
  
Using the raised dot "<math>\cdot</math>" as a sign to mark the articulation of a quoted string into a sequence of possibly shorter quoted strings, and thus to mark the concatenation of a sequence of quoted strings into a possibly larger quoted string, one can write:
 
  
{| align="center" cellpadding="8" width="90%"
|
<math>\begin{array}{lllll}
^{\backprime\backprime}\operatorname{~}^{\prime\prime}
& \leftarrow &
^{\backprime\backprime}\operatorname{blank}^{\prime\prime}
& = &
^{\backprime\backprime}\operatorname{b}^{\prime\prime} \, \cdot \,
^{\backprime\backprime}\operatorname{l}^{\prime\prime} \, \cdot \,
^{\backprime\backprime}\operatorname{a}^{\prime\prime} \, \cdot \,
^{\backprime\backprime}\operatorname{n}^{\prime\prime} \, \cdot \,
^{\backprime\backprime}\operatorname{k}^{\prime\prime} \\
\end{array}</math>
|}
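The articulation marked by the raised dot behaves exactly like ordinary string concatenation, which a one-line check can confirm (an illustration of mine, not part of the original text):

```python
# The raised-dot articulation above, rendered as ordinary string
# concatenation: joining the five one-character strings recovers
# the five-character string named by "blank".
parts = ["b", "l", "a", "n", "k"]
word = "".join(parts)   # "b" . "l" . "a" . "n" . "k"

assert word == "blank"
assert len(word) == 5
```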
  
This usage allows us to refer to the blank as a type of character, and also to refer to any blank we choose as a token of this type, referring to either of them in a marked way, but without the use of quotation marks, as I just did.  Now, since a blank is just what the name "blank" names, it is possible to represent the denotation of the sign "&nbsp;" by the name "blank" in the form of an identity between the named objects, thus:
  
{| align="center" cellpadding="8" width="90%"
|
<math>\begin{array}{lll}
^{\backprime\backprime}\operatorname{~}^{\prime\prime} & = & \operatorname{blank} \\
\end{array}</math>
|}
  
With these kinds of identity in mind, it is possible to extend the use of the "<math>\cdot</math>" sign to mark the articulation of either named or quoted strings into both named and quoted strings.  For example:
 
  
{| align="center" cellpadding="8" width="90%"
|
<math>\begin{array}{lclcl}
^{\backprime\backprime}\operatorname{~~}^{\prime\prime}
& = &
^{\backprime\backprime}\operatorname{~}^{\prime\prime} \, \cdot \,
^{\backprime\backprime}\operatorname{~}^{\prime\prime}
& = &
\operatorname{blank} \, \cdot \, \operatorname{blank} \\
\\
^{\backprime\backprime}\operatorname{~blank}^{\prime\prime}
& = &
^{\backprime\backprime}\operatorname{~}^{\prime\prime} \, \cdot \,
^{\backprime\backprime}\operatorname{blank}^{\prime\prime}
& = &
\operatorname{blank} \, \cdot \,
^{\backprime\backprime}\operatorname{blank}^{\prime\prime} \\
\\
^{\backprime\backprime}\operatorname{blank~}^{\prime\prime}
& = &
^{\backprime\backprime}\operatorname{blank}^{\prime\prime} \, \cdot \,
^{\backprime\backprime}\operatorname{~}^{\prime\prime}
& = &
^{\backprime\backprime}\operatorname{blank}^{\prime\prime} \, \cdot \,
\operatorname{blank}
\end{array}</math>
|}
  
 
A few definitions from formal language theory are required at this point.
  
An ''alphabet'' is a finite set of signs, typically, <math>\mathfrak{A} = \{ \mathfrak{a}_1, \ldots, \mathfrak{a}_n \}.</math>
  
A ''string'' over an alphabet <math>\mathfrak{A}</math> is a finite sequence of signs from <math>\mathfrak{A}.</math>
  
The ''length'' of a string is just its length as a sequence of signs.

The ''empty string'' is the unique sequence of length 0.  It is sometimes denoted by an empty pair of quotation marks, <math>^{\backprime\backprime\prime\prime},</math> but more often by the Greek symbols epsilon or lambda.

A sequence of length <math>k > 0\!</math> is typically presented in the concatenated forms:

{| align="center" cellpadding="4" width="90%"
|
<math>s_1 s_2 \ldots s_{k-1} s_k\!</math>
|}
  
 
or
  
{| align="center" cellpadding="4" width="90%"
|
<math>s_1 \cdot s_2 \cdot \ldots \cdot s_{k-1} \cdot s_k</math>
|}
  
with <math>s_j \in \mathfrak{A}</math> for all <math>j = 1 \ldots k.</math>
  
 
Two alternative notations are often useful:
  
{| align="center" cellpadding="4" style="text-align:center" width="90%"
|-
| <math>\varepsilon\!</math>
| =
| <math>{}^{\backprime\backprime\prime\prime}\!</math>
| =
| align="left" | the empty string.
|-
| <math>\underline\varepsilon\!</math>
| =
| <math>\{ \varepsilon \}\!</math>
| =
| align="left" | the language consisting of a single empty string.
|}
  
The ''Kleene star'' <math>\mathfrak{A}^*</math> of an alphabet <math>\mathfrak{A}</math> is the set of all strings over <math>\mathfrak{A}.</math>  In particular, <math>\mathfrak{A}^*</math> includes among its elements the empty string <math>\varepsilon.</math>
 
  
The ''Kleene plus'' <math>\mathfrak{A}^+</math> of an alphabet <math>\mathfrak{A}</math> is the set of all positive length strings over <math>\mathfrak{A},</math> in other words, everything in <math>\mathfrak{A}^*</math> but the empty string.
 
  
A ''formal language'' <math>\mathfrak{L}</math> over an alphabet <math>\mathfrak{A}</math> is a subset of <math>\mathfrak{A}^*.</math>  In brief, <math>\mathfrak{L} \subseteq \mathfrak{A}^*.</math>  If <math>s\!</math> is a string over <math>\mathfrak{A}</math> and if <math>s\!</math> is an element of <math>\mathfrak{L},</math> then it is customary to call <math>s\!</math> a ''sentence'' of <math>\mathfrak{L}.</math>  Thus, a formal language <math>\mathfrak{L}</math> is defined by specifying its elements, which amounts to saying what it means to be a sentence of <math>\mathfrak{L}.</math>
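The definitions above can be exercised on a small example.  The following sketch is mine, with a hypothetical two-letter alphabet; since <math>\mathfrak{A}^*</math> is infinite, only strings up to a bounded length are enumerated:

```python
from itertools import product

# A sketch of the definitions above, over a small illustrative alphabet.
ALPHABET = {"a", "b"}          # a finite set of signs
EMPTY = ""                     # the empty string, epsilon

def strings_up_to(alphabet, max_len):
    """Enumerate the strings of A* up to a given length."""
    for k in range(max_len + 1):
        for combo in product(sorted(alphabet), repeat=k):
            yield "".join(combo)

# A formal language over A is any subset of A*, for example the
# strings of even length:
language = {s for s in strings_up_to(ALPHABET, 4) if len(s) % 2 == 0}

assert EMPTY in language                 # epsilon has even length 0
assert "ab" in language                  # a sentence of the language
assert "aba" not in language             # odd length, not a sentence
assert len(language) == 21               # 1 + 4 + 16 even-length strings
```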
 
  
One last device turns out to be useful in this connection.  If <math>s\!</math> is a string that ends with a sign <math>t,\!</math> then <math>s \cdot t^{-1}</math> is the string that results by ''deleting'' from <math>s\!</math> the terminal <math>t.\!</math>
 
  
 
In this context, I make the following distinction:
  
# To ''delete'' an appearance of a sign is to replace it with an appearance of the empty string "".
# To ''erase'' an appearance of a sign is to replace it with an appearance of the blank symbol "&nbsp;".

A ''token'' is a particular appearance of a sign.
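The deletion versus erasure distinction can be made concrete in a short sketch.  The function names below are mine, chosen for illustration:

```python
# Deleting a sign replaces it with the empty string "", so the string
# gets shorter; erasing a sign replaces it with the blank " ", so the
# string keeps its length.  Each function acts on one appearance (token).
def delete_sign(s: str, sign: str) -> str:
    return s.replace(sign, "", 1)    # remove one appearance outright

def erase_sign(s: str, sign: str) -> str:
    return s.replace(sign, " ", 1)   # leave a blank in its place

assert delete_sign("cactus", "c") == "actus"
assert erase_sign("cactus", "c") == " actus"
assert len(erase_sign("cactus", "c")) == len("cactus")  # length preserved
```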
+
The informal mechanisms that have been illustrated in the immediately preceding discussion are enough to equip the rest of this discussion with a moderately exact description of the so-called ''cactus language'' that I intend to use in both my conceptual and my computational representations of the minimal formal logical system that is variously known to sundry communities of interpretation as ''propositional logic'', ''sentential calculus'', or more inclusively, ''zeroth order logic'' (ZOL).

The ''painted cactus language'' <math>\mathfrak{C}</math> is actually a parameterized family of languages, consisting of one language <math>\mathfrak{C}(\mathfrak{P})</math> for each set <math>\mathfrak{P}</math> of ''paints''.

The alphabet <math>\mathfrak{A} = \mathfrak{M} \cup \mathfrak{P}</math> is the disjoint union of two sets of symbols:

<ol style="list-style-type:decimal">
  
<li>
<p><math>\mathfrak{M}</math> is the alphabet of ''measures'', the set of ''punctuation marks'', or the collection of ''syntactic constants'' that is common to all of the languages <math>\mathfrak{C}(\mathfrak{P}).</math>  This set of signs is given as follows:</p>

<p><math>\begin{array}{lccccccccccc}
\mathfrak{M}
& = &
\{ &
\mathfrak{m}_1 & , &
\mathfrak{m}_2 & , &
\mathfrak{m}_3 & , &
\mathfrak{m}_4 &
\} \\
& = &
\{ &
^{\backprime\backprime} \, \operatorname{~} \, ^{\prime\prime} & , &
^{\backprime\backprime} \, \operatorname{(} \, ^{\prime\prime} & , &
^{\backprime\backprime} \, \operatorname{,} \, ^{\prime\prime} & , &
^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime} &
\} \\
& = &
\{ &
\operatorname{blank} & , &
\operatorname{links} & , &
\operatorname{comma} & , &
\operatorname{right} &
\} \\
\end{array}</math></p></li>
  
<li>
<p><math>\mathfrak{P}</math> is the ''palette'', the alphabet of ''paints'', or the collection of ''syntactic variables'' that is peculiar to the language <math>\mathfrak{C}(\mathfrak{P}).</math>  This set of signs is given as follows:</p>

<p><math>\mathfrak{P} = \{ \mathfrak{p}_j : j \in J \}.</math></p></li>

</ol>
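The two-part alphabet can be pictured in a few lines of code.  This is a sketch of mine: the palette is a hypothetical choice, and the single-character "(" and ")" stand in for the text's two-character "-(" and ")-" marks:

```python
# A sketch of the alphabet A = M |_| P described above.  M holds the
# four punctuation measures; P is one possible palette of paints (any
# finite set disjoint from M would serve).
MEASURES = {" ", "(", ",", ")"}      # blank, links, comma, right
PALETTE = {"p", "q", "r"}            # paints p_j, chosen for illustration
ALPHABET = MEASURES | PALETTE        # the disjoint union M |_| P

assert MEASURES.isdisjoint(PALETTE)  # the union really is disjoint
assert len(ALPHABET) == len(MEASURES) + len(PALETTE)
```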
The easiest way to define the language <math>\mathfrak{C}(\mathfrak{P})\!</math> is to indicate the general sorts of operations that suffice to construct the greater share of its sentences from the specified few of its sentences that require a special election.  In accord with this manner of proceeding, I introduce a family of operations on strings of <math>\mathfrak{A}^*\!</math> that are called ''syntactic connectives''.  If the strings on which they operate are exclusively sentences of <math>\mathfrak{C}(\mathfrak{P}),\!</math> then these operations are tantamount to ''sentential connectives'', and if the syntactic sentences, considered as abstract strings of meaningless signs, are given a semantics in which they denote propositions, considered as indicator functions over some universe, then these operations amount to ''propositional connectives''.

Rather than presenting the most concise description of these languages right from the beginning, it serves comprehension to develop a picture of their forms in gradual stages, starting from the most natural ways of viewing their elements, if somewhat at a distance, and working through the most easily grasped impressions of their structures, if not always the sharpest acquaintances with their details.

The first step is to define two sets of basic operations on strings of <math>\mathfrak{A}^*.</math>

<ol style="list-style-type:decimal">
  
<li>
<p>The ''concatenation'' of one string <math>s_1\!</math> is just the string <math>s_1.\!</math></p>

<p>The ''concatenation'' of two strings <math>s_1, s_2\!</math> is the string <math>{s_1 \cdot s_2}.\!</math></p>

<p>The ''concatenation'' of the <math>k\!</math> strings <math>(s_j)_{j = 1}^k\!</math> is the string of the form <math>{s_1 \cdot \ldots \cdot s_k}.\!</math></p></li>

<li>
<p>The ''surcatenation'' of one string <math>s_1\!</math> is the string <math>^{\backprime\backprime} \, \operatorname{(} \, ^{\prime\prime} \, \cdot \, s_1 \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime}.</math></p>

<p>The ''surcatenation'' of two strings <math>s_1, s_2\!</math> is <math>^{\backprime\backprime} \, \operatorname{(} \, ^{\prime\prime} \, \cdot \, s_1 \, \cdot \, ^{\backprime\backprime} \, \operatorname{,} \, ^{\prime\prime} \, \cdot \, s_2 \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime}.</math></p>

<p>The ''surcatenation'' of the <math>k\!</math> strings <math>(s_j)_{j = 1}^k</math> is the string of the form <math>^{\backprime\backprime} \, \operatorname{(} \, ^{\prime\prime} \, \cdot \, s_1 \, \cdot \, ^{\backprime\backprime} \, \operatorname{,} \, ^{\prime\prime} \, \cdot \, \ldots \, \cdot \, ^{\backprime\backprime} \, \operatorname{,} \, ^{\prime\prime} \, \cdot \, s_k \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime}.</math></p></li>

</ol>
  
These definitions can be made a little more succinct by defining the following sorts of generic operators on strings:

<ol style="list-style-type:decimal">
 
  
<li>The ''concatenation'' <math>\operatorname{Conc}_{j=1}^k</math> of the sequence of <math>k\!</math> strings <math>(s_j)_{j=1}^k</math> is defined recursively as follows:</li>

<ol style="list-style-type:lower-alpha">

<li><math>\operatorname{Conc}_{j=1}^1 s_j \ = \ s_1.</math></li>

<li>
<p>For <math>\ell > 1,\!</math></p>

<p><math>\operatorname{Conc}_{j=1}^\ell s_j \ = \ \operatorname{Conc}_{j=1}^{\ell - 1} s_j \, \cdot \, s_\ell.</math></p></li>

</ol>

<li>The ''surcatenation'' <math>\operatorname{Surc}_{j=1}^k</math> of the sequence of <math>k\!</math> strings <math>(s_j)_{j=1}^k</math> is defined recursively as follows:</li>

<ol style="list-style-type:lower-alpha">
  
<li><math>\operatorname{Surc}_{j=1}^1 s_j \ = \ ^{\backprime\backprime} \, \operatorname{(} \, ^{\prime\prime} \, \cdot \, s_1 \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime}.</math></li>

<li>
<p>For <math>\ell > 1,\!</math></p>

<p><math>\operatorname{Surc}_{j=1}^\ell s_j \ = \ \operatorname{Surc}_{j=1}^{\ell - 1} s_j \, \cdot \, ( \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime} \, )^{-1} \, \cdot \, ^{\backprime\backprime} \, \operatorname{,} \, ^{\prime\prime} \, \cdot \, s_\ell \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime}.</math></p></li>

</ol></ol>
 
  
The definitions of these syntactic operations can now be organized in a slightly better fashion by making a few additional conventions and auxiliary definitions.

<ol style="list-style-type:decimal">

<li>
<p>The conception of the <math>k\!</math>-place concatenation operation can be extended to include its natural ''prequel'':</p>

<p><math>\operatorname{Conc}^0 \ = \ ^{\backprime\backprime\prime\prime}</math> &nbsp;=&nbsp; the empty string.</p>

<p>Next, the construction of the <math>k\!</math>-place concatenation can be broken into stages by means of the following conceptions:</p></li>
  
<ol style="list-style-type:lower-alpha">

<li>
<p>The ''precatenation'' <math>\operatorname{Prec} (s_1, s_2)</math> of the two strings <math>s_1, s_2\!</math> is the string that is defined as follows:</p>

<p><math>\operatorname{Prec} (s_1, s_2) \ = \ s_1 \cdot s_2.</math></p></li>

<li>
<p>The ''concatenation'' of the sequence of <math>k\!</math> strings <math>s_1, \ldots, s_k\!</math> can now be defined as an iterated precatenation over the sequence of <math>k+1\!</math> strings that begins with the string <math>s_0 = \operatorname{Conc}^0 \, = \, ^{\backprime\backprime\prime\prime}</math> and then continues on through the other <math>k\!</math> strings:</p></li>
  
<ol style="list-style-type:lower-roman">

<li>
<p><math>\operatorname{Conc}_{j=0}^0 s_j \ = \ \operatorname{Conc}^0 \ = \ ^{\backprime\backprime\prime\prime}.</math></p></li>

<li>
<p>For <math>\ell > 0,\!</math></p>

<p><math>\operatorname{Conc}_{j=1}^\ell s_j \ = \ \operatorname{Prec}(\operatorname{Conc}_{j=0}^{\ell - 1} s_j, s_\ell).</math></p></li>
</ol></ol>

<li>
<p>The conception of the <math>k\!</math>-place surcatenation operation can be extended to include its natural ''prequel'':</p>

<p><math>\operatorname{Surc}^0 \ = \ ^{\backprime\backprime} \, \operatorname{()} \, ^{\prime\prime}.</math></p>

<p>Finally, the construction of the <math>k\!</math>-place surcatenation can be broken into stages by means of the following conceptions:</p></li>
<ol style="list-style-type:lower-alpha">

<li>
<p>A ''subclause'' in <math>\mathfrak{A}^*</math> is a string that ends with a <math>^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime}.</math></p></li>
<li>
<p>The ''subcatenation'' <math>\operatorname{Subc} (s_1, s_2)</math> of a subclause <math>s_1\!</math> by a string <math>s_2\!</math> is the string that is defined as follows:</p>

<p><math>\operatorname{Subc} (s_1, s_2) \ = \ s_1 \, \cdot \, ( \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime} \, )^{-1} \, \cdot \, ^{\backprime\backprime} \, \operatorname{,} \, ^{\prime\prime} \, \cdot \, s_2 \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime}.</math></p></li>

<li>
<p>The ''surcatenation'' of the <math>k\!</math> strings <math>s_1, \ldots, s_k\!</math> can now be defined as an iterated subcatenation over the sequence of <math>k+1\!</math> strings that starts with the string <math>s_0 \ = \ \operatorname{Surc}^0 \ = \ ^{\backprime\backprime} \, \operatorname{()} \, ^{\prime\prime}</math> and then continues on through the other <math>k\!</math> strings:</p></li>

<ol style="list-style-type:lower-roman">

<li>
<p><math>\operatorname{Surc}_{j=0}^0 s_j \ = \ \operatorname{Surc}^0 \ = \ ^{\backprime\backprime} \, \operatorname{()} \, ^{\prime\prime}.</math></p></li>

<li>
<p>For <math>\ell > 0,\!</math></p>

<p><math>\operatorname{Surc}_{j=1}^\ell s_j \ = \ \operatorname{Subc}(\operatorname{Surc}_{j=0}^{\ell - 1} s_j, s_\ell).</math></p></li>

</ol></ol></ol>
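The staged construction above can be sketched in a few lines.  This is an illustration of mine: Prec is pairwise concatenation, and Subc extends a subclause by deleting its terminal right parenthesis (the <math>t^{-1}</math> step) before appending a comma, the new string, and a fresh right parenthesis; plain "(" and ")" stand in for the text's "-(" and ")-":

```python
# Prec(s1, s2) is ordinary pairwise concatenation; Subc(s1, s2) opens
# a subclause (a string ending in ")"), inserts a comma and the new
# string, and closes it again.
def prec(s1: str, s2: str) -> str:
    return s1 + s2

def subc(s1: str, s2: str) -> str:
    assert s1.endswith(")")            # s1 must be a subclause
    return s1[:-1] + "," + s2 + ")"    # delete the ")" , then extend

def conc_k(strings):
    result = ""                        # Conc^0 = the empty string
    for s in strings:
        result = prec(result, s)
    return result

assert conc_k([]) == ""                # Conc^0 ignores its arguments
assert conc_k(["a", "b", "c"]) == "abc"
assert subc("(a)", "b") == "(a,b)"     # one subcatenation step
assert subc("(a,b)", "c") == "(a,b,c)"
```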
  
Notice that the expressions <math>\operatorname{Conc}_{j=0}^0 s_j</math> and <math>\operatorname{Surc}_{j=0}^0 s_j</math> are defined in such a way that the respective operators <math>\operatorname{Conc}^0</math> and <math>\operatorname{Surc}^0</math> simply ignore, in the manner of constants, whatever sequences of strings <math>s_j\!</math> may be listed as their ostensible arguments.

Having defined the basic operations of concatenation and surcatenation on arbitrary strings, in effect, giving them operational meaning for the all-inclusive language <math>\mathfrak{L} = \mathfrak{A}^*,</math> it is time to adjoin the notion of a more discriminating grammaticality, in other words, a more properly restrictive concept of a sentence.

If <math>\mathfrak{L}</math> is an arbitrary formal language over an alphabet of the sort that we are talking about, that is, an alphabet of the form <math>\mathfrak{A} = \mathfrak{M} \cup \mathfrak{P},</math> then there are a number of basic structural relations that can be defined on the strings of <math>\mathfrak{L}.</math>
  
4. z is a "subclause" of !L! if and only if
+
{| align="center" cellpadding="4" width="90%"
| 1. || <math>s\!</math> is the ''concatenation'' of <math>s_1\!</math> and <math>s_2\!</math> in <math>\mathfrak{L}</math> if and only if
|-
| &nbsp; || <math>s_1\!</math> is a sentence of <math>\mathfrak{L},</math> <math>s_2\!</math> is a sentence of <math>\mathfrak{L},</math> and
|-
| &nbsp; || <math>s = s_1 \cdot s_2.</math>
|-
| 2. || <math>s\!</math> is the ''concatenation'' of the <math>k\!</math> strings <math>s_1, \ldots, s_k\!</math> in <math>\mathfrak{L},</math>
|-
| &nbsp; || if and only if <math>s_j\!</math> is a sentence of <math>\mathfrak{L},</math> for all <math>j = 1 \ldots k,</math> and
|-
| &nbsp; || <math>s = \operatorname{Conc}_{j=1}^k s_j = s_1 \cdot \ldots \cdot s_k.</math>
|-
| 3. || <math>s\!</math> is the ''discatenation'' of <math>s_1\!</math> by <math>t\!</math> if and only if
|-
| &nbsp; || <math>s_1\!</math> is a sentence of <math>\mathfrak{L},</math> <math>t\!</math> is an element of <math>\mathfrak{A},</math> and
|-
| &nbsp; || <math>s_1 = s \cdot t.</math>
|-
| &nbsp; || When this is the case, one more commonly writes:
|-
| &nbsp; || <math>s = s_1 \cdot t^{-1}.</math>
|-
| 4. || <math>s\!</math> is a ''subclause'' of <math>\mathfrak{L}</math> if and only if
|-
| &nbsp; || <math>s\!</math> is a sentence of <math>\mathfrak{L}</math> and <math>s\!</math> ends with a <math>^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime}.</math>
|-
| 5. || <math>s\!</math> is the ''subcatenation'' of <math>s_1\!</math> by <math>s_2\!</math> if and only if
|-
| &nbsp; || <math>s_1\!</math> is a subclause of <math>\mathfrak{L},</math> <math>s_2\!</math> is a sentence of <math>\mathfrak{L},</math> and
|-
| &nbsp; || <math>s = s_1 \, \cdot \, ( \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime} \, )^{-1} \, \cdot \, ^{\backprime\backprime} \, \operatorname{,} \, ^{\prime\prime} \, \cdot \, s_2 \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime}.</math>
|-
| 6. || <math>s\!</math> is the ''surcatenation'' of the <math>k\!</math> strings <math>s_1, \ldots, s_k\!</math> in <math>\mathfrak{L},</math>
|-
| &nbsp; || if and only if <math>s_j\!</math> is a sentence of <math>\mathfrak{L},</math> for all <math>{j = 1 \ldots k},\!</math> and
|-
| &nbsp; || <math>s \ = \ \operatorname{Surc}_{j=1}^k s_j \ = \ ^{\backprime\backprime} \, \operatorname{(} \, ^{\prime\prime} \, \cdot \, s_1 \, \cdot \, ^{\backprime\backprime} \, \operatorname{,} \, ^{\prime\prime} \, \cdot \, \ldots \, \cdot \, ^{\backprime\backprime} \, \operatorname{,} \, ^{\prime\prime} \, \cdot \, s_k \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime}.</math>
|}
  
The converses of these decomposition relations are tantamount to the corresponding forms of composition operations, making it possible for these complementary forms of analysis and synthesis to articulate the structures of strings and sentences in two directions.
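By way of illustration, the operations behind these relations can be sketched in a few lines of Python.  This is a sketch of my own, not part of the text: it writes the three punctuation marks as the plain characters <math>^{\backprime\backprime} \operatorname{(} ^{\prime\prime},</math> <math>^{\backprime\backprime} \operatorname{,} ^{\prime\prime},</math> <math>^{\backprime\backprime} \operatorname{)} ^{\prime\prime}</math> rather than the ornate marks used in the displays.

```python
# Sketch (assumption: plain "(", ",", ")" stand in for the punctuation marks).

def conc(*strings):
    """Concatenation:  Conc_{j=1}^k s_j  =  s_1 . ... . s_k."""
    return "".join(strings)

def discatenate(s1, t):
    """Discatenation:  return s such that s_1 = s . t, that is, s = s_1 . t^(-1)."""
    if not s1.endswith(t):
        raise ValueError("discatenation undefined: string does not end with t")
    return s1[:-len(t)]

def is_subclause(s):
    """A subclause is a (sentence) string that ends with ')'."""
    return s.endswith(")")

def subc(s1, s2):
    """Subcatenation:  s_1 . ')'^(-1) . ',' . s_2 . ')'."""
    return discatenate(s1, ")") + "," + s2 + ")"

def surc(*strings):
    """Surcatenation:  '(' . s_1 . ',' . ... . ',' . s_k . ')'."""
    return "(" + ",".join(strings) + ")"

def surc_via_subc(first, *rest):
    """Surcatenation realized by iterated subcatenation, one string at a time."""
    result = "(" + first + ")"        # surcatenation of the single string `first`
    for s in rest:
        result = subc(result, s)      # splice each further string in before ')'
    return result
```

For example, <code>surc("p", "q", "r")</code> and <code>surc_via_subc("p", "q", "r")</code> both yield <code>"(p,q,r)"</code>, and <code>surc()</code> with no arguments yields <code>"()"</code>, matching the behavior of <math>\operatorname{Surc}^0</math> as a constant.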
  
The ''painted cactus language'' with paints in the set <math>\mathfrak{P} = \{ p_j : j \in J \}</math> is the formal language <math>\mathfrak{L} = \mathfrak{C} (\mathfrak{P}) \subseteq \mathfrak{A}^* = (\mathfrak{M} \cup \mathfrak{P})^*</math> that is defined as follows:
  
{| align="center" cellpadding="4" width="90%"
|-
| PC 1. || The blank symbol <math>m_1\!</math> is a sentence.
|-
| PC 2. || The paint <math>p_j\!</math> is a sentence, for each <math>j\!</math> in <math>J.\!</math>
|-
| PC 3. || <math>\operatorname{Conc}^0</math> and <math>\operatorname{Surc}^0</math> are sentences.
|-
| PC 4. || For each positive integer <math>k,\!</math>
|-
| &nbsp; || if <math>s_1, \ldots, s_k\!</math> are sentences,
|-
| &nbsp; || then <math>\operatorname{Conc}_{j=1}^k s_j</math> is a sentence,
|-
| &nbsp; || and <math>\operatorname{Surc}_{j=1}^k s_j</math> is a sentence.
|}
  
As usual, saying that <math>s\!</math> is a sentence is just a conventional way of stating that the string <math>s\!</math> belongs to the relevant formal language <math>\mathfrak{L}.</math>  An individual sentence of <math>\mathfrak{C} (\mathfrak{P}),\!</math> for any palette <math>\mathfrak{P},</math> is referred to as a ''painted and rooted cactus expression'' (PARCE) on the palette <math>\mathfrak{P},</math> or a ''cactus expression'', for short. Anticipating the forms that the parse graphs of these PARCE's will take, to be described in the next Subsection, the language <math>\mathfrak{L} = \mathfrak{C} (\mathfrak{P})</math> is also described as the set <math>\operatorname{PARCE} (\mathfrak{P})</math> of PARCE's on the palette <math>\mathfrak{P},</math> more generically, as the PARCE's that constitute the language <math>\operatorname{PARCE}.</math>
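As a concrete check on the membership conditions PC&nbsp;1 through PC&nbsp;4, a recognizer for cactus expressions can be sketched as follows.  The sketch is mine, not the text's: it assumes plain <code>"("</code>, <code>","</code>, <code>")"</code> for the punctuation marks and takes the paints to be single characters drawn from a given palette string.

```python
# Rough recognizer for PARCE(P), assuming plain "(", ",", ")" punctuation
# and single-character paints, e.g. the palette {p, q, r}.

def is_sentence(s, paints="pqr"):
    """Return True if s is a painted and rooted cactus expression."""
    try:
        return _sentence(s, 0, paints) == len(s)
    except SyntaxError:
        return False

def _sentence(s, i, paints):
    # PC 1-4: a sentence is a concatenation (possibly empty, by PC 3)
    # of blanks, paints, and surcatenations.
    while i < len(s):
        if s[i] == " " or s[i] in paints:    # PC 1 and PC 2
            i += 1
        elif s[i] == "(":                    # PC 4, a surcatenation
            i = _surc(s, i, paints)
        else:
            break                            # "," or ")" ends this sentence
    return i

def _surc(s, i, paints):
    # Matches the pattern "(" . S . ("," . S)* . ")".
    i = _sentence(s, i + 1, paints)
    while i < len(s) and s[i] == ",":
        i = _sentence(s, i + 1, paints)
    if i == len(s) or s[i] != ")":
        raise SyntaxError("unclosed surcatenation")
    return i + 1
```

On this reading, <code>""</code>, <code>"()"</code>, and <code>"(p,(q,r))"</code> are accepted, while unbalanced strings like <code>"(p"</code> are rejected.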
  
A ''bare'' PARCE, a bit loosely referred to as a ''bare cactus expression'', is a PARCE on the empty palette <math>\mathfrak{P} = \varnothing.</math> A bare PARCE is a sentence in the ''bare cactus language'', <math>\mathfrak{C}^0 = \mathfrak{C} (\varnothing) = \operatorname{PARCE}^0 = \operatorname{PARCE} (\varnothing).</math>  This set of strings, regarded as a formal language in its own right, is a sublanguage of every cactus language <math>\mathfrak{C} (\mathfrak{P}).</math>  A bare cactus expression is commonly encountered in practice when one has occasion to start with an arbitrary PARCE and then finds a reason to delete or to erase all of its paints.
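At short lengths the bare cactus language is small enough to enumerate outright.  The brute-force sketch below (again an assumption of mine, with plain parentheses in place of the ornate marks) closes the seed set consisting of the empty string and the blank under concatenation, singleton surcatenation, and subcatenation, up to a length bound; the result should agree with <math>\mathfrak{C}^0</math> restricted to that bound.

```python
# Enumerate the bare cactus language C^0 up to a length bound by closure.
# Assumption of the sketch: punctuation written as plain "(", ",", ")".

def bare_cactus(max_len):
    seen = {"", " "}                  # Conc^0 and the blank sentence
    while True:
        new = set()
        for a in seen:
            if len(a) + 2 <= max_len:
                new.add("(" + a + ")")                   # surcatenation of one
            for b in seen:
                if len(a) + len(b) <= max_len:
                    new.add(a + b)                       # concatenation
                if a.endswith(")") and len(a) + len(b) + 1 <= max_len:
                    new.add(a[:-1] + "," + b + ")")      # subcatenation
        if new <= seen:               # fixed point reached
            return seen
        seen |= new
```

For instance, <code>bare_cactus(4)</code> contains <code>"()"</code>, <code>"(,)"</code>, <code>"(())"</code>, and <code>"(,,)"</code>, but no unbalanced string.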
  
Only one thing remains to cast this description of the cactus language into a form that is commonly found acceptable.  As presently formulated, the principle PC&nbsp;4 appears to be attempting to define an infinite number of new concepts all in a single step, at least, it appears to invoke the indefinitely long sequences of operators, <math>\operatorname{Conc}^k</math> and <math>\operatorname{Surc}^k,</math> for all <math>k > 0.\!</math>  As a general rule, one prefers to have an effectively finite description of conceptual objects, and this means restricting the description to a finite number of schematic principles, each of which involves a finite number of schematic effects, that is, a finite number of schemata that explicitly relate conditions to results.
  
A start in this direction, taking steps toward an effective description of the cactus language, a finitary conception of its membership conditions, and a bounded characterization of a typical sentence in the language, can be made by recasting the present description of these expressions into the pattern of what is called, more or less roughly, a ''formal grammar''.
  
A notation in the style of <math>S :> T\!</math> is now introduced, to be read among many others in this manifold of ways:
 
  
{| align="center" cellpadding="4" width="90%"
|-
| <math>S\ \operatorname{covers}\ T</math>
|-
| <math>S\ \operatorname{governs}\ T</math>
|-
| <math>S\ \operatorname{rules}\ T</math>
|-
| <math>S\ \operatorname{subsumes}\ T</math>
|-
| <math>S\ \operatorname{types~over}\ T</math>
|}
  
The form <math>S :> T\!</math> is here recruited for polymorphic employment in at least the following types of roles:
  
# To signify that an individually named or quoted string <math>T\!</math> is being typed as a sentence <math>S\!</math> of the language of interest <math>\mathfrak{L}.</math>
# To express the fact or to make the assertion that each member of a specified set of strings <math>T \subseteq \mathfrak{A}^*</math> also belongs to the syntactic category <math>S,\!</math> the one that qualifies a string as being a sentence in the relevant formal language <math>\mathfrak{L}.</math>
# To specify the intension or to signify the intention that every string that fits the conditions of the abstract type <math>T\!</math> must also fall under the grammatical heading of a sentence, as indicated by the type <math>S,\!</math> all within the target language <math>\mathfrak{L}.</math>
  
In these types of situation the letter <math>^{\backprime\backprime} S \, ^{\prime\prime}</math> that signifies the type of a sentence in the language of interest, is called the ''initial symbol'' or the ''sentence symbol'' of a candidate formal grammar for the language, while any number of letters like <math>^{\backprime\backprime} T \, ^{\prime\prime}</math> signifying other types of strings that are necessary to a reasonable account or a rational reconstruction of the sentences that belong to the language, are collectively referred to as ''intermediate symbols''.
  
PC 4For each positive integer k,
+
Combining the singleton set <math>\{ ^{\backprime\backprime} S \, ^{\prime\prime} \}</math> whose sole member is the initial symbol with the set <math>\mathfrak{Q}</math> that assembles together all of the intermediate symbols results in the set <math>\{ ^{\backprime\backprime} S \, ^{\prime\prime} \} \cup \mathfrak{Q}</math> of ''non-terminal symbols''.  Completing the package, the alphabet <math>\mathfrak{A}</math> of the language is also known as the set of ''terminal symbols''.  In this discussion, I will adopt the convention that <math>\mathfrak{Q}</math> is the set of ''intermediate symbols'', but I will often use <math>q\!</math> as a typical variable that ranges over all of the non-terminal symbols, <math>q \in \{ ^{\backprime\backprime} S \, ^{\prime\prime} \} \cup \mathfrak{Q}.</math> Finally, it is convenient to refer to all of the symbols in <math>\{ ^{\backprime\backprime} S \, ^{\prime\prime} \} \cup \mathfrak{Q} \cup \mathfrak{A}</math> as the ''augmented alphabet'' of the prospective grammar for the language, and accordingly to describe the strings in <math>( \{ ^{\backprime\backprime} S \, ^{\prime\prime} \} \cup \mathfrak{Q} \cup \mathfrak{A} )^*</math> as the ''augmented strings'', in effect, expressing the forms that are superimposed on a language by one of its conceivable grammars.  In certain settings it becomes desirable to separate the augmented strings that contain the symbol <math>^{\backprime\backprime} S \, ^{\prime\prime}</math> from all other sorts of augmented strings.  In these situations the strings in the disjoint union <math>\{ ^{\backprime\backprime} S \, ^{\prime\prime} \} \cup (\mathfrak{Q} \cup \mathfrak{A} )^*</math> are known as the ''sentential forms'' of the associated grammar.
  
In forming a grammar for a language, statements of the form <math>W :> W',\!</math> where <math>W\!</math> and <math>W'\!</math> are augmented strings or sentential forms of specified types that depend on the style of the grammar that is being sought, are variously known as ''characterizations'', ''covering rules'', ''productions'', ''rewrite rules'', ''subsumptions'', ''transformations'', or ''typing rules''.  These are collected together into a set <math>\mathfrak{K}</math> that serves to complete the definition of the formal grammar in question.
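The package just described, an initial symbol, a set <math>\mathfrak{Q}</math> of intermediate symbols, a terminal alphabet, and a set <math>\mathfrak{K}</math> of covering rules, is easy to record as a data structure.  The toy grammar below is hypothetical, chosen only to show productions <math>W :> W'\!</math> being applied to sentential forms; it is not the cactus grammar.

```python
# A formal grammar as a 4-tuple, with productions W :> W' stored as pairs.
from typing import NamedTuple

class Grammar(NamedTuple):
    start: str                 # the initial symbol "S"
    intermediates: frozenset   # Q, the other non-terminal symbols
    terminals: frozenset       # A, the terminal alphabet
    rules: frozenset           # K, pairs (W, W') read as W :> W'

# Hypothetical toy grammar over {a, b}:  S :> "" and S :> aSb.
toy = Grammar("S", frozenset(), frozenset("ab"),
              frozenset({("S", ""), ("S", "aSb")}))

def derived_sentences(g, depth):
    """Expand sentential forms for `depth` rounds; keep the terminal strings."""
    forms = {g.start}
    for _ in range(depth):
        step = set()
        for w in forms:
            rewritten = False
            for lhs, rhs in g.rules:
                if lhs in w:
                    rewritten = True
                    step.add(w.replace(lhs, rhs, 1))
            if not rewritten:
                step.add(w)    # already a terminal string, carry it forward
        forms = step
    return {w for w in forms if all(c in g.terminals for c in w)}
```

Three rounds of rewriting the toy grammar yield the terminal strings <math>\varnothing,</math> <math>ab,\!</math> and <math>aabb,\!</math> the beginning of the familiar language <math>\{ a^n b^n \}.\!</math>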
  
Correlative with the use of this notation, an expression of the form <math>T <: S,\!</math> read to say that <math>T\!</math> is covered by <math>S,\!</math> can be interpreted to say that <math>T\!</math> is of the type <math>S.\!</math> Depending on the context, this can be taken in either one of two ways:
  
# Treating <math>T\!</math> as a string variable, it means that the individual string <math>T\!</math> is typed as <math>S.\!</math>
# Treating <math>T\!</math> as a type name, it means that any instance of the type <math>T\!</math> also falls under the type <math>S.\!</math>
  
In accordance with these interpretations, an expression of the form <math>t <: T\!</math> can be read in all of the ways that one typically reads an expression of the form <math>t : T.\!</math>
 
  
There are several abuses of notation that are commonly tolerated in the use of covering relations.  The worst offense is that of allowing symbols to stand equivocally either for individual strings or else for their types.  There is a measure of consistency to this practice, considering the fact that perfectly individual entities are rarely if ever grasped by means of signs and finite expressions, which entails that every appearance of an apparent token is only a type of more particular tokens, and meaning in the end that there is never any recourse but to the sort of discerning interpretation that can decide just how each sign is intended.  In view of all this, I continue to permit expressions like <math>t <: T\!</math> and <math>T <: S,\!</math> where any of the symbols <math>t, T, S\!</math> can be taken to signify either the tokens or the subtypes of their covering types.
'''Note.'''  For some time to come in the discussion that follows, although I will continue to focus on the cactus language as my principal object example, my more general purpose will be to develop the subject matter of formal languages and grammars.  I will do this by taking up a particular method of ''stepwise refinement'' and using it to extract a rigorous formal grammar for the cactus language, starting with little more than a rough description of the target language and applying a systematic analysis to develop a sequence of increasingly more effective and more exact approximations to the desired grammar.
Employing the notion of a covering relation it becomes possible to redescribe the cactus language <math>\mathfrak{L} = \mathfrak{C} (\mathfrak{P})</math> in the following ways.
====Grammar 1====
Grammar&nbsp;1 is something of a misnomer. It is nowhere near exemplifying any kind of a standard form and it is only intended as a starting point for the initiation of more respectable grammars. Such as it is, it uses the terminal alphabet <math>\mathfrak{A} = \mathfrak{M} \cup \mathfrak{P}</math> that comes with the territory of the cactus language <math>\mathfrak{C} (\mathfrak{P}),\!</math> it specifies <math>\mathfrak{Q} = \varnothing,</math> in other words, it employs no intermediate symbols, and it embodies the ''covering set'' <math>\mathfrak{K}</math> as listed in the following display.
<br>
{| align="center" cellpadding="12" cellspacing="0" style="border-top:1px solid black" width="90%"
| align="left" style="border-left:1px solid black;" width="50%" |
<math>\mathfrak{C} (\mathfrak{P}) : \text{Grammar 1}\!</math>
| align="right" style="border-right:1px solid black;" width="50%" |
<math>\mathfrak{Q} = \varnothing</math>
|-
| colspan="2" style="border-top:1px solid black; border-bottom:1px solid black; border-left:1px solid black; border-right:1px solid black" |
<math>\begin{array}{rcll}
1.
& S
& :>
& m_1 \ = \ ^{\backprime\backprime} \operatorname{~} ^{\prime\prime}
\\
2.
& S
& :>
& p_j, \, \text{for each} \, j \in J
\\
3.
& S
& :>
& \operatorname{Conc}^0 \ = \ ^{\backprime\backprime\prime\prime}
\\
4.
& S
& :>
& \operatorname{Surc}^0 \ = \ ^{\backprime\backprime} \, \operatorname{()} \, ^{\prime\prime}
\\
5.
& S
& :>
& S^*
\\
6.
& S
& :>
& ^{\backprime\backprime} \, \operatorname{(} \, ^{\prime\prime} \, \cdot \, S \, \cdot \, ( \, ^{\backprime\backprime} \operatorname{,} ^{\prime\prime} \, \cdot \, S \, )^* \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime}
\\
\end{array}</math>
|}
  
<br>
where W and W' are augmented strings or sentential forms of specified
 
types that depend on the style of the grammar that is being sought, are
 
variously known as "characterizations", "covering rules", "productions",
 
"rewrite rules", "subsumptions", "transformations", or "typing rules".
 
These are collected together into a set !K! that serves to complete
 
the definition of the formal grammar in question.
 
 
 
Correlative with the use of this notation, an expression of the
 
form "T <: S", read as "T is covered by S", can be interpreted
 
as saying that T is of the type S.  Depending on the context,
 
this can be taken in either one of two ways:
 
 
 
1.  Treating "T" as a string variable, it means
 
    that the individual string T is typed as S.
 
 
 
2.  Treating "T" as a type name, it means that any
 
    instance of the type T also falls under the type S.
 
 
 
In accordance with these interpretations, an expression like "t <: T" can be
 
read in all of the ways that one typically reads an expression like "t : T".
 
 
 
There are several abuses of notation that commonly tolerated in the use
 
of covering relations.  The worst offense is that of allowing symbols to
 
stand equivocally either for individual strings or else for their types.
 
There is a measure of consistency to this practice, considering the fact
 
that perfectly individual entities are rarely if ever grasped by means of
 
signs and finite expressions, which entails that every appearance of an
 
apparent token is only a type of more particular tokens, and meaning in
 
the end that there is never any recourse but to the sort of discerning
 
interpretation that can decide just how each sign is intended.  In view
 
of all this, I continue to permit expressions like "t <: T" and "T <: S",
 
where any of the symbols "t", "T", "S" can be taken to signify either the
 
tokens or the subtypes of their covering types.
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
The combined effect of several typos in my typography
 
along with what may be a lack of faith in imagination,
 
obliges me to redo a couple of paragraphs from before.
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
A notation in the style of "S :> T" is now introduced,
 
to be read among many others in this manifold of ways:
 
 
 
|  S covers T
 
|
 
|  S governs T
 
|
 
|  S rules T
 
|
 
|  S subsumes T
 
|
 
|  S types over T
 
 
 
The form "S :> T" is here recruited for polymorphic
 
employment in at least the following types of roles:
 
 
 
1.  To signify that an individually named or quoted string T is
 
    being typed as a sentence S of the language of interest !L!.
 
 
 
2.  To express the fact or to make the assertion that each member
 
    of a specified set of strings T c !A!* also belongs to the
 
    syntactic category S, the one that qualifies a string as
 
    being a sentence in the relevant formal language !L!.
 
 
 
3.  To specify the intension or to signify the intention that every
 
    string that fits the conditions of the abstract type T must also
 
    fall under the grammatical heading of a sentence, as indicated by
 
    the type name "S", all within the target language !L!.
 
 
 
In these types of situation the letter "S", that signifies the type of
 
a sentence in the language of interest, is called the "initial symbol"
 
or the "sentence symbol" of a candidate formal grammar for the language,
 
while any number of letters like "T", signifying other types of strings
 
that are necessary to a reasonable account or a rational reconstruction
 
of the sentences that belong to the language, are collectively referred
 
to as "intermediate symbols".
 
 
 
Combining the singleton set {"S"} whose sole member is the initial symbol
 
with the set !Q! that assembles together all of the intermediate symbols
 
results in the set {"S"} |_| !Q! of "non-terminal symbols".  Completing
 
the package, the alphabet !A! of the language is also known as the set
 
of "terminal symbols".  In this discussion, I will adopt the convention
 
that !Q! is the set of intermediate symbols, but I will often use "q"
 
as a typical variable that ranges over all of the non-terminal symbols,
 
q in {"S"} |_| !Q!.  Finally, it is convenient to refer to all of the
 
symbols in {"S"} |_| !Q! |_| !A! as the "augmented alphabet" of the
 
prospective grammar for the language, and accordingly to describe
 
the strings in ({"S"} |_| !Q! |_| !A!)* as the "augmented strings",
 
in effect, expressing the forms that are superimposed on a language
 
by one of its conceivable grammars.  In certain settings is becomes
 
desirable to separate the augmented strings that contain the symbol
 
"S" from all other sorts of augmented strings.  In these situations,
 
the strings in the disjoint union {"S"} |_| (!Q! |_| !A!)* are known
 
as the "sentential forms" of the associated grammar.
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
For some time to come in the discussion that follows,
 
although I will continue to focus on the cactus language
 
as my principal object example, my more general purpose will
 
be to develop and to demonstrate the subject materials and the
 
technical methodology of the theory of formal languages and grammars.
 
I will do this by taking up a particular method of "stepwise refinement"
 
and using it to extract a rigorous formal grammar for the cactus language,
 
starting with little more than a rough description of the target language
 
and applying a systematic analysis to develop a sequence of increasingly
 
more effective and more exact approximations to the desired grammar.
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
Employing the notion of a covering relation it becomes possible to
 
redescribe the cactus language !L! = !C!(!P!) in the following way.
 
 
 
Grammar 1 is something of a misnomer.  It is nowhere near exemplifying
 
any kind of a standard form and it is only intended as a starting point
 
for the initiation of more respectable grammars.  Such as it is, it uses
 
the terminal alphabet !A! = !M! |_| !P! that comes with the territory of
 
the cactus language !C!(!P!), it specifies !Q! = {}, in other words, it
 
employs no intermediate symbols, and it embodies the "covering set" !K!
 
as listed in the following display.
 
 
 
| !C!(!P!).  Grammar 1
 
|
 
| !Q! = {}
 
|
 
| 1.  S  :>  m_1  =  " "
 
|
 
| 2.  S  :>  p_j, for each j in J
 
|
 
| 3.  S  :>  Conc^0  =  ""
 
|
 
| 4.  S  :>  Surc^0  =  "-()-"
 
|
 
| 5.  S  :>  S*
 
|
 
| 6.  S  :> "-(" · S · ("," · S)* · ")-"
 
  
 
In this formulation, the last two lines specify that:
 
In this formulation, the last two lines specify that:
  
5.  The concept of a sentence in !L! covers any
+
<ol style="list-style-type:decimal">
    concatenation of sentences in !L!, in effect,
 
    any number of freely chosen sentences that are
 
    available to be concatenated one after another.
 
 
 
6.  The concept of a sentence in !L! covers any
 
    surcatenation of sentences in !L!, in effect,
 
    any string that opens with a "-(", continues
 
    with a sentence, possibly empty, follows with
 
    a finite number of phrases of the form "," · S,
 
    and closes with a ")-".
 
 
 
This appears to be just about the most concise description
 
of the cactus language !C!(!P!) that one can imagine, but
 
there exist a couple of problems that are commonly felt
 
to afflict this style of presentation and to make it
 
less than completely acceptable.  Briefly stated,
 
these problems turn on the following properties
 
of the presentation:
 
 
 
1.  The invocation of the kleene star operation
 
    is not reduced to a manifestly finitary form.
 
 
 
2.  The type of a sentence S is allowed to cover
 
    not only itself but also the empty string.
 
 
 
I will discuss these issues at first in general, and especially in regard to
 
how the two features interact with one another, and then I return to address
 
in further detail the questions that they engender on their individual bases.
 
 
 
In the process of developing a grammar for a language, it is possible
 
to notice a number of organizational, pragmatic, and stylistic questions,
 
whose moment to moment answers appear to decide the ongoing direction of the
 
grammar that develops and the impact of whose considerations work in tandem
 
to determine, or at least to influence, the sort of grammar that turns out.
 
The issues that I can see arising at this point I can give the following
 
prospective names, putting off the discussion of their natures and the
 
treatment of their details to the points in the development of the
 
present example where they evolve their full import.
 
 
 
1.  The "degree of intermediate organization" in a grammar.
 
 
 
2.  The "distinction between empty and significant strings", and thus
 
    the "distinction between empty and significant types of strings".
 
 
 
3.  The "principle of intermediate significance".  This is a constraint
 
    on the grammar that arises from considering the interaction of the
 
    first two issues.
 
 
 
In responding to these issues, it is advisable at first to proceed in a stepwise fashion, all the better thereby to accommodate the chances of pursuing a series of parallel developments in the grammar, to allow for the possibility of reversing many steps in its development, indeed, to take into account the near certain necessity of having to revisit, to revise, and to reverse many decisions about how to proceed toward an optimal description or a satisfactory grammar for the language.  Doing all this means exploring the effects of various alterations and innovations as independently from each other as possible.
 
 
 
The degree of intermediate organization in a grammar is measured by how many intermediate symbols it has and by how they interact with each other by means of its productions.  With respect to this issue, Grammar 1 has no intermediate symbols at all, !Q! = {}, and therefore remains at an ostensibly trivial degree of intermediate organization.  Some additions to the list of intermediate symbols are practically obligatory in order to arrive at any reasonable grammar at all, while other inclusions appear to have a more optional character, though obviously useful from the standpoints of clarity and ease of comprehension.
 
 
 
One of the troubles that is perceived to affect Grammar 1 is that it wastes so much of the available potential for efficient description in recounting over and over again the simple fact that the empty string is present in the language.  This arises in part from the statement that S :> S*, which implies that:

S  :>  S*  =  %e% |_| S |_| S · S |_| S · S · S |_| ...
 
  
There is nothing wrong with the more expansive pan of the covered equation, since it follows straightforwardly from the definition of the kleene star operation, but the covering statement, to the effect that S :> S*, is not necessarily a very productive piece of information, to the extent that it does not always tell us very much about the language that is being supposed to fall under the type of a sentence S.  In particular, since it implies that S :> %e%, and since %e%·!L!  =  !L!·%e%  =  !L!, for any formal language !L!, the empty string !e! = "" is counted over and over in every term of the union, and every non-empty sentence under S appears again and again in every term of the union that follows the initial appearance of S.  As a result, this style of characterization has to be classified as "true but not very informative".  If at all possible, one prefers to partition the language of interest into a disjoint union of subsets, thereby accounting for each sentence under its proper term, and one whose place under the sum serves as a useful parameter of its character or its complexity.  In general, this form of description is not always possible to achieve, but it is usually worth the trouble to actualize it whenever it is.
 
  
Suppose that one tries to deal with this problem by eliminating each use of the kleene star operation, by reducing it to a purely finitary set of steps, or by finding an alternative way to cover the sublanguage that it is used to generate.  This amounts, in effect, to "recognizing a type", a complex process that involves the following steps:
 
  
1.  Noticing a category of strings that is generated by iteration or recursion.

2.  Acknowledging the circumstance that the noted category of strings needs to be covered by a non-terminal symbol.

3.  Making a note of it by declaring and instituting an explicitly and even expressively named category.
  
In sum, one introduces a non-terminal symbol for each type of sentence and each "part of speech" or sentential component that is generated by means of iteration or recursion under the ruling constraints of the grammar.  In order to do this one needs to analyze the iteration of each grammatical operation in a way that is analogous to a mathematically inductive definition, but further in a way that is not forced explicitly to recognize a distinct and separate type of expression merely to account for and to recount every increment in the parameter of iteration.
 
  
Returning to the case of the cactus language, the process of recognizing an iterative type or a recursive type can be illustrated in the following way.  The operative phrases in the simplest sort of recursive definition are its initial part and its generic part.  For the cactus language !C!(!P!), one has the following definitions of concatenation as iterated precatenation and of surcatenation as iterated subcatenation, respectively:

1.  Conc^0        =  "".

    Conc^k_j S_j  =  Prec(Conc^(k-1)_j S_j, S_k).

2.  Surc^0        =  "-()-".

    Surc^k_j S_j  =  Subc(Surc^(k-1)_j S_j, S_k).
  
In order to transform these recursive definitions into grammar rules, one introduces a new pair of intermediate symbols, "Conc" and "Surc", corresponding to the operations that share the same names but ignoring the inflexions of their individual parameters j and k.  Recognizing the type of a sentence by means of the initial symbol "S", and interpreting "Conc" and "Surc" as names for the types of strings that are generated by concatenation and by surcatenation, respectively, one arrives at the following transformation of the ruling operator definitions into the form of covering grammar rules:
 
 
 
1.  Conc  :>  "".

    Conc  :>  Conc · S.

2.  Surc  :>  "-()-".

    Surc  :>  "-(" · S · ")-".

    Surc  :>  Surc · (")-")^(-1) · "," · S · ")-".
 
 
 
As given, this particular fragment of the intended grammar contains a couple of features that are desirable to amend.

1.  Given the covering S :> Conc, the covering rule Conc :> Conc · S says no more than the covering rule Conc :> S · S.  Consequently, all of the information contained in these two covering rules is already covered by the statement that S :> S · S.

2.  A grammar rule that invokes a notion of decatenation, deletion, erasure, or any other sort of retrograde production, is frequently considered to be lacking in elegance, and there is a style of critique for grammars that holds it preferable to avoid these types of operations if it is at all possible to do so.  Accordingly, contingent on the prescriptions of the informal rule in question, and pursuing the stylistic dictates that are writ in the realm of its aesthetic regime, it becomes necessary for us to backtrack a little bit, to temporarily withdraw the suggestion of employing these elliptical types of operations, but without, of course, eliding the record of doing so.
 
 
 
One way to analyze the surcatenation of any number of sentences is to introduce an auxiliary type of string, not in general a sentence, but a proper component of any sentence that is formed by surcatenation.  Doing this brings one to the following definition:

A "tract" is a concatenation of a finite sequence of sentences, with a literal comma "," interpolated between each pair of adjacent sentences.  Thus, a typical tract T takes the form:
 
 
 
T  =  S_1 · "," · ... · "," · S_k.
 
 
 
A tract must be distinguished from the abstract sequence of sentences, S_1, ..., S_k, where the commas that appear to come to mind, as if being called up to separate the successive sentences of the sequence, remain as partially abstract conceptions, or as signs that retain a disengaged status on the borderline between the text and the mind.  In effect, the types of commas that appear to follow in the abstract sequence continue to exist as conceptual abstractions and fail to be cognized in a wholly explicit fashion, whether as concrete tokens in the object language, or as marks in the strings of signs that are able to engage one's parsing attention.
 
 
 
Returning to the case of the painted cactus language !L! = !C!(!P!), it is possible to put the currently assembled pieces of a grammar together in the light of the presently adopted canons of style, to arrive at a more refined analysis of the fact that the concept of a sentence covers any concatenation of sentences and any surcatenation of sentences, and so to obtain the following form of a grammar:
 
 
 
| !C!(!P!).  Grammar 2
|
| !Q! = {"T"}
|
| 1.  S  :>  !e!
|
| 2.  S  :>  m_1
|
| 3.  S  :>  p_j, for each j in J
|
| 4.  S  :>  S · S
|
| 5.  S  :>  "-(" · T · ")-"
|
| 6.  T  :>  S
|
| 7.  T  :>  T · "," · S
 
  
In this rendition, a string of type T is not in general a sentence itself but a proper "part of speech", that is, a strictly "lesser" component of a sentence in any suitable ordering of sentences and their components.  In order to see how the grammatical category T gets off the ground, that is, to detect its minimal strings and to discover how its ensuing generations get started from these, it is useful to observe that the covering rule T :> S means that T "inherits" all of the initial conditions of S, namely, T :> !e!, m_1, p_j.  In accord with these simple beginnings it comes to parse that the rule T :> T · "," · S, with the substitutions T = !e! and S = !e! on the covered side of the rule, bears the germinal implication that T :> ",".
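For concreteness, Grammar 2 can be turned into a small recursive descent recognizer.  The following sketch is not from the original text:  it assumes that the blank m_1 is a space and that the paints p_j are the lowercase letters, and it accepts a string just in case the grammar can derive it.

```python
# A minimal recognizer for painted-and-rooted cactus expressions,
# following the shape of Grammar 2.  The paint alphabet is an assumption
# of this sketch, not fixed by the text.
PAINTS = set("abcdefghijklmnopqrstuvwxyz")

def parse_sentence(s, i=0):
    """Parse a maximal sentence S starting at index i and return the
    index just past it.  Rules 1-5: S :> !e! | m_1 | p_j | S · S
    | "-(" · T · ")-"."""
    while i < len(s):
        if s[i] == " " or s[i] in PAINTS:      # rules 2 and 3
            i += 1
        elif s.startswith("-(", i):            # rule 5: a foil
            i = parse_tract(s, i + 2)
            if not s.startswith(")-", i):
                raise ValueError(f"expected ')-' at index {i}")
            i += 2
        else:
            break                              # rule 1: the empty sentence
    return i

def parse_tract(s, i):
    """Rules 6 and 7: T :> S | T · "," · S."""
    i = parse_sentence(s, i)
    while i < len(s) and s[i] == ",":
        i = parse_sentence(s, i + 1)
    return i

def is_sentence(s):
    """A string is a sentence iff parsing consumes all of it."""
    try:
        return parse_sentence(s) == len(s)
    except ValueError:
        return False
```

Note how rule 1 surfaces as the bare return of the current index, so the empty string parses vacuously, exactly the feature of Grammar 2 under discussion.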
 
  
Grammar 2 achieves a portion of its success through a higher degree of intermediate organization.  Roughly speaking, the level of organization can be seen as reflected in the cardinality of the intermediate alphabet !Q! = {"T"}, but it is clearly not explained by this simple circumstance alone, since it is taken for granted that the intermediate symbols serve a purpose, a purpose that is easily recognizable but that may not be so easy to pin down and to specify exactly.  Nevertheless, it is worth the trouble of exploring this aspect of organization and this direction of development a little further.  Although it is not strictly necessary to do so, it is possible to organize the materials of the present grammar in a slightly better fashion by recognizing two recurrent types of strings that appear in the typical cactus expression.  In doing this, one arrives at the following two definitions:
 
  
A "rune" is a string of blanks and paints concatenated together.  Thus, a typical rune R is a string over {m_1} |_| !P!, possibly the empty string:

R  in  ({m_1} |_| !P!)*.

When there is no possibility of confusion, the letter "R" can be used either as a string variable that ranges over the set of runes or else as a type name for the class of runes.  The latter reading amounts to the enlistment of a fresh intermediate symbol, "R" in !Q!, as a part of a new grammar for !C!(!P!).  In effect, "R" affords a grammatical recognition for any rune that forms a part of a sentence in !C!(!P!).  In situations where these variant usages are likely to be confused, the types of strings can be indicated by means of expressions like "r <: R" and "W <: R".
 
  
A "foil" is a string of the form "-(" · T · ")-", where T is a tract.  Thus, a typical foil F has the form:

F  =  "-(" · S_1 · "," · ... · "," · S_k · ")-".

This is just the surcatenation of the sentences S_1, ..., S_k.  Given the possibility that this sequence of sentences is empty, and thus that the tract T is the empty string, the minimum foil F is the expression "-()-".  Explicitly marking each foil F that is embodied in a cactus expression is tantamount to recognizing another intermediate symbol, "F" in !Q!, further articulating the structures of sentences and expanding the grammar for the language !C!(!P!).  All of the same remarks about the versatile uses of the intermediate symbols, as string variables and as type names, apply again to the letter "F".
 
 
 
| !C!(!P!).  Grammar 3
|
| !Q! = {"F", "R", "T"}
|
|  1.  S  :>  R
|
|  2.  S  :>  F
|
|  3.  S  :>  S · S
|
|  4.  R  :>  !e!
|
|  5.  R  :>  m_1
|
|  6.  R  :>  p_j, for each j in J
|
|  7.  R  :>  R · R
|
|  8.  F  :>  "-(" · T · ")-"
|
|  9.  T  :>  S
|
| 10.  T  :>  T · "," · S
 
 
In Grammar 3, the first three Rules say that a sentence (a string of type S) is a rune (a string of type R), a foil (a string of type F), or an arbitrary concatenation of strings of these two types.  Rules 4 through 7 specify that a rune R is an empty string !e! = "", a blank symbol m_1 = " ", a paint p_j, for j in J, or any concatenation of strings of these three types.  Rule 8 characterizes a foil F as a string of the form "-(" · T · ")-", where T is a tract.  The last two Rules say that a tract T is either a sentence S or else the concatenation of a tract, a comma, and a sentence, in that order.
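The same analysis can be put to work in taking a sentence apart.  The sketch below is my own, assuming a well-formed input string; it splits a cactus expression into its maximal top-level runes and foils, mirroring the covering rules S :> R, S :> F, and S :> S · S.

```python
def top_level_parts(s):
    """Split a cactus sentence into its maximal top-level runes and
    foils, tagged "R" and "F".  Assumes the input is well formed;
    raises ValueError on an unclosed foil."""
    parts, i = [], 0
    while i < len(s):
        if s.startswith("-(", i):
            # Scan past the matching ")-", tracking nesting depth.
            depth, j = 1, i + 2
            while depth and j < len(s):
                if s.startswith("-(", j):
                    depth, j = depth + 1, j + 2
                elif s.startswith(")-", j):
                    depth, j = depth - 1, j + 2
                else:
                    j += 1
            if depth:
                raise ValueError("unclosed foil")
            parts.append(("F", s[i:j]))
            i = j
        else:
            # A maximal run of blanks and paints forms a rune.
            j = i
            while j < len(s) and not s.startswith("-(", j):
                j += 1
            parts.append(("R", s[i:j]))
            i = j
    return parts
```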
 
 
 
At this point in the succession of grammars for !C!(!P!), the explicit uses of indefinite iterations, like the kleene star operator, are now completely reduced to finite forms of concatenation, but the problems that some styles of analysis have with allowing non-terminal symbols to cover both themselves and the empty string are still present.
 
 
 
Any degree of reflection on this difficulty raises the general question:  What is a practical strategy for accounting for the empty string in the organization of any formal language that counts it among its sentences?  One answer that presents itself is this:  If the empty string belongs to a formal language, it suffices to count it once at the beginning of the formal account that enumerates its sentences and then to move on to more interesting materials.
 
 
 
Returning to the case of the cactus language !C!(!P!), that is, the formal language of "painted and rooted cactus expressions", it serves the purpose of efficient accounting to partition the language PARCE into the following couple of sublanguages:

1.  The "emptily painted and rooted cactus expressions" make up the language EPARCE that consists of a single empty string as its only sentence.  In short:

    EPARCE  =  {""}.

2.  The "significantly painted and rooted cactus expressions" make up the language SPARCE that consists of everything else, namely, all of the non-empty strings in the language PARCE.  In sum:

    SPARCE  =  PARCE \ {""}.

As a result of marking the distinction between empty and significant sentences, that is, by categorizing each of these three classes of strings as an entity unto itself and by conceptualizing the whole of its membership as falling under a distinctive symbol, one obtains an equation of sets that connects the three languages being marked:

SPARCE  =  PARCE - EPARCE.

In sum, one has the disjoint union:

PARCE  =  EPARCE |_| SPARCE.
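The bookkeeping of this partition can be exhibited in miniature.  The sample below is a hand-picked finite subset of PARCE, an assumption of this sketch (with paints a and b), used only to show that EPARCE and SPARCE are disjoint and jointly exhaustive:

```python
# Illustrative check of the partition PARCE = EPARCE |_| SPARCE on a
# finite sample of cactus expressions (the sample is mine, not an
# enumeration of the language).
PARCE_SAMPLE = {"", "a", "b", "-()-", "-(a)-", "-(a,b)-", "a-()-b"}

EPARCE = {s for s in PARCE_SAMPLE if s == ""}   # the empty expression
SPARCE = PARCE_SAMPLE - EPARCE                  # everything else

# The two parts are disjoint and jointly exhaust the sample.
assert EPARCE == {""}
assert EPARCE & SPARCE == set()
assert EPARCE | SPARCE == PARCE_SAMPLE
```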
 
  
For brevity in the present case, and to serve as a generic device in any similar array of situations, let the symbol "S" be used to signify the type of an arbitrary sentence, possibly empty, whereas the symbol "S'" is reserved to designate the type of a specifically non-empty sentence.  In addition, let the symbol "%e%" be employed to indicate the type of the empty sentence, in effect, the language %e% = {""} that contains a single empty string, and let a plus sign "+" signify a disjoint union of types.  In the most general type of situation, where the type S is permitted to include the empty string, one notes the following relation among types:

S  =  %e%  +  S'.
 
 
 
 
 
With the distinction between empty and significant expressions in mind, I return to the grasp of the cactus language !L! = !C!(!P!) = PARCE(!P!) that is afforded by Grammar 2, and, taking that as a point of departure, explore other avenues of possible improvement in the comprehension of these expressions.  In order to observe the effects of this alteration as clearly as possible, in isolation from any other potential factors, it is useful to strip away the higher levels of intermediate organization that are present in Grammar 3, and start again with a single intermediate symbol, as used in Grammar 2.  One way of carrying out this strategy leads on to a grammar of the variety that will be articulated next.
 
 
 
If one imposes the distinction between empty and significant types on each non-terminal symbol in Grammar 2, then the non-terminal symbols "S" and "T" give rise to the non-terminal symbols "S", "S'", "T", "T'", leaving the last three of these to form the new intermediate alphabet.  Grammar 4 has the intermediate alphabet !Q! = {"S'", "T", "T'"}, with the set !K! of covering production rules as listed in the next display.
 
 
 
| !C!(!P!).  Grammar 4
|
| !Q! = {"S'", "T", "T'"}
|
| 1.  S   :>  !e!
|
| 2.  S   :>  S'
|
| 3.  S'  :>  m_1
|
| 4.  S'  :>  p_j, for each j in J
|
| 5.  S'  :>  "-(" · T · ")-"
|
| 6.  S'  :>  S' · S'
|
| 7.  T   :>  !e!
|
| 8.  T   :>  T'
|
| 9.  T'  :>  T · "," · S

In this version of a grammar for !L! = !C!(!P!), the intermediate type T is partitioned as T = %e% + T', thereby parsing the intermediate symbol T in parallel fashion with the division of its overlying type as S = %e% + S'.  This is an option that I will choose to close off for now, but leave it open to consider at a later point.  Thus, it suffices to give a brief discussion of what it involves, in the process of moving on to its chief alternative.
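To see Grammar 4 in action, one can treat its productions as data and sample random derivations from them.  In the sketch below, which is mine and not part of the text, the symbols S' and T' are spelled Sp and Tp, the blank m_1 is a space, and the paints are assumed to be just a and b; a depth bound forces each expansion toward its first, shortest rule so that generation terminates.

```python
import random

# Grammar 4 rendered as data: each non-terminal maps to its list of
# alternative right-hand sides; strings absent from the table are
# terminals.  Paint alphabet {a, b} is an assumption of this sketch.
GRAMMAR4 = {
    "S":  [[], ["Sp"]],                          # rules 1, 2
    "Sp": [[" "], ["a"], ["b"],                  # rules 3, 4
           ["-(", "T", ")-"], ["Sp", "Sp"]],     # rules 5, 6
    "T":  [[], ["Tp"]],                          # rules 7, 8
    "Tp": [["T", ",", "S"]],                     # rule 9
}

def generate(symbol="S", depth=6):
    """Randomly expand `symbol` by the rules of Grammar 4.  Once the
    depth bound is exhausted, only the first rule of each symbol is
    taken, which drives every expansion to termination."""
    rules = GRAMMAR4.get(symbol)
    if rules is None:                 # a terminal string: emit it as is
        return symbol
    rhs = random.choice(rules if depth > 0 else rules[:1])
    return "".join(generate(x, depth - 1) for x in rhs)
```

Expanding the initial symbol with the depth bound already spent yields the empty sentence, the licensed case of rule 1; expanding Tp in the same circumstances yields the bare comma "," noted earlier as the germinal implication of the tract rules.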
 
  
There does not appear to be anything radically wrong with trying this approach to types.  It is reasonable and consistent in its underlying principle, and it provides a rational and a homogeneous strategy toward all parts of speech, but it does require an extra amount of conceptual overhead, in that every non-trivial type has to be split into two parts and comprehended in two stages.  Consequently, in view of the largely practical difficulties of making the requisite distinctions for every intermediate symbol, it is a common convention, whenever possible, to restrict intermediate types to covering exclusively non-empty strings.
 
  
For the sake of future reference, it is convenient to refer to this restriction on intermediate symbols as the "intermediate significance" constraint.  It can be stated in a compact form as a condition on the relations between non-terminal symbols q in {"S"} |_| !Q! and sentential forms W in {"S"} |_| (!Q! |_| !A!)*.
 
  
| Condition On Intermediate Significance
|
| If    q  :>  W
|
| and   W  =  !e!,
|
| then  q  =  "S".
 
  
If this is beginning to sound like a monotone condition, then it is not absurd to sharpen the resemblance and render the likeness more acute.  This is done by declaring a couple of ordering relations, denoting them under variant interpretations by the same sign "<".
 
  
1.  The ordering "<" on the set of non-terminal symbols, q in {"S"} |_| !Q!, ordains the initial symbol "S" to be strictly prior to every intermediate symbol.  This is tantamount to the axiom that "S" < q, for all q in !Q!.

2.  The ordering "<" on the collection of sentential forms, W in {"S"} |_| (!Q! |_| !A!)*, ordains the empty string to be strictly minor to every other sentential form.  This is stipulated in the axiom that !e! < W, for every non-empty sentential form W.
 
 
 
Given these two orderings, the constraint in question on intermediate significance can be stated as follows:
 
 
 
| Condition Of Intermediate Significance
|
| If    q  :>  W
|
| and   q  >  "S",
|
| then  W  >  !e!.
  
Achieving a grammar that respects this convention typically requires a more detailed account of the initial setting of a type, both with regard to the type of context that incites its appearance and also with respect to the minimal strings that arise under the type in question.  In order to find covering productions that satisfy the intermediate significance condition, one must be prepared to consider a wider variety of calling contexts or inciting situations that can be noted to surround each recognized type, and also to enumerate a larger number of the smallest cases that can be observed to fall under each significant type.
 
  
With the array of foregoing considerations in mind,
one is gradually led to a grammar for !L! = !C!(!P!)
in which all of the covering productions have one of
the following two forms:

| S  :>  !e!
|
| q  :>  W,  with  q in {"S"} |_| !Q!,  and  W in (!Q! |_| !A!)^+

A grammar that fits into this mold is called a "context-free" grammar.
The first type of rewrite rule is referred to as a "special production",
while the second type of rewrite rule is called an "ordinary production".
An "ordinary derivation" is one that employs only ordinary productions.
In ordinary productions, those that have the form q :> W, the replacement
string W is never the empty string, and so the lengths of the augmented
strings or the sentential forms that follow one another in an ordinary
derivation never decrease at any stage of the process, up to and including
the terminal string that is finally generated by the grammar.  This feature
is known as the "non-contracting property" of productions, derivations,
and grammars.  A grammar is said to have the property if all of its
covering productions, with the possible exception of S :> !e!,
are non-contracting.  In particular, context-free grammars are special
cases of non-contracting grammars.  The presence of the non-contracting
property within a formal grammar makes the length of the augmented string
available as a parameter that can figure into mathematical inductions and
motivate recursive proofs, and this handle on the generative process makes
it possible to establish the kinds of results about the generated language
that are not easy to achieve in more general cases, nor by any other means
even in these brands of special cases.

Returning to the case of the painted cactus language <math>\mathfrak{L} = \mathfrak{C} (\mathfrak{P}),</math> it is possible to put the currently assembled pieces of a grammar together in the light of the presently adopted canons of style, to arrive at a more refined analysis of the fact that the concept of a sentence covers any concatenation of sentences and any surcatenation of sentences, and so to obtain the following form of a grammar:

<br>

{| align="center" cellpadding="12" cellspacing="0" style="border-top:1px solid black" width="90%"
| align="left"  style="border-left:1px solid black;"  width="50%" |
<math>\mathfrak{C} (\mathfrak{P}) : \text{Grammar 2}\!</math>
| align="right" style="border-right:1px solid black;" width="50%" |
<math>\mathfrak{Q} = \{ \, ^{\backprime\backprime} T \, ^{\prime\prime} \, \}</math>
|-
| colspan="2" style="border-top:1px solid black; border-bottom:1px solid black; border-left:1px solid black; border-right:1px solid black" |
<math>\begin{array}{rcll}
1.
& S
& :>
& \varepsilon
\\
2.
& S
& :>
& m_1
\\
3.
& S
& :>
& p_j, \, \text{for each} \, j \in J
\\
4.
& S
& :>
& S \, \cdot \, S
\\
5.
& S
& :>
& ^{\backprime\backprime} \, \operatorname{(} \, ^{\prime\prime} \, \cdot \, T \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime}
\\
6.
& T
& :>
& S
\\
7.
& T
& :>
& T \, \cdot \, ^{\backprime\backprime} \operatorname{,} ^{\prime\prime} \, \cdot \, S
\\
\end{array}</math>
|}
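The productions of Grammar 2 translate directly into a recursive-descent recognizer. In the sketch below the paint alphabet {'a', 'b', 'c'}, the use of a space for the blank symbol m_1, and the plain bracket signs "(" and ")" are all illustrative assumptions, not choices fixed by the text.

```python
# Sketch: a recursive-descent recognizer for painted cactus expressions,
# following the sentence/tract productions displayed above.

PAINTS = set('abc')   # illustrative paint alphabet, one letter per p_j
BLANK = ' '           # illustrative spelling of the blank symbol m_1

def is_sentence(s: str) -> bool:
    """True if s parses as a sentence: a concatenation of paints, blanks,
    and parenthesized tracts."""
    pos, ok = _parse_sentence(s, 0)
    return ok and pos == len(s)

def _parse_sentence(s, i):
    # A sentence is any (possibly empty) concatenation of paints, blanks,
    # and foils "(" T ")".
    while i < len(s):
        c = s[i]
        if c in PAINTS or c == BLANK:
            i += 1
        elif c == '(':
            i, ok = _parse_tract(s, i + 1)
            if not ok or i >= len(s) or s[i] != ')':
                return i, False
            i += 1
        else:
            break
    return i, True

def _parse_tract(s, i):
    # A tract is a sentence, or a tract "," sentence (rules 6 and 7).
    i, ok = _parse_sentence(s, i)
    if not ok:
        return i, False
    while i < len(s) and s[i] == ',':
        i, ok = _parse_sentence(s, i + 1)
        if not ok:
            return i, False
    return i, True

print(is_sentence("(a,(b,c))"))   # True
print(is_sentence("(a,b"))        # False
```

Since the empty string is covered by rule 1, `is_sentence("")` also returns True.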
  
In this rendition, a string of type <math>T\!</math> is not in general a sentence itself but a proper ''part of speech'', that is, a strictly ''lesser'' component of a sentence in any suitable ordering of sentences and their components.  In order to see how the grammatical category <math>T\!</math> gets off the ground, that is, to detect its minimal strings and to discover how its ensuing generations get started from these, it is useful to observe that the covering rule <math>T :> S\!</math> means that <math>T\!</math> ''inherits'' all of the initial conditions of <math>S,\!</math> namely, <math>T \, :> \, \varepsilon, m_1, p_j.</math>  In accord with these simple beginnings it comes to parse that the rule <math>T \, :> \, T \, \cdot \, ^{\backprime\backprime} \operatorname{,} ^{\prime\prime} \, \cdot \, S,</math> with the substitutions <math>T = \varepsilon</math> and <math>S = \varepsilon</math> on the covered side of the rule, bears the germinal implication that <math>T \, :> \, ^{\backprime\backprime} \operatorname{,} ^{\prime\prime}.</math>

Grammar 5 is a context-free grammar for the painted cactus language
that uses !Q! = {"S'", "T"}, with !K! as listed in the next display.

| !C!(!P!).  Grammar 5
|
| !Q! = {"S'", "T"}
|
|  1.  S   :>  !e!
|
|  2.  S   :>  S'
|
|  3.  S'  :>  m_1
|
|  4.  S'  :>  p_j, for each j in J
|
|  5.  S'  :>  S' · S'
|
|  6.  S'  :>  "-()-"
|
|  7.  S'  :>  "-(" · T · ")-"
|
|  8.  T   :>  ","
|
|  9.  T   :>  S'
|
| 10.  T   :>  T · ","
|
| 11.  T   :>  T · "," · S'
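One way to exercise Grammar 5 is to sample strings from it by expanding nonterminals at random. In this sketch the bracket spellings "-(" and ")-" follow the listing above, while writing the blank m_1 as 'm' and a single paint p_j as 'p' is an illustrative assumption.

```python
# Sketch: sample strings from Grammar 5 by random expansion of nonterminals.

import random

RULES = {
    "S":  ["", "S'"],                             # rules 1-2
    "S'": ["m", "p", "S'S'", "-()-", "-(T)-"],    # rules 3-7
    "T":  [",", "S'", "T,", "T,S'"],              # rules 8-11
}

def sample(symbol="S", depth=0, max_depth=6):
    """Expand symbol by a randomly chosen production, restricting to the
    first (shortest) alternatives once the expansion gets deep."""
    options = RULES[symbol]
    if depth >= max_depth:
        options = options[:2]
    rhs = random.choice(options)
    out, i = [], 0
    while i < len(rhs):
        if rhs.startswith("S'", i):
            out.append(sample("S'", depth + 1, max_depth)); i += 2
        elif rhs[i] == "T":
            out.append(sample("T", depth + 1, max_depth)); i += 1
        else:
            out.append(rhs[i]); i += 1
    return "".join(out)

random.seed(0)
print(sample())   # one randomly generated cactus expression
```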
 
  
Grammar&nbsp;2 achieves a portion of its success through a higher degree of intermediate organization.  Roughly speaking, the level of organization can be seen as reflected in the cardinality of the intermediate alphabet <math>\mathfrak{Q} = \{ \, ^{\backprime\backprime} T \, ^{\prime\prime} \, \}</math> but it is clearly not explained by this simple circumstance alone, since it is taken for granted that the intermediate symbols serve a purpose, a purpose that is easily recognizable but that may not be so easy to pin down and to specify exactly.  Nevertheless, it is worth the trouble of exploring this aspect of organization and this direction of development a little further.

Finally, it is worth trying to bring together the advantages of these
diverse styles of grammar, to whatever extent they are compatible.
To do this, a prospective grammar must be capable of maintaining a high
level of intermediate organization, like that arrived at in Grammar 2,
while respecting the principle of intermediate significance, and thus
accumulating all the benefits of the context-free format in Grammar 5.
A plausible synthesis of most of these features is given in Grammar 6.
 
  
| !C!(!P!).  Grammar 6
|
| !Q! = {"S'", "R", "F", "T"}
|
|  1.  S   :>  !e!
|
|  2.  S   :>  S'
|
|  3.  S'  :>  R
|
|  4.  S'  :>  F
|
|  5.  S'  :>  S' · S'
|
|  6.  R   :>  m_1
|
|  7.  R   :>  p_j, for each j in J
|
|  8.  R   :>  R · R
|
|  9.  F   :>  "-()-"
|
| 10.  F   :>  "-(" · T · ")-"
|
| 11.  T   :>  ","
|
| 12.  T   :>  S'
|
| 13.  T   :>  T · ","
|
| 14.  T   :>  T · "," · S'

The preceding development provides a typical example of how an initially
effective and conceptually succinct description of a formal language, but
one that is terse to the point of allowing its prospective interpreter to
waste exorbitant amounts of energy in trying to unravel its implications,
can be converted into a form that is more efficient from the operational
point of view, even if slightly more ungainly in regard to its elegance.

The basic idea behind all of this machinery remains the same:  Besides
the select body of formulas that are introduced as boundary conditions,
it merely institutes the following general rule:

| If    the strings S_1, ..., S_k are sentences,
|
| then  their concatenation in the form
|
|       Conc^k_j S_j  =  S_1 · ... · S_k
|
|       is a sentence,
|
| and   their surcatenation in the form
|
|       Surc^k_j S_j  =  "-(" · S_1 · "," · ... · "," · S_k · ")-"
|
|       is a sentence.

====Grammar 3====

Although it is not strictly necessary to do so, it is possible to organize the materials of our developing grammar in a slightly better fashion by recognizing two recurrent types of strings that appear in the typical cactus expression.  In doing this, one arrives at the following two definitions.

A ''rune'' is a string of blanks and paints concatenated together.  Thus, a typical rune <math>R\!</math> is a string over <math>\{ m_1 \} \cup \mathfrak{P},</math> possibly the empty string:

{| align="center" cellpadding="8" width="90%"
| <math>R\ \in\ ( \{ m_1 \} \cup \mathfrak{P} )^*</math>
|}
 
  
When there is no possibility of confusion, the letter <math>^{\backprime\backprime} R \, ^{\prime\prime}</math> can be used either as a string variable that ranges over the set of runes or else as a type name for the class of runes.  The latter reading amounts to the enlistment of a fresh intermediate symbol, <math>^{\backprime\backprime} R \, ^{\prime\prime} \in \mathfrak{Q},</math> as a part of a new grammar for <math>\mathfrak{C} (\mathfrak{P}).</math>  In effect, <math>^{\backprime\backprime} R \, ^{\prime\prime}</math> affords a grammatical recognition for any rune that forms a part of a sentence in <math>\mathfrak{C} (\mathfrak{P}).</math>  In situations where these variant usages are likely to be confused, the types of strings can be indicated by means of expressions like <math>r <: R\!</math> and <math>W <: R.\!</math>

A ''foil'' is a string of the form <math>{}^{\backprime\backprime} \, \operatorname{(} \, ^{\prime\prime} \, \cdot \, T \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime},\!</math> where <math>T\!</math> is a tract.  Thus, a typical foil <math>F\!</math> has the form:

{| align="center" cellpadding="8" width="90%"
|
<math>\begin{array}{*{15}{l}}
F
& =
& ^{\backprime\backprime} \, \operatorname{(} \, ^{\prime\prime}
& \cdot
& S_1
& \cdot
& ^{\backprime\backprime} \operatorname{,} ^{\prime\prime}
& \cdot
& \ldots
& \cdot
& ^{\backprime\backprime} \operatorname{,} ^{\prime\prime}
& \cdot
& S_k
& \cdot
& ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime}
\\
\end{array}</math>
|}

It is fitting to wrap up the foregoing developments by summarizing the
notion of a formal grammar that appeared to evolve in the present case.
For the sake of future reference and the chance of a wider application,
it is also useful to try to extract the scheme of a formalization that
potentially holds for any formal language.  The following presentation
of the notion of a formal grammar is adapted, with minor modifications,
from the treatment in (DDQ, 60-61).

A "formal grammar" !G! is given by a four-tuple !G! = ("S", !Q!, !A!, !K!)
that takes the following form of description:

1.  "S" is the "initial", "special", "start", or "sentence symbol".
    Since the letter "S" serves this function only in a special setting,
    its employment in this role need not create any confusion with its
    other typical uses as a string variable or as a sentence variable.

2.  !Q! = {q_1, ..., q_m} is a finite set of "intermediate symbols",
    all distinct from "S".

3.  !A! = {a_1, ..., a_n} is a finite set of "terminal symbols",
    also known as the "alphabet" of !G!, all distinct from "S" and
    disjoint from !Q!.  Depending on the particular conception of the
    language !L! that is "covered", "generated", "governed", or "ruled"
    by the grammar !G!, that is, whether !L! is conceived to be a set of
    words, sentences, paragraphs, or more extended structures of discourse,
    it is usual to describe !A! as the "alphabet", "lexicon", "vocabulary",
    "liturgy", or "phrase book" of both the grammar !G! and the language !L!
    that it regulates.

4.  !K! is a finite set of "characterizations".  Depending on how they
    come into play, these are variously described as "covering rules",
    "formations", "productions", "rewrite rules", "subsumptions",
    "transformations", or "typing rules".

To describe the elements of !K! it helps to define some additional terms:

a.  The symbols in {"S"} |_| !Q! |_| !A! form the "augmented alphabet" of !G!.

b.  The symbols in {"S"} |_| !Q! are the "non-terminal symbols" of !G!.

c.  The symbols in !Q! |_| !A! are the "non-initial symbols" of !G!.

d.  The strings in ({"S"} |_| !Q! |_| !A!)* are the "augmented strings" for !G!.

e.  The strings in {"S"} |_| (!Q! |_| !A!)* are the "sentential forms" for !G!.

Each characterization in !K! is an ordered pair of strings (S_1, S_2)
that takes the following form:

| S_1  =  Q_1 · q · Q_2,
|
| S_2  =  Q_1 · W · Q_2.

In this scheme, S_1 and S_2 are members of the augmented strings for !G!,
more precisely, S_1 is a non-empty string and a sentential form over !G!,
while S_2 is a possibly empty string and also a sentential form over !G!.

Here also, q is a non-terminal symbol, that is, q is in {"S"} |_| !Q!,
while Q_1, Q_2, and W are possibly empty strings of non-initial symbols,
a fact that can be expressed in the form:  Q_1, Q_2, W in (!Q! |_| !A!)*.

In practice, the ordered pairs of strings in !K! are used to "derive",
to "generate", or to "produce" sentences of the language !L! = <!G!>
that is then said to be "governed" or "regulated" by the grammar !G!.
In order to facilitate this active employment of the grammar, it is
conventional to write the characterization (S_1, S_2) in either one
of the next two forms, where the more generic form is followed by
the more specific form:

| S_1            :>  S_2
|
| Q_1 · q · Q_2  :>  Q_1 · W · Q_2

In this usage, the characterization S_1 :> S_2 is tantamount to a grammatical
license to transform a string of the form Q_1 · q · Q_2 into a string of the
form Q_1 · W · Q_2, in effect, replacing the non-terminal symbol q with the
non-initial string W in any selected, preserved, and closely adjoining
context of the form Q_1 · ... · Q_2.  Accordingly, in this application
the notation "S_1 :> S_2" can be read as "S_1 produces S_2" or as
"S_1 transforms into S_2".

An "immediate derivation" in !G! is an ordered pair (W, W')
of sentential forms in !G! such that:

| W   =  Q_1 · X · Q_2,
|
| W'  =  Q_1 · Y · Q_2,
|
| and  (X, Y)  in !K!,
|
| i.e.  X :> Y  in !G!.
 
 
 
This relation is indicated by saying that W "immediately derives" W',
 
that W' is "immediately derived" from W in !G!, and also by writing:
 
 
 
W  ::>  W'.
 
 
 
A "derivation" in !G! is a finite sequence (W_1, ..., W_k)
 
of sentential forms over !G! such that each adjacent pair
 
(W_j, W_(j+1)) of sentential forms in the sequence is an
 
immediate derivation in !G!, in other words, such that:
 
 
 
W_j  ::>  W_(j+1), for all j = 1 to k-1.
 
 
 
If there exists a derivation (W_1, ..., W_k) in !G!,
 
one says that W_1 "derives" W_k in !G!, conversely,
 
that W_k is "derivable" from W_1 in !G!, and one
 
typically summarizes the derivation by writing:
 
 
 
W_1  :*:>  W_k.
 
 
 
The language !L! = !L!(!G!) = <!G!> that is "generated"
 
by the formal grammar !G! = ("S", !Q!, !A!, !K!) is the
 
set of strings over the terminal alphabet !A! that are
 
derivable from the initial symbol "S" by way of the
 
intermediate symbols in !Q! according to the
 
characterizations in !K!.  In sum:
 
 
 
!L!(!G!)  =  <!G!>  =  {W in !A!*  :  "S" :*:> W}.
 
 
 
Finally, a string W is called a "word", a "sentence", or so on,
 
of the language generated by !G! if and only if W is in !L!(!G!).
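The derivation machinery just defined can be animated in a few lines: productions as ordered pairs (X, Y), one-step rewrites W ::> W', and the generated language as the set of all-terminal strings derivable from "S". The toy grammar in the example is an assumption for illustration, not one of the grammars in the text.

```python
# Sketch: immediate derivations W ::> W' and breadth-first enumeration of
# the terminal strings derivable from the start symbol.

from collections import deque

def immediate_derivations(w, rules):
    """All W' with W ::> W': replace one occurrence of a left side X by Y."""
    out = []
    for x, y in rules:
        start = 0
        while True:
            i = w.find(x, start)
            if i < 0:
                break
            out.append(w[:i] + y + w[i + len(x):])
            start = i + 1
    return out

def generate(rules, terminals, start="S", max_len=4):
    """Enumerate the terminal strings of length <= max_len derivable from start."""
    seen, lang = {start}, set()
    queue = deque([start])
    while queue:
        w = queue.popleft()
        for w2 in immediate_derivations(w, rules):
            if len(w2) > max_len + 2 or w2 in seen:   # crude bound on growth
                continue
            seen.add(w2)
            if all(c in terminals for c in w2):
                lang.add(w2)
            else:
                queue.append(w2)
    return lang

# Toy grammar S :> !e! and S :> aSb, which generates a^n b^n.
rules = [("S", ""), ("S", "aSb")]
print(sorted(generate(rules, set("ab"), max_len=4)))   # prints ['', 'aabb', 'ab']
```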
 
  
Reference

| Denning, P.J., Dennis, J.B., Qualitz, J.E.,
| 'Machines, Languages, and Computation',
| Prentice-Hall, Englewood Cliffs, NJ, 1978.

</pre>

This is just the surcatenation of the sentences <math>S_1, \ldots, S_k.\!</math>  Given the possibility that this sequence of sentences is empty, and thus that the tract <math>T\!</math> is the empty string, the minimum foil <math>F\!</math> is the expression <math>^{\backprime\backprime} \, \operatorname{()} \, ^{\prime\prime}.</math>  Explicitly marking each foil <math>F\!</math> that is embodied in a cactus expression is tantamount to recognizing another intermediate symbol, <math>^{\backprime\backprime} F \, ^{\prime\prime} \in \mathfrak{Q},</math> further articulating the structures of sentences and expanding the grammar for the language <math>\mathfrak{C} (\mathfrak{P}).\!</math>  All of the same remarks about the versatile uses of the intermediate symbols, as string variables and as type names, apply again to the letter <math>^{\backprime\backprime} F \, ^{\prime\prime}.</math>
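The rune and foil types defined earlier give a simple top-level decomposition of a sentence, which can be sketched as follows. The plain bracket signs "(" and ")" and the assumption that the input is already a well-formed sentence are illustrative simplifications, not fixed by the text.

```python
# Sketch: split a cactus sentence into its maximal top-level runes and foils.
# Assumes the input is a well-formed sentence, so that every top-level run
# of non-bracket characters is a rune.

def split_runes_and_foils(s):
    """Return the list of maximal top-level ('rune', ...) and ('foil', ...)
    parts of the sentence s."""
    parts, i = [], 0
    while i < len(s):
        if s[i] == '(':
            depth, j = 1, i + 1
            while j < len(s) and depth:
                depth += {'(': 1, ')': -1}.get(s[j], 0)
                j += 1
            if depth:
                raise ValueError("unbalanced foil")
            parts.append(('foil', s[i:j]))
            i = j
        else:
            j = i
            while j < len(s) and s[j] != '(':
                j += 1
            parts.append(('rune', s[i:j]))
            i = j
    return parts

print(split_runes_and_foils("ab(c,)d"))
# prints [('rune', 'ab'), ('foil', '(c,)'), ('rune', 'd')]
```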
 
  
<br>

{| align="center" cellpadding="12" cellspacing="0" style="border-top:1px solid black" width="90%"
| align="left"  style="border-left:1px solid black;"  width="50%" |
<math>\mathfrak{C} (\mathfrak{P}) : \text{Grammar 3}\!</math>
| align="right" style="border-right:1px solid black;" width="50%" |
<math>\mathfrak{Q} = \{ \, ^{\backprime\backprime} F \, ^{\prime\prime}, \, ^{\backprime\backprime} R \, ^{\prime\prime}, \, ^{\backprime\backprime} T \, ^{\prime\prime} \, \}</math>
|-
| colspan="2" style="border-top:1px solid black; border-bottom:1px solid black; border-left:1px solid black; border-right:1px solid black" |
<math>\begin{array}{rcll}
1.
& S
& :>
& R
\\
2.
& S
& :>
& F
\\
3.
& S
& :>
& S \, \cdot \, S
\\
4.
& R
& :>
& \varepsilon
\\
5.
& R
& :>
& m_1
\\
6.
& R
& :>
& p_j, \, \text{for each} \, j \in J
\\
7.
& R
& :>
& R \, \cdot \, R
\\
8.
& F
& :>
& ^{\backprime\backprime} \, \operatorname{(} \, ^{\prime\prime} \, \cdot \, T \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime}
\\
9.
& T
& :>
& S
\\
10.
& T
& :>
& T \, \cdot \, ^{\backprime\backprime} \operatorname{,} ^{\prime\prime} \, \cdot \, S
\\
\end{array}\!</math>
|}

<br>

In Grammar&nbsp;3, the first three Rules say that a sentence (a string of type <math>S\!</math>) is a rune (a string of type <math>R\!</math>), a foil (a string of type <math>F\!</math>), or an arbitrary concatenation of strings of these two types.  Rules&nbsp;4 through 7 specify that a rune <math>R\!</math> is an empty string <math>\varepsilon,</math> a blank symbol <math>m_1,\!</math> a paint <math>p_j,\!</math> or any concatenation of strings of these three types.  Rule&nbsp;8 characterizes a foil <math>F\!</math> as a string of the form <math>{}^{\backprime\backprime} \, \operatorname{(} \, ^{\prime\prime} \, \cdot \, T \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime},\!</math> where <math>T\!</math> is a tract.  The last two Rules say that a tract <math>T\!</math> is either a sentence <math>S\!</math> or else the concatenation of a tract, a comma, and a sentence, in that order.

==The Cactus Language : Stylistics==

{| align="center" cellpadding="0" cellspacing="0" width="90%"
|
<p>As a result, we can hardly conceive of how many possibilities there are for what we call objective reality.  Our sharp quills of knowledge are so narrow and so concentrated in particular directions that with science there are myriads of totally different real worlds, each one accessible from the next simply by slight alterations &mdash; shifts of gaze &mdash; of every particular discipline and subspecialty.</p>
|-
| align="right" | &mdash; Herbert J. Bernstein, &ldquo;Idols of Modern Science&rdquo;, [HJB, 38]
|}

<pre>
This Subsection highlights an issue of "style" that arises in describing
 
a formal language.  In broad terms, I use the word "style" to refer to a
 
loosely specified class of formal systems, typically ones that have a set
 
of distinctive features in common.  For instance, a style of proof system
 
usually dictates one or more rules of inference that are acknowledged as
 
conforming to that style.  In the present context, the word "style" is a
 
natural choice to characterize the varieties of formal grammars, or any
 
other sorts of formal systems that can be contemplated for deriving the
 
sentences of a formal language.
 
  
In looking at what seems like an incidental issue, the discussion arrives
at a critical point.  The question is:  What decides the issue of style?
Taking a given language as the object of discussion, what factors enter
into and determine the choice of a style for its presentation, that is,
a particular way of arranging and selecting the materials that come to
be involved in a description, a grammar, or a theory of the language?
To what degree is the determination accidental, empirical, pragmatic,
rhetorical, or stylistic, and to what extent is the choice essential,
logical, and necessary?  For that matter, what determines the order
of signs in a word, a sentence, a text, or a discussion?  All of
the corresponding parallel questions about the character of this
choice can be posed with regard to the constituent part as well
as with regard to the main constitution of the formal language.

At this point in the succession of grammars for <math>\mathfrak{C} (\mathfrak{P}),\!</math> the explicit uses of indefinite iterations, like the Kleene star operator, are now completely reduced to finite forms of concatenation, but the problems that some styles of analysis have with allowing non-terminal symbols to cover both themselves and the empty string are still present.
 
  
In order to answer this sort of question, at any level of articulation,
one has to inquire into the type of distinction that it invokes, between
arrangements and orders that are essential, logical, and necessary and
orders and arrangements that are accidental, rhetorical, and stylistic.
As a rough guide to its comprehension, a "logical order", if it resides
in the subject at all, can be approached by considering all of the ways
of saying the same things, in all of the languages that are capable of
saying roughly the same things about that subject.  Of course, the "all"
that appears in this rule of thumb has to be interpreted as a fittingly
qualified sort of universal.  For all practical purposes, it simply means
"all of the ways that a person can think of" and "all of the languages
that a person can conceive of", with all things being relative to the
particular moment of investigation.  For all of these reasons, the rule
must stand as little more than a rough idea of how to approach its object.

Any degree of reflection on this difficulty raises the general question:  What is a practical strategy for accounting for the empty string in the organization of any formal language that counts it among its sentences?  One answer that presents itself is this:  If the empty string belongs to a formal language, it suffices to count it once at the beginning of the formal account that enumerates its sentences and then to move on to more interesting materials.
 
  
If it is demonstrated that a given formal language can be presented in
any one of several styles of formal grammar, then the choice of a format
is accidental, optional, and stylistic to the very extent that it is free.
But if it can be shown that a particular language cannot be successfully
presented in a particular style of grammar, then the issue of style is
no longer free and rhetorical, but becomes to that very degree essential,
necessary, and obligatory, in other words, a question of the objective
logical order that can be found to reside in the object language.

Returning to the case of the cactus language <math>\mathfrak{C} (\mathfrak{P}),\!</math> in other words, the formal language <math>\operatorname{PARCE}\!</math> of ''painted and rooted cactus expressions'', it serves the purpose of efficient accounting to partition the language into the following couple of sublanguages:
 
  
<ol style="list-style-type:decimal">

<li>
<p>The ''emptily painted and rooted cactus expressions'' make up the language <math>\operatorname{EPARCE}</math> that consists of a single empty string as its only sentence.  In short:</p>

<p><math>\operatorname{EPARCE} \ = \ \underline\varepsilon \ = \ \{ \varepsilon \}</math></p></li>

<li>
<p>The ''significantly painted and rooted cactus expressions'' make up the language <math>\operatorname{SPARCE}</math> that consists of everything else, namely, all of the non-empty strings in the language <math>\operatorname{PARCE}.</math>  In sum:</p>

<p><math>\operatorname{SPARCE} \ = \ \operatorname{PARCE} \setminus \varepsilon</math></p></li>

</ol>

As a rough illustration of the difference between logical and rhetorical
orders, consider the kinds of order that are expressed and exhibited in
the following conjunction of implications:

"X => Y  and  Y => Z".

Here, there is a happy conformity between the logical content and the
rhetorical form, indeed, to such a degree that one hardly notices the
difference between them.  The rhetorical form is given by the order
of sentences in the two implications and the order of implications
in the conjunction.  The logical content is given by the order of
propositions in the extended implicational sequence:

X  =<  Y  =<  Z.

To see the difference between form and content, or manner and matter,
it is enough to observe a few of the ways that the expression can be
varied without changing its meaning, for example:

"Z <= Y  and  Y <= X".
  
Any style of declarative programming, also called "logic programming",
depends on a capacity, as embodied in a programming language or other
formal system, to describe the relation between problems and solutions
in logical terms.  A recurring problem in building this capacity is in
bridging the gap between ostensibly non-logical orders and the logical
orders that are used to describe and to represent them.  For instance,
to mention just a couple of the most pressing cases, and the ones that
are currently proving to be the most resistant to a complete analysis,
one has the orders of dynamic evolution and rhetorical transition that
manifest themselves in the process of inquiry and in the communication
of its results.

As a result of marking the distinction between empty and significant sentences, that is, by categorizing each of these three classes of strings as an entity unto itself and by conceptualizing the whole of its membership as falling under a distinctive symbol, one obtains an equation of sets that connects the three languages being marked:
 
  
{| align="center" cellpadding="8" width="90%"
| <math>\operatorname{SPARCE} \ = \ \operatorname{PARCE} \ - \ \operatorname{EPARCE}</math>
|}

This patch of the ongoing discussion is concerned with describing a
particular variety of formal languages, whose typical representative
is the painted cactus language !L! = !C!(!P!).  It is the intention of
this work to interpret this language for propositional logic, and thus
to use it as a sentential calculus, an order of reasoning that forms an
active ingredient and a significant component of all logical reasoning.
To describe this language, the standard devices of formal grammars and
formal language theory are more than adequate, but this only raises the
next question:  What sorts of devices are exactly adequate, and fit the
task to a "T"?  The ultimate desire is to turn the tables on the order
of description, and so begins a process of eversion that evolves to the
point of asking:  To what extent can the language capture the essential
features and laws of its own grammar and describe the active principles
of its own generation?  In other words:  How well can the language be
described by using the language itself to do so?
 
  
In order to speak to these questions, I have to express what a grammar says
about a language in terms of what a language can say on its own.  In effect,
it is necessary to analyze the kinds of meaningful statements that grammars
are capable of making about languages in general and to relate them to the
kinds of meaningful statements that the syntactic "sentences" of the cactus
language might be interpreted as making about the very same topics.  So far
in the present discussion, the sentences of the cactus language do not make
any meaningful statements at all, much less any meaningful statements about
languages and their constitutions.  As of yet, these sentences subsist in the
form of purely abstract, formal, and uninterpreted combinatorial constructions.

In sum, one has the disjoint union:
 
  
{| align="center" cellpadding="8" width="90%"
| <math>\operatorname{PARCE} \ = \ \operatorname{EPARCE} \ \cup \ \operatorname{SPARCE}</math>
|}

Before the capacity of a language to describe itself can be evaluated,
the missing link to meaning has to be supplied for each of its strings.
This calls for a dimension of semantics and a notion of interpretation,
topics that are taken up for the case of the cactus language !C!(!P!)
in Subsection 1.3.10.12.  Once a plausible semantics is prescribed for
this language it will be possible to return to these questions and to
address them in a meaningful way.
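The partition of PARCE into EPARCE and SPARCE amounts to a one-line comprehension. As a concrete check, here is a sketch over a small sample of PARCE strings; the sample itself is an illustrative assumption.

```python
# Sketch: partition a sample of PARCE strings into EPARCE and SPARCE.

sample_parce = ["", "()", "(a,b)", "ab()", "(())"]

eparce = {s for s in sample_parce if s == ""}
sparce = {s for s in sample_parce if s != ""}

# The two classes are disjoint and jointly exhaust the sample.
assert eparce & sparce == set()
assert eparce | sparce == set(sample_parce)
print(len(eparce), len(sparce))   # prints 1 4
```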
 
  
The prominent issue at this point is the distinct placements of formal
languages and formal grammars with respect to the question of meaning.
The sentences of a formal language are merely the abstract strings of
abstract signs that happen to belong to a certain set.  They do not by
themselves make any meaningful statements at all, not without mounting
a separate effort of interpretation, but the rules of a formal grammar
make meaningful statements about a formal language, to the extent that
they say what strings belong to it and what strings do not.  Thus, the
formal grammar, a formalism that appears to be even more skeletal than
the formal language, still has bits and pieces of meaning attached to it.
In a sense, the question of meaning is factored into two parts, structure
and value, leaving the aspect of value reduced in complexity and subtlety
to the simple question of belonging.  Whether this single bit of meaningful
value is enough to encompass all of the dimensions of meaning that we require,
and whether it can be compounded to cover the complexity that actually exists
in the realm of meaning -- these are questions for an extended future inquiry.

For brevity in the present case, and to serve as a generic device in any similar array of situations, let <math>S\!</math> be the type of an arbitrary sentence, possibly empty, and let <math>S'\!</math> be the type of a specifically non-empty sentence.  In addition, let <math>\underline\varepsilon</math> be the type of the empty sentence, in effect, the language <math>\underline\varepsilon = \{ \varepsilon \}</math> that contains a single empty string, and let a plus sign <math>^{\backprime\backprime} + ^{\prime\prime}</math> signify a disjoint union of types.  In the most general type of situation, where the type <math>S\!</math> is permitted to include the empty string, one notes the following relation among types:
 
  
{| align="center" cellpadding="8" width="90%"
| <math>S \ = \ \underline\varepsilon \ + \ S'</math>
|}

Perhaps I ought to comment on the differences between the present and
the standard definition of a formal grammar, since I am attempting to
strike a compromise with several alternative conventions of usage, and
thus to leave certain options open for future exploration.  All of the
changes are minor, in the sense that they are not intended to alter the
classes of languages that are able to be generated, but only to clear up
various ambiguities and sundry obscurities that affect their conception.
 
  
Primarily, the conventional scope of non-terminal symbols was expanded
+
With the distinction between empty and significant expressions in mind, I return to the grasp of the cactus language <math>\mathfrak{L} = \mathfrak{C} (\mathfrak{P}) = \operatorname{PARCE} (\mathfrak{P})</math> that is afforded by Grammar&nbsp;2, and, taking that as a point of departure, explore other avenues of possible improvement in the comprehension of these expressionsIn order to observe the effects of this alteration as clearly as possible, in isolation from any other potential factors, it is useful to strip away the higher levels intermediate organization that are present in Grammar&nbsp;3, and start again with a single intermediate symbol, as used in Grammar&nbsp;2.  One way of carrying out this strategy leads on to a grammar of the variety that will be articulated next.
to encompass the sentence symbol, mainly on account of all the contexts
 
where the initial and the intermediate symbols are naturally invoked in
 
the same breath.  By way of compensating for the usual exclusion of the
 
sentence symbol from the non-terminal class, an equivalent distinction
 
was introduced in the fashion of a distinction between the initial and
 
the intermediate symbols, and this serves its purpose in all of those
 
contexts where the two kinds of symbols need to be treated separately.
 
  
At the present point, I remain a bit worried about the motivations
+
====Grammar 4====
and the justifications for introducing this distinction, under any
 
name, in the first place.  It is purportedly designed to guarantee
 
that the process of derivation at least gets started in a definite
 
direction, while the real questions have to do with how it all ends.
 
The excuses of efficiency and expediency that I offered as plausible
 
and sufficient reasons for distinguishing between empty and significant
 
sentences are likely to be ephemeral, if not entirely illusory, since
 
intermediate symbols are still permitted to characterize or to cover
 
themselves, not to mention being allowed to cover the empty string,
 
and so the very types of traps that one exerts oneself to avoid at
 
the outset are always there to afflict the process at all of the
 
intervening times.
 
  
If one reflects on the form of grammar that is being prescribed here,
+
If one imposes the distinction between empty and significant types on each non-terminal symbol in Grammar&nbsp;2, then the non-terminal symbols <math>^{\backprime\backprime} S \, ^{\prime\prime}</math> and <math>^{\backprime\backprime} T \, ^{\prime\prime}</math> give rise to the expanded set of non-terminal symbols <math>^{\backprime\backprime} S \, ^{\prime\prime}, \, ^{\backprime\backprime} S' \, ^{\prime\prime}, \, ^{\backprime\backprime} T \, ^{\prime\prime}, \, ^{\backprime\backprime} T' \, ^{\prime\prime},</math> leaving the last three of these to form the new intermediate alphabet.  Grammar&nbsp;4 has the intermediate alphabet <math>\mathfrak{Q} \, = \, \{ \, ^{\backprime\backprime} S' \, ^{\prime\prime}, \, ^{\backprime\backprime} T \, ^{\prime\prime}, \, ^{\backprime\backprime} T' \, ^{\prime\prime} \, \},</math> with the set <math>\mathfrak{K}</math> of covering rules as listed in the next display.
it looks as if one sought, rather futilely, to avoid the problems of
 
recursion by proscribing the main program from calling itself, while
 
allowing any subprogram to do so.  But any trouble that is avoidable
 
in the part is also avoidable in the main, while any trouble that is
 
inevitable in the part is also inevitable in the main.  Consequently,
 
I am reserving the right to change my mind at a later stage, perhaps
 
to permit the initial symbol to characterize, to cover, to regenerate,
 
or to produce itself, if that turns out to be the best way in the end.
 
  
Before I leave this Subsection, I need to say a few things about
+
<br>
the manner in which the abstract theory of formal languages and
 
the pragmatic theory of sign relations interact with each other.
 
  
Formal language theory can seem like an awfully picky subject at times,
+
{| align="center" cellpadding="12" cellspacing="0" style="border-top:1px solid black" width="90%"
treating every symbol as a thing in itself the way it does, sorting out
+
| align="left"  style="border-left:1px solid black;"  width="50%" |
the nominal types of symbols as objects in themselves, and singling out
+
<math>\mathfrak{C} (\mathfrak{P}) : \text{Grammar 4}\!</math>
the passing tokens of symbols as distinct entities in their own rights.
+
| align="right" style="border-right:1px solid black;" width="50%" |
It has to continue doing this, if not for any better reason than to aid
+
<math>\mathfrak{Q} = \{ \, ^{\backprime\backprime} S' \, ^{\prime\prime}, \, ^{\backprime\backprime} T \, ^{\prime\prime}, \, ^{\backprime\backprime} T' \, ^{\prime\prime} \, \}</math>
in clarifying the kinds of languages that people are accustomed to use,
+
|-
to assist in writing computer programs that are capable of parsing real
+
| colspan="2" style="border-top:1px solid black; border-bottom:1px solid black; border-left:1px solid black; border-right:1px solid black" |
sentences, and to serve in designing programming languages that people
+
<math>\begin{array}{rcll}
would like to become accustomed to use. As a matter of fact, the only
+
1.
time that formal language theory becomes too picky, or a bit too myopic
+
& S
in its focus, is when it leads one to think that one is dealing with the
+
& :>
thing itself and not just with the sign of it, in other words, when the
+
& \varepsilon
people who use the tools of formal language theory forget that they are
+
\\
dealing with the mere signs of more interesting objects and not with the
+
2.
objects of ultimate interest in and of themselves.
+
& S
 +
& :>
 +
& S'
 +
\\
 +
3.
 +
& S'
 +
& :>
 +
& m_1
 +
\\
 +
4.
 +
& S'
 +
& :>
 +
& p_j, \, \text{for each} \, j \in J
 +
\\
 +
5.
 +
& S'
 +
& :>
 +
& ^{\backprime\backprime} \, \operatorname{(} \, ^{\prime\prime} \, \cdot \, T \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime}
 +
\\
 +
6.
 +
& S'
 +
& :>
 +
& S' \, \cdot \, S'
 +
\\
 +
7.
 +
& T
 +
& :>
 +
& \varepsilon
 +
\\
 +
8.
 +
& T
 +
& :>
 +
& T'
 +
\\
 +
9.
 +
& T'
 +
& :>
 +
& T \, \cdot \, ^{\backprime\backprime} \operatorname{,} ^{\prime\prime} \, \cdot \, S
 +
\\
 +
\end{array}</math>
 +
|}
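To see Grammar&nbsp;4 in action, its covering rules can be transcribed into a short Python sketch that enumerates the small terminal strings the grammar derives.  This is only an illustration: the letters <code>a</code> and <code>b</code> stand in for a hypothetical pair of paints p_j, and <code>_</code> stands in for the blank symbol m_1.

```python
from collections import deque

# Grammar 4 productions, transcribed as (head, replacement-tuple) pairs.
# Assumptions: two illustrative paints 'a' and 'b', blank symbol '_'.
PRODUCTIONS = [
    ("S",  ()),                 # 1. S  :> empty string
    ("S",  ("S'",)),            # 2. S  :> S'
    ("S'", ("_",)),             # 3. S' :> m_1
    ("S'", ("a",)),             # 4. S' :> p_j ...
    ("S'", ("b",)),             #    ... for each paint
    ("S'", ("(", "T", ")")),    # 5. S' :> ( T )
    ("S'", ("S'", "S'")),       # 6. S' :> S' S'
    ("T",  ()),                 # 7. T  :> empty string
    ("T",  ("T'",)),            # 8. T  :> T'
    ("T'", ("T", ",", "S")),    # 9. T' :> T , S
]
NONTERMINALS = {"S", "S'", "T", "T'"}

def generate(max_len=3):
    """Breadth-first search over sentential forms, collecting the
    terminal strings of length <= max_len that Grammar 4 derives."""
    seen, results = set(), set()
    queue = deque([("S",)])
    while queue:
        form = queue.popleft()
        if all(sym not in NONTERMINALS for sym in form):
            results.add("".join(form))
            continue
        # rewrite the leftmost non-terminal in every possible way
        i = next(k for k, sym in enumerate(form) if sym in NONTERMINALS)
        for head, body in PRODUCTIONS:
            if head == form[i]:
                new = form[:i] + body + form[i + 1:]
                # prune forms that must exceed the length bound:
                # only S and T can still vanish, all else has length >= 1
                if sum(1 for s in new if s not in ("S", "T")) <= max_len \
                        and new not in seen:
                    seen.add(new)
                    queue.append(new)
    return results

print(sorted(generate(3)))
```

Running the sketch yields the empty string, the single blanks and paints, their concatenations up to length three, and forms such as <code>()</code> and <code>(,)</code>; a string like <code>(_)</code> does not appear, since under these rules the type T covers only the empty string or strings containing a comma.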
  
As a result, there are a number of deleterious effects that can arise from
+
<br>
the extreme pickiness of formal language theory, arising, as is often the
 
case, when formal theorists forget the practical context of theorization.
 
It frequently happens that the exacting task of defining the membership
 
of a formal language leads one to think that this object and this object
 
alone is the justifiable end of the whole exercise.  The distractions of
 
this mediate objective render one liable to forget that one's penultimate
 
interest lies always with various kinds of equivalence classes of signs,
 
not entirely or exclusively with their more meticulous representatives.
 
  
When this happens, one typically goes on working oblivious to the fact
+
In this version of a grammar for <math>\mathfrak{L} = \mathfrak{C} (\mathfrak{P}),</math> the intermediate type <math>T\!</math> is partitioned as <math>T = \underline\varepsilon + T',</math> thereby parsing the intermediate symbol <math>T\!</math> in parallel fashion with the division of its overlying type as <math>S = \underline\varepsilon + S'.</math>  This is an option that I will choose to close off for now, but leave it open to consider at a later point. Thus, it suffices to give a brief discussion of what it involves, in the process of moving on to its chief alternative.
that many details about what transpires in the meantime do not matter
 
at all in the end, and one is likely to remain in blissful ignorance
 
of the circumstance that many special details of language membership
 
are bound, destined, and pre-determined to be glossed over with some
 
measure of indifference, especially when it comes down to the final
 
constitution of those equivalence classes of signs that are able to
 
answer for the genuine objects of the whole enterprise of language.
 
When any form of theory, against its initial and its best intentions,
 
leads to this kind of absence of mind that is no longer beneficial in
 
all of its main effects, the situation calls for an antidotal form of
 
theory, one that can restore the presence of mind that all forms of
 
theory are meant to augment.
 
  
The pragmatic theory of sign relations is called for in settings where
+
There does not appear to be anything radically wrong with trying this approach to types.  It is reasonable and consistent in its underlying principle, and it provides a rational and a homogeneous strategy toward all parts of speech, but it does require an extra amount of conceptual overhead, in that every non-trivial type has to be split into two parts and comprehended in two stages.  Consequently, in view of the largely practical difficulties of making the requisite distinctions for every intermediate symbol, it is a common convention, whenever possible, to restrict intermediate types to covering exclusively non-empty strings.
everything that can be named has many other names, that is to say, in
 
the usual case.  Of course, one would like to replace this superfluous
 
multiplicity of signs with an organized system of canonical signs, one
 
for each object that needs to be denoted, but reducing the redundancy
 
too far, beyond what is necessary to eliminate the factor of "noise" in
 
the language, that is, to clear up its effectively useless distractions,
 
can destroy the very utility of a typical language, which is intended to
 
provide a ready means to express a present situation, clear or not, and
 
to describe an ongoing condition of experience in just the way that it
 
seems to present itself.  Within this fleshed out framework of language,
 
moreover, the process of transforming the manifestations of a sign from
 
its ordinary appearance to its canonical aspect is the whole problem of
 
computation in a nutshell.
 
  
It is a well-known truth, but an often forgotten fact, that nobody
+
For the sake of future reference, it is convenient to refer to this restriction on intermediate symbols as the ''intermediate significance'' constraint. It can be stated in a compact form as a condition on the relations between non-terminal symbols <math>q \in \{ \, ^{\backprime\backprime} S \, ^{\prime\prime} \, \} \cup \mathfrak{Q}</math> and sentential forms <math>W \in \{ \, ^{\backprime\backprime} S \, ^{\prime\prime} \, \} \cup (\mathfrak{Q} \cup \mathfrak{A})^*.</math>
computes with numbers, but solely with numerals in respect of numbers,
 
and numerals themselves are symbols.  Among other things, this renders
 
all discussion of numeric versus symbolic computation a bit beside the
 
point, since it is only a question of what kinds of symbols are best for
 
one's immediate application or for one's selection of ongoing objectives.
 
The numerals that everybody knows best are just the canonical symbols,
 
the standard signs or the normal terms for numbers, and the process of
 
computation is a matter of getting from the arbitrarily obscure signs
 
that the data of a situation are capable of throwing one's way to the
 
indications of its character that are clear enough to motivate action.
 
  
Having broached the distinction between propositions and sentences, one
+
<br>
can see its similarity to the distinction between numbers and numerals.
 
What are the implications of the foregoing considerations for reasoning
 
about propositions and for the realm of reckonings in sentential logic?
 
If the purpose of a sentence is just to denote a proposition, then the
 
proposition is just the object of whatever sign is taken for a sentence.
 
This means that the computational manifestation of a piece of reasoning
 
about propositions amounts to a process that takes place entirely within
 
a language of sentences, a procedure that can rationalize its account by
 
referring to the denominations of these sentences among propositions.
 
  
The application of these considerations in the immediate setting is this:
+
{| align="center" cellpadding="12" cellspacing="0" style="border-top:1px solid black" width="90%"
Do not worry too much about what roles the empty string "" and the blank
+
| align="center" style="border-left:1px solid black; border-right:1px solid black" |
symbol " " are supposed to play in a given species of formal languages.
+
<math>\text{Condition On Intermediate Significance}\!</math>
As it happens, it is far less important to wonder whether these types
+
|-
of formal tokens actually constitute genuine sentences than it is to
+
| style="border-top:1px solid black; border-bottom:1px solid black; border-left:1px solid black; border-right:1px solid black" |
decide what equivalence classes it makes sense to form over all of
+
<math>\begin{array}{lccc}
the sentences in the resulting language, and only then to bother
+
\text{If}
about what equivalence classes these limiting cases of sentences
+
& q
are most conveniently taken to represent.
+
& :>
 +
& W
 +
\\
 +
\text{and}
 +
& W
 +
& =
 +
& \varepsilon
 +
\\
 +
\text{then}
 +
& q
 +
& =
 +
& ^{\backprime\backprime} S \, ^{\prime\prime}
 +
\\
 +
\end{array}</math>
 +
|}
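Stated in code, the condition amounts to a simple predicate on a grammar's production list.  The Python sketch below is illustrative, with productions given as head/body pairs; the fragments of Grammars&nbsp;4 and&nbsp;5 shown are just enough to exhibit a failing and a passing case.

```python
def satisfies_intermediate_significance(productions, initial="S"):
    """Check the Condition on Intermediate Significance: whenever a
    production q :> W has W equal to the empty string, its head q
    must be the initial symbol.  Productions are (head, body) pairs,
    where an empty body tuple stands for the empty string."""
    return all(head == initial for head, body in productions
               if len(body) == 0)

# A fragment of Grammar 4 violates the condition, since rule 7
# is T :> empty string ...
grammar4 = [("S", ()), ("S", ("S'",)), ("T", ()), ("T", ("T'",))]
# ... while the corresponding fragment of Grammar 5 respects it:
# only S covers the empty string.
grammar5 = [("S", ()), ("S", ("S'",)), ("T", (",",)), ("T", ("S'",))]

print(satisfies_intermediate_significance(grammar4))  # False
print(satisfies_intermediate_significance(grammar5))  # True
```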
  
These concerns about boundary conditions betray a more general issue.
+
<br>
Already by this point in discussion the limits of the purely syntactic
 
approach to a language are beginning to be visible.  It is not that one
 
cannot go a whole lot further by this road in the analysis of a particular
 
language and in the study of languages in general, but when it comes to the
 
questions of understanding the purpose of a language, of extending its usage
 
in a chosen direction, or of designing a language for a particular set of uses,
 
what matters above all else are the "pragmatic equivalence classes" of signs that
 
are demanded by the application and intended by the designer, and not so much the
 
peculiar characters of the signs that represent these classes of practical meaning.
 
  
Any description of a language is bound to have alternative descriptions.
+
If this is beginning to sound like a monotone condition, then it is not absurd to sharpen the resemblance and render the likeness more acute. This is done by declaring a couple of ordering relations, denoting them under variant interpretations by the same sign, <math>^{\backprime\backprime}\!< \, ^{\prime\prime}.</math>
More precisely, a circumscribed description of a formal language, as any
 
effectively finite description is bound to be, is certain to suggest the
 
equally likely existence and the possible utility of other descriptions.
 
A single formal grammar describes but a single formal language, but any
 
formal language is described by many different formal grammars, not all
 
of which afford the same grasp of its structure, provide an equivalent
 
comprehension of its character, or yield an interchangeable view of its
 
aspects.  Consequently, even with respect to the same formal language,
 
different formal grammars are typically better for different purposes.
 
  
With the distinctions that evolve among the different styles of grammar,
+
# The ordering <math>^{\backprime\backprime}\!< \, ^{\prime\prime}</math> on the set of non-terminal symbols, <math>q \in \{ \, ^{\backprime\backprime} S \, ^{\prime\prime} \, \} \cup \mathfrak{Q},</math> ordains the initial symbol <math>^{\backprime\backprime} S \, ^{\prime\prime}</math> to be strictly prior to every intermediate symbol.  This is tantamount to the axiom that <math>^{\backprime\backprime} S \, ^{\prime\prime} < q,</math> for all <math>q \in \mathfrak{Q}.</math>
and with the preferences that different observers display toward them,
+
# The ordering <math>^{\backprime\backprime}\!< \, ^{\prime\prime}</math> on the collection of sentential forms, <math>W \in \{ \, ^{\backprime\backprime} S \, ^{\prime\prime} \, \} \cup (\mathfrak{Q} \cup \mathfrak{A})^*,</math> ordains the empty string to be strictly minor to every other sentential form. This is stipulated in the axiom that <math>\varepsilon < W,</math> for every non-empty sentential form <math>W.\!</math>
there naturally comes the question: What is the root of this evolution?
 
  
One dimension of variation in the styles of formal grammars can be seen
+
Given these two orderings, the constraint in question on intermediate significance can be stated as follows:
by treating the union of languages, and especially the disjoint union of
 
languages, as a "sum", by treating the concatenation of languages as a
 
"product", and then by distinguishing the styles of analysis that favor
 
"sums of products" from those that favor "products of sums" as their
 
canonical forms of description.  If one examines the relation between
 
languages and grammars carefully enough to see the presence and the
 
influence of these different styles, and when one comes to appreciate
 
the ways that different styles of grammars can be used with different
 
degrees of success for different purposes, then one begins to see the
 
possibility that alternative styles of description can be based on
 
altogether different linguistic and logical operations.
 
  
It is possible to trace this divergence of styles to an even more primitive
+
<br>
division, one that distinguishes the "additive" or the "parallel" styles
 
from the "multiplicative" or the "serial" styles.  The issue is somewhat
 
confused by the fact that an "additive" analysis is typically expressed
 
in the form of a "series", in other words, a disjoint union of sets or a
 
linear sum of their independent effects.  But it is easy enough to sort
 
this out if one observes the more telling connection between "parallel"
 
and "independent".  Another way to keep the right associations straight
 
is to employ the term "sequential" in preference to the more misleading
 
term "serial".  Whatever one calls this broad division of styles, the
 
scope and sweep of their dimensions of variation can be delineated in
 
the following way:
 
  
1.  The "additive" or "parallel" styles favor "sums of products" as
+
{| align="center" cellpadding="12" cellspacing="0" style="border-top:1px solid black" width="90%"
    canonical forms of expression, pulling sums, unions, co-products,
+
| align="center" style="border-left:1px solid black; border-right:1px solid black" |
    and logical disjunctions to the outermost layers of analysis and
+
<math>\text{Condition On Intermediate Significance}\!</math>
    synthesis, while pushing products, intersections, concatenations,
+
|-
    and logical conjunctions to the innermost levels of articulation
+
| style="border-top:1px solid black; border-bottom:1px solid black; border-left:1px solid black; border-right:1px solid black" |
    and generation.  In propositional logic, this style leads to the
+
<math>\begin{array}{lccc}
    "disjunctive normal form" (DNF).
+
\text{If}
 
+
& q
2.  The "multiplicative" or "serial" styles favor "products of sums"
+
& :>
    as canonical forms of expression, pulling products, intersections,
+
& W
    concatenations, and logical conjunctions to the outermost layers of
+
\\
    analysis and synthesis, while pushing sums, unions, co-products,
+
\text{and}
    and logical disjunctions to the innermost levels of articulation
+
& q
    and generation.  In propositional logic, this style leads to the
+
& >
    "conjunctive normal form" (CNF).
+
& ^{\backprime\backprime} S \, ^{\prime\prime}
 +
\\
 +
\text{then}
 +
& W
 +
& >
 +
& \varepsilon
 +
\\
 +
\end{array}</math>
 +
|}
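The contrast between the additive and the multiplicative styles, as it appears in propositional logic, can be illustrated with a short Python sketch (an aside, not part of the formal development) that reads both normal forms of a boolean function off its truth table: the disjunctive normal form collects one conjunct per satisfying row, while the conjunctive normal form collects one disjunct per falsifying row.

```python
from itertools import product

def normal_forms(f, names=("x", "y")):
    """Return (DNF, CNF) string renderings of a boolean function f,
    computed directly from its truth table."""
    rows = list(product((0, 1), repeat=len(names)))
    # DNF ("sum of products"): one conjunct per row where f is true
    dnf = [" & ".join(n if v else f"~{n}" for n, v in zip(names, row))
           for row in rows if f(*row)]
    # CNF ("product of sums"): one disjunct per row where f is false,
    # with each literal negated relative to the row
    cnf = [" | ".join(f"~{n}" if v else n for n, v in zip(names, row))
           for row in rows if not f(*row)]
    return (" | ".join(f"({t})" for t in dnf),
            " & ".join(f"({c})" for c in cnf))

xor = lambda x, y: x ^ y
dnf, cnf = normal_forms(xor)
print("DNF:", dnf)  # DNF: (~x & y) | (x & ~y)
print("CNF:", cnf)  # CNF: (x | y) & (~x | ~y)
```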
  
There is a curious sort of diagnostic clue, a veritable shibboleth,
+
<br>
that often serves to reveal the dominance of one mode or the other
 
within an individual thinker's cognitive style.  Examined on the
 
question of what constitutes the "natural numbers", an "additive"
 
thinker tends to start the sequence at 0, while a "multiplicative"
 
thinker tends to regard it as beginning at 1.
 
  
In any style of description, grammar, or theory of a language, it is
+
Achieving a grammar that respects this convention typically requires a more detailed account of the initial setting of a type, both with regard to the type of context that incites its appearance and also with respect to the minimal strings that arise under the type in question.  In order to find covering productions that satisfy the intermediate significance condition, one must be prepared to consider a wider variety of calling contexts or inciting situations that can be noted to surround each recognized type, and also to enumerate a larger number of the smallest cases that can be observed to fall under each significant type.
usually possible to tease out the influence of these contrasting traits,
 
namely, the "additive" attitude versus the "multiplicative" tendency that
 
go to make up the particular style in question, and even to determine the
 
dominant inclination or point of view that establishes its perspective on
 
the target domain.
 
  
In each style of formal grammar, the "multiplicative" aspect is present
+
====Grammar 5====
in the sequential concatenation of signs, both in the augmented strings
 
and in the terminal strings.  In settings where the non-terminal symbols
 
classify types of strings, the concatenation of the non-terminal symbols
 
signifies the cartesian product over the corresponding sets of strings.
 
  
In the context-free style of formal grammar, the "additive" aspect is
+
With the foregoing array of considerations in mind, one is gradually led to a grammar for <math>\mathfrak{L} = \mathfrak{C} (\mathfrak{P})</math> in which all of the covering productions have either one of the following two forms:
easy enough to spot.  It is signaled by the parallel covering of many
 
augmented strings or sentential forms by the same non-terminal symbol.
 
Expressed in active terms, this calls for the independent rewriting
 
of that non-terminal symbol by a number of different successors,
 
as in the following scheme:
 
  
| q    :>    W_1.
+
{| align="center" cellpadding="8" width="90%"
 
|
 
|
| ...  ...  ...
+
<math>\begin{array}{ccll}
|
+
S
| q    :>    W_k.
+
& :>
 +
& \varepsilon
 +
&
 +
\\
 +
q
 +
& :>
 +
& W,
 +
& \text{with} \ q \in \{ \, ^{\backprime\backprime} S \, ^{\prime\prime} \, \} \cup \mathfrak{Q} \ \text{and} \ W \in (\mathfrak{Q} \cup \mathfrak{A})^+
 +
\\
 +
\end{array}</math>
 +
|}
  
It is useful to examine the relationship between the grammatical covering
+
A grammar that fits into this mold is called a ''context-free grammar''.  The first type of rewrite rule is referred to as a ''special production'', while the second type of rewrite rule is called an ''ordinary production''.  An ''ordinary derivation'' is one that employs only ordinary productions.  In ordinary productions, those that have the form <math>q :> W,\!</math> the replacement string <math>W\!</math> is never the empty string, and so the lengths of the augmented strings or the sentential forms that follow one another in an ordinary derivation, on account of using the ordinary types of rewrite rules, never decrease at any stage of the process, up to and including the terminal string that is finally generated by the grammar.  This type of feature is known as the ''non-contracting property'' of productions, derivations, and grammars.  A grammar is said to have the property if all of its covering productions, with the possible exception of <math>S :> \varepsilon,</math> are non-contracting.  In particular, context-free grammars are special cases of non-contracting grammars.  The presence of the non-contracting property within a formal grammar makes the length of the augmented string available as a parameter that can figure into mathematical inductions and motivate recursive proofs, and this handle on the generative process makes it possible to establish the kinds of results about the generated language that are not easy to achieve in more general cases, nor by any other means even in these brands of special cases.
or production relation ":>" and the logical relation of implication "=>",
 
with one eye to what they have in common and one eye to how they differ.
 
The production "q :> W" says that the appearance of the symbol "q" in
 
a sentential form implies the possibility of exchanging it for "W".
 
Although this sounds like a "possible implication", to the extent
 
that "q implies a possible W" or that "q possibly implies W", the
 
qualifiers "possible" and "possibly" are the critical elements in
 
these statements, and they are crucial to the meaning of what is
 
actually being implied.  In effect, these qualifications reverse
 
the direction of implication, yielding "q <= W" as the best
 
analogue for the sense of the production.
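The non-contracting property discussed above also lends itself to a mechanical check.  In the Python sketch below (illustrative only), a grammar given as head/body pairs is scanned for contracting productions, and a sample ordinary derivation in the style of Grammar&nbsp;5 is verified to have sentential forms of non-decreasing length.

```python
def is_non_contracting(productions, initial="S"):
    """True if every production q :> W has a non-empty W, allowing
    only the special production S :> empty string as an exception."""
    return all(len(body) >= 1 or head == initial
               for head, body in productions)

def lengths_non_decreasing(forms):
    """In an ordinary derivation no sentential form may shrink, so
    the sequence of form lengths must be non-decreasing."""
    return all(len(a) <= len(b) for a, b in zip(forms, forms[1:]))

# A fragment of Grammar 5: S is the only symbol covering the empty string.
grammar5 = [("S", ()), ("S", ("S'",)), ("S'", ("S'", "S'")),
            ("S'", ("a",)), ("S'", ("(", "T", ")")), ("T", (",",))]
print(is_non_contracting(grammar5))        # True

# An ordinary derivation:  S' => S'S' => aS' => a(T) => a(,)
derivation = [["S'"], ["S'", "S'"], ["a", "S'"],
              ["a", "(", "T", ")"], ["a", "(", ",", ")"]]
print(lengths_non_decreasing(derivation))  # True
```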
 
  
One way to sum this up is to say that non-terminal symbols have the
+
Grammar&nbsp;5 is a context-free grammar for the painted cactus language that uses <math>\mathfrak{Q} = \{ \, ^{\backprime\backprime} S' \, ^{\prime\prime}, \, ^{\backprime\backprime} T \, ^{\prime\prime} \, \},</math> with <math>\mathfrak{K}</math> as listed in the next display.
significance of hypotheses.  The terminal strings form the empirical
 
matter of a language, while the non-terminal symbols mark the patterns
 
or the types of substrings that can be noticed in the profusion of data.
 
If one observes a portion of a terminal string that falls into the pattern
 
of the sentential form W, then it is an admissible hypothesis, according to
 
the theory of the language that is constituted by the formal grammar, that
 
this piece not only fits the type q but even comes to be generated under
 
the auspices of the non-terminal symbol "q".
 
  
A moment's reflection on the issue of style, giving due consideration to the
+
<br>
received array of stylistic choices, ought to inspire at least the question:
 
"Are these the only choices there are?"  In the present setting, there are
 
abundant indications that other options, more differentiated varieties of
 
description and more integrated ways of approaching individual languages,
 
are likely to be conceivable, feasible, and even more ultimately viable.
 
If a suitably generic style, one that incorporates the full scope of
 
logical combinations and operations, is broadly available, then it
 
would no longer be necessary, or even apt, to argue in universal
 
terms about "which style is best", but more useful to investigate
 
how we might adapt the local styles to the local requirements.
 
The medium of a generic style would yield a viable compromise
 
between "additive" and "multiplicative" canons, and render the
 
choice between "parallel" and "serial" a false alternative,
 
at least, when expressed in the globally exclusive terms
 
that are currently most commonly adopted to pose it.
 
  
One set of indications comes from the study of machines, languages, and
+
{| align="center" cellpadding="12" cellspacing="0" style="border-top:1px solid black" width="90%"
computation, especially the theories of their structures and relations.
+
| align="left"  style="border-left:1px solid black;"  width="50%" |
The forms of composition and decomposition that are generally known as
+
<math>\mathfrak{C} (\mathfrak{P}) : \text{Grammar 5}\!</math>
"parallel" and "serial" are merely the extreme special cases, in variant
+
| align="right" style="border-right:1px solid black;" width="50%" |
directions of specialization, of a more generic form, usually called the
+
<math>\mathfrak{Q} = \{ \, ^{\backprime\backprime} S' \, ^{\prime\prime}, \, ^{\backprime\backprime} T \, ^{\prime\prime} \, \}</math>
"cascade" form of combination. This is a well-known fact in the theories
+
|-
that deal with automata and their associated formal languages, but its
+
| colspan="2" style="border-top:1px solid black; border-bottom:1px solid black; border-left:1px solid black; border-right:1px solid black" |
implications do not seem to be widely appreciated outside these fields.
+
<math>\begin{array}{rcll}
In particular, it dispels the need to choose one extreme or the other,
+
1.
since most of the natural cases are likely to exist somewhere in between.
+
& S
 +
& :>
 +
& \varepsilon
 +
\\
 +
2.
 +
& S
 +
& :>
 +
& S'
 +
\\
 +
3.
 +
& S'
 +
& :>
 +
& m_1
 +
\\
 +
4.
 +
& S'
 +
& :>
 +
& p_j, \, \text{for each} \, j \in J
 +
\\
 +
5.
 +
& S'
 +
& :>
 +
& S' \, \cdot \, S'
 +
\\
 +
6.
 +
& S'
 +
& :>
 +
& ^{\backprime\backprime} \, \operatorname{()} \, ^{\prime\prime}
 +
\\
 +
7.
 +
& S'
 +
& :>
 +
& ^{\backprime\backprime} \, \operatorname{(} \, ^{\prime\prime} \, \cdot \, T \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime}
 +
\\
 +
8.
 +
& T
 +
& :>
 +
& ^{\backprime\backprime} \, \operatorname{,} \, ^{\prime\prime}
 +
\\
 +
9.
 +
& T
 +
& :>
 +
& S'
 +
\\
 +
10.
 +
& T
 +
& :>
 +
& T \, \cdot \, ^{\backprime\backprime} \, \operatorname{,} \, ^{\prime\prime}
 +
\\
 +
11.
 +
& T
 +
& :>
 +
& T \, \cdot \, ^{\backprime\backprime} \, \operatorname{,} \, ^{\prime\prime} \, \cdot \, S'
 +
\\
 +
\end{array}</math>
 +
|}
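As a cross-check on Grammar&nbsp;5, one can write a small recursive-descent recognizer for strings of the same shape: concatenations of blanks, paints, and parenthesized comma-separated lists of the same.  This is a sketch under stated assumptions: <code>a</code> and <code>b</code> model the paints, <code>_</code> models the blank, and the recognizer tests the shape of the language rather than reproducing the grammar's derivations.

```python
def is_cactus(s, paints="ab", blank="_"):
    """Recognize strings of the shape generated by Grammar 5:
    a (possibly empty) run of blanks, paints, and parenthesized,
    comma-separated lists of such strings."""
    atoms = set(paints) | {blank}

    def seq(i):
        # parse a maximal S: a run of atoms and parenthesized lists
        while i < len(s):
            if s[i] in atoms:
                i += 1
            elif s[i] == "(":
                i = plist(i + 1)
                if i is None:
                    return None
            else:
                break
        return i

    def plist(i):
        # parse "T )" after the opening "(" has been consumed:
        # comma-separated S-items, then the closing parenthesis
        while True:
            i = seq(i)
            if i is None or i == len(s):
                return None        # unclosed parenthesis
            if s[i] == ",":
                i += 1
            elif s[i] == ")":
                return i + 1
            else:
                return None

    return seq(0) == len(s)

# Only the last two strings are rejected.
for w in ["", "()", "(a)", "(,)", "a(b,(a))", ")(", "(a"]:
    print(repr(w), is_cactus(w))
```

Note that, unlike the sketch of Grammar&nbsp;4 above, this recognizer accepts <code>(a)</code>, reflecting the new rule 9, T&nbsp;:>&nbsp;S', which lets a parenthesized list consist of a single significant item.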
  
Another set of indications appears in algebra and category theory,
+
<br>
where forms of composition and decomposition related to the cascade
 
combination, namely, the "semi-direct product" and its special case,
 
the "wreath product", are encountered at higher levels of generality
 
than the cartesian products of sets or the direct products of spaces.
 
  
In these domains of operation, one finds it necessary to consider also
+
Finally, it is worth trying to bring together the advantages of these diverse styles of grammar, to whatever extent they are compatible.  To do this, a prospective grammar must be capable of maintaining a high level of intermediate organization, like that arrived at in Grammar&nbsp;2, while respecting the principle of intermediate significance, and thus accumulating all the benefits of the context-free format in Grammar&nbsp;5.  A plausible synthesis of most of these features is given in Grammar&nbsp;6.
the "co-product" of sets and spaces, a construction that artificially
 
creates a disjoint union of sets, that is, a union of spaces that are
 
being treated as independent.  It does this, in effect, by "indexing",
 
"coloring", or "preparing" the otherwise possibly overlapping domains
 
that are being combined.  What renders this a "chimera" or a "hybrid"
 
form of combination is the fact that this indexing is tantamount to a
 
cartesian product of a singleton set, namely, the conventional "index",
 
"color", or "affix" in question, with the individual domain that is
 
entering as a factor, a term, or a participant in the final result.
 
  
One of the insights that arises out of Peirce's logical work is that
+
====Grammar 6====
the set operations of complementation, intersection, and union, along
 
with the logical operations of negation, conjunction, and disjunction
 
that operate in isomorphic tandem with them, are not as fundamental as
 
they first appear.  This is because all of them can be constructed from
 
or derived from a smaller set of operations, in fact, taking the logical
 
side of things, from either one of two "solely sufficient" operators,
 
called "amphecks" by Peirce, "strokes" by those who re-discovered them
 
later, and known in computer science as the NAND and the NNOR operators.
 
For this reason, that is, by virtue of their precedence in the orders
 
of construction and derivation, these operations have to be regarded
 
as the simplest and the most primitive in principle, even if they are
 
scarcely recognized as lying among the more familiar elements of logic.
 
  
I am throwing together a wide variety of different operations into each
+
Grammar&nbsp;6 has the intermediate alphabet <math>\mathfrak{Q} = \{ \, ^{\backprime\backprime} S' \, ^{\prime\prime}, \, ^{\backprime\backprime} F \, ^{\prime\prime}, \, ^{\backprime\backprime} R \, ^{\prime\prime}, \, ^{\backprime\backprime} T \, ^{\prime\prime} \, \},</math> with the production set <math>\mathfrak{K}</math> as listed in the next display.
of the bins labeled "additive" and "multiplicative", but it is easy to
 
observe a natural organization and even some relations approaching
 
isomorphisms among and between the members of each class.
 
  
The relation between logical disjunction and set-theoretic union and the
+
<br>
relation between logical conjunction and set-theoretic intersection ought
 
to be clear enough for the purposes of the immediately present context.
 
In any case, all of these relations are scheduled to receive a thorough
 
examination in a subsequent discussion (Subsection 1.3.10.13).  But the
 
relation of a set-theoretic union to a category-theoretic co-product and
 
the relation of a set-theoretic intersection to a syntactic concatenation
 
deserve a closer look at this point.
 
  
The effect of a co-product as a "disjointed union", in other words, that
+
{| align="center" cellpadding="12" cellspacing="0" style="border-top:1px solid black" width="90%"
creates an object tantamount to a disjoint union of sets in the resulting
+
| align="left" style="border-left:1px solid black;"  width="50%" |
co-product even if some of these sets intersect non-trivially and even if
+
<math>{\mathfrak{C} (\mathfrak{P}) : \text{Grammar 6}}\!</math>
some of them are identical "in reality", can be achieved in several ways.
+
| align="right" style="border-right:1px solid black;" width="50%" |
The most usual conception is that of making a "separate copy", for each
+
<math>\mathfrak{Q} = \{ \, ^{\backprime\backprime} S' \, ^{\prime\prime}, \, ^{\backprime\backprime} F \, ^{\prime\prime}, \, ^{\backprime\backprime} R \, ^{\prime\prime}, \, ^{\backprime\backprime} T \, ^{\prime\prime} \, \}\!</math>
part of the intended co-product, of the set that is intended to go there.
+
|-
Often one thinks of the set that is assigned to a particular part of the
+
| colspan="2" style="border-top:1px solid black; border-bottom:1px solid black; border-left:1px solid black; border-right:1px solid black" |
co-product as being distinguished by a particular "color", in other words,
+
<math>\begin{array}{rcll}
by the attachment of a distinct "index", "label", or "tag", being a marker
+
1.
that is inherited by and passed on to every element of the set in that part.
+
& S
A concrete image of this construction can be achieved by imagining that each
+
& :>
set and each element of each set is placed in an ordered pair with the sign
+
& \varepsilon
of its color, index, label, or tag. One describes this as the "injection"
+
\\
of each set into the corresponding "part" of the co-product.
+
2.
 +
& S
 +
& :>
 +
& S'
 +
\\
 +
3.
 +
& S'
 +
& :>
 +
& R
 +
\\
 +
4.
 +
& S'
 +
& :>
 +
& F
 +
\\
 +
5.
 +
& S'
 +
& :>
 +
& S' \, \cdot \, S'
 +
\\
 +
6.
 +
& R
 +
& :>
 +
& m_1
 +
\\
 +
7.
 +
& R
 +
& :>
 +
& p_j, \, \text{for each} \, j \in J
 +
\\
 +
8.
 +
& R
 +
& :>
 +
& R \, \cdot \, R
 +
\\
 +
9.
 +
& F
 +
& :>
 +
& ^{\backprime\backprime} \, \operatorname{()} \, ^{\prime\prime}
 +
\\
 +
10.
 +
& F
 +
& :>
 +
& ^{\backprime\backprime} \, \operatorname{(} \, ^{\prime\prime} \, \cdot \, T \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime}
 +
\\
 +
11.
 +
& T
 +
& :>
 +
& ^{\backprime\backprime} \, \operatorname{,} \, ^{\prime\prime}
 +
\\
 +
12.
 +
& T
 +
& :>
 +
& S'
 +
\\
 +
13.
 +
& T
 +
& :>
 +
& T \, \cdot \, ^{\backprime\backprime} \, \operatorname{,} \, ^{\prime\prime}
 +
\\
 +
14.
 +
& T
 +
& :>
 +
& T \, \cdot \, ^{\backprime\backprime} \, \operatorname{,} \, ^{\prime\prime} \, \cdot \, S'
 +
\\
 +
\end{array}</math>
 +
|}
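As a quick check on Grammar&nbsp;6, the following sketch (hypothetical Python, not part of the text; productions keyed by the numbers in the table above) replays a derivation of the sentence <code>(,)</code> by rewriting the leftmost occurrence of each rule's left-hand side:

```python
# Hypothetical sketch of a Grammar 6 derivation.  Each step rewrites the
# leftmost occurrence of the production's left-hand side.  Caveat: naive
# string replacement can confuse "S" with the two-character symbol "S'",
# so the step sequence below is chosen to avoid that collision.
rules = {
    2:  ('S',  "S'"),    # 2.  S  :> S'
    4:  ("S'", 'F'),     # 4.  S' :> F
    10: ('F',  '(T)'),   # 10. F  :> "(" T ")"
    11: ('T',  ','),     # 11. T  :> ","
}

form = 'S'
for n in (2, 4, 10, 11):
    lhs, rhs = rules[n]
    form = form.replace(lhs, rhs, 1)

print(form)  # → (,)
```

The trace of sentential forms is <code>S</code>, <code>S'</code>, <code>F</code>, <code>(T)</code>, <code>(,)</code>, exercising one production from each of the four non-terminal groups.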
  
For example, given the sets P and Q, overlapping or not, one can define
+
<br>
the "indexed" sets or the "marked" sets P_[1] and Q_[2], amounting to the
 
copy of P into the first part of the co-product and the copy of Q into the
 
second part of the co-product, in the following manner:
 
  
P_[1]  =  <P, 1>  =  {<x, 1>  :  x in P},
+
The preceding development provides a typical example of how an initially effective and conceptually succinct description of a formal language, but one that is terse to the point of allowing its prospective interpreter to waste exorbitant amounts of energy in trying to unravel its implications, can be converted into a form that is more efficient from the operational point of view, even if slightly more ungainly in regard to its elegance.
  
Q_[2]  =  <Q, 2>  =  {<x, 2>  :  x in Q}.
+
The basic idea behind all of this machinery remains the same: Besides the select body of formulas that are introduced as boundary conditions, it merely institutes the following general rule:
  
Using the sign "]_[" for this construction, the "sum", the "co-product",
+
{| align="center" cellpadding="8" width="90%"
or the "disjointed union" of P and Q in that order can be represented as
+
|-
the ordinary disjoint union of P_[1] and Q_[2].
+
| <math>\operatorname{If}</math>
 
+
| the strings <math>S_1, \ldots, S_k\!</math> are sentences,
P ]_[ Q  =  P_[1] |_| Q_[2].
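The tagging construction can be sketched in a few lines of Python (a hypothetical illustration; the function name is mine, not the text's). The copies P_[1] and Q_[2] stay disjoint even when P and Q overlap, so the co-product always has |P| + |Q| elements:

```python
def coproduct(P, Q):
    """Disjointed union:  tag each element with the index of its part,
    so the injected copies of P and Q cannot collide."""
    P1 = {(x, 1) for x in P}   # P_[1], the copy of P in the first part
    Q2 = {(x, 2) for x in Q}   # Q_[2], the copy of Q in the second part
    return P1 | Q2

P, Q = {1, 2, 3}, {3, 4}
print(len(P | Q))              # 4:  the plain union merges the shared 3
print(len(coproduct(P, Q)))    # 5:  the co-product keeps both copies
```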
+
|-
 
+
| <math>\operatorname{Then}</math>
The concatenation L_1 · L_2 of the formal languages L_1 and L_2 is
+
| their concatenation in the form
just the cartesian product of sets L_1 x L_2 without the extra x's,
+
|-
but the relation of cartesian products to set-theoretic intersections
+
| &nbsp;
and thus to logical conjunctions is far from being clear.  One way of
+
| <math>\operatorname{Conc}_{j=1}^k S_j \ = \ S_1 \, \cdot \, \ldots \, \cdot \, S_k</math>
seeing a type of relation is to focus on the information that is needed
+
|-
to specify each construction, and thus to reflect on the signs that are
+
| &nbsp;
used to carry this information.  As a first approach to the topic of
+
| is a sentence,
information, according to a strategy that seeks to be as elementary
+
|-
and as informal as possible, I introduce the following set of ideas,
+
| <math>\operatorname{And}</math>
intended to be taken in a very provisional way.
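The earlier remark that the concatenation L_1 · L_2 is "just the cartesian product of sets L_1 x L_2 without the extra x's" can be made concrete for finite languages (a hypothetical Python sketch): each pair (s, t) of the product is fused into the single string s + t.

```python
from itertools import product

# Hypothetical illustration:  concatenating two finite languages by
# fusing each pair of the cartesian product into one string.
L1 = {'a', 'b'}
L2 = {'c', 'd'}

concatenation = {s + t for s, t in product(L1, L2)}
print(sorted(concatenation))  # ['ac', 'ad', 'bc', 'bd']
```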
+
| their surcatenation in the form
 
+
|-
A "stricture" is a specification of a certain set in a certain place,
+
| &nbsp;
relative to a number of other sets, yet to be specified.  It is assumed
+
| <math>\operatorname{Surc}_{j=1}^k S_j \ = \ ^{\backprime\backprime} \, \operatorname{(} \, ^{\prime\prime} \, \cdot \, S_1 \, \cdot \, ^{\backprime\backprime} \, \operatorname{,} \, ^{\prime\prime} \, \cdot \, \ldots \, \cdot \, ^{\backprime\backprime} \, \operatorname{,} \, ^{\prime\prime} \, \cdot \, S_k \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime}</math>
that one knows enough to tell if two strictures are equivalent as pieces
 
of information, but any more determinate indications, like names for the
 
places that are mentioned in the stricture, or bounds on the number of
 
places that are involved, are regarded as being extraneous impositions,
 
outside the proper concern of the definition, no matter how convenient
 
they are found to be for a particular discussion.  As a schematic form
 
of illustration, a stricture can be pictured in the following shape:
 
 
 
"... x X x Q x X x ..."
 
 
 
A "strait" is the object that is specified by a stricture, in effect,
 
a certain set in a certain place of an otherwise yet to be specified
 
relation.  Somewhat sketchily, the strait that corresponds to the
 
stricture just given can be pictured in the following shape:
 
 
 
... x X x Q x X x ...
 
 
 
In this picture, Q is a certain set, and X is the universe of discourse
 
that is relevant to a given discussion.  Since a stricture does not, by
 
itself, contain a sufficient amount of information to specify the number
 
of sets that it intends to set in place, or even to specify the absolute
 
location of the set that it does set in place, it appears to place an
 
unspecified number of unspecified sets in a vague and uncertain strait.
 
Taken out of its interpretive context, the residual information that a
 
stricture can convey makes all of the following potentially equivalent
 
as strictures:
 
 
 
"Q",  "XxQxX",  "XxXxQxXxX",  ...
 
 
 
With respect to what these strictures specify, this
 
leaves all of the following equivalent as straits:
 
 
 
Q  =  XxQxX  =  XxXxQxXxX  =  ...
 
 
 
Within the framework of a particular discussion, it is customary to
 
set a bound on the number of places and to limit the variety of sets
 
that are regarded as being under active consideration, and it is also
 
convenient to index the places of the indicated relations, and of their
 
encompassing cartesian products, in some fixed way.  But the whole idea
 
of a stricture is to specify a strait that is capable of extending through
 
and beyond any fixed frame of discussion.  In other words, a stricture is
 
conceived to constrain a strait at a certain point, and then to leave it
 
literally embedded, if tacitly expressed, in a yet to be fully specified
 
relation, one that involves an unspecified number of unspecified domains.
 
 
 
A quantity of information is a measure of constraint.  In this respect,
 
a set of comparable strictures is ordered on account of the information
 
that each one conveys, and a system of comparable straits is ordered in
 
accord with the amount of information that it takes to pin each one of
 
them down.  Strictures that are more constraining and straits that are
 
more constrained are placed at higher levels of information than those
 
that are less so, and entities that involve more information are said
 
to have a greater "complexity" in comparison with those entities that
 
involve less information, which are said to have a greater "simplicity".
 
 
 
In order to create a concrete example, let me now institute a frame of
 
discussion where the number of places in a relation is bounded at two,
 
and where the variety of sets under active consideration is limited to
 
the typical subsets P and Q of a universe X.  Under these conditions,
 
one can use the following sorts of expression as schematic strictures:
 
 
 
| "X" ,  "P" ,  "Q" ,
 
|
 
| "XxX", "XxP", "XxQ",
 
|
 
| "PxX",  "PxP",  "PxQ",
 
|
 
| "QxX",  "QxP",  "QxQ".
 
 
 
These strictures and their corresponding straits are stratified according
 
to their amounts of information, or their levels of constraint, as follows:
 
 
 
| High:  "PxP",  "PxQ",  "QxP",  "QxQ".
 
|
 
| Med:    "P" ,  "XxP",  "PxX".
 
|
 
| Med:    "Q" ,  "XxQ",  "QxX".
 
|
 
| Low:    "X" ,  "XxX".
 
 
 
Within this framework, the more complex strait PxQ can be expressed
 
in terms of the simpler straits, PxX and XxQ.  More specifically,
 
it lends itself to being analyzed as their intersection, in the
 
following way:
 
 
 
PxQ  =  PxX |^| XxQ.
 
 
 
From here it is easy to see the relation of concatenation, by virtue of
 
these types of intersection, to the logical conjunction of propositions.
 
The cartesian product PxQ is described by a conjunction of propositions,
 
namely, "P_<1> and Q_<2>", subject to the following interpretation:
 
 
 
1.  "P_<1>" asserts that there is an element from
 
    the set P in the first place of the product.
 
 
 
2.  "Q_<2>" asserts that there is an element from
 
    the set Q in the second place of the product.
 
 
 
The integration of these two pieces of information can be taken
 
in that measure to specify a yet to be fully determined relation.
 
 
 
In a corresponding fashion at the level of the elements,
 
the ordered pair <p, q> is described by a conjunction
 
of propositions, namely, "p_<1> and q_<2>", subject
 
to the following interpretation:
 
 
 
1.  "p_<1>" says that p is in the first place
 
    of the product element under construction.
 
 
 
2.  "q_<2>" says that q is in the second place
 
    of the product element under construction.
 
 
 
Notice that, in construing the cartesian product of the sets P and Q or
 
the concatenation of the languages L_1 and L_2 in this way, one shifts
 
the level of the active construction:  from the tupling of the elements
 
in P and Q, or the concatenation of the strings internal to the languages
 
L_1 and L_2, to the concatenation of the external signs that it takes to
 
indicate these sets or these languages.  In other words, one passes to
 
a conjunction of indexed propositions, "P_<1> and Q_<2>", or to a
 
conjunction of assertions, "L_1_<1> and L_2_<2>", that marks the sets
 
or the languages in question for insertion in the indicated places of
 
a product set or a product language, respectively.  In effect, the
 
subscripting by the indices "<1>" and "<2>" can be recognized as a
 
special case of concatenation, albeit one that posts its editorial
 
remarks from an external "mark-up" language.
 
 
 
In order to systematize the relations that strictures and straits placed
 
at higher levels of complexity, constraint, information, and organization
 
have with those that are placed at the associated lower levels, I introduce
 
the following pair of definitions:
 
 
 
The j^th "excerpt" of a stricture of the form "S_1 x ... x S_k", regarded
 
within a frame of discussion where the number of places is limited to k,
 
is the stricture of the form "X x ... x S_j x ... x X".  In the proper
 
context, this can be written more succinctly as the stricture "S_j_<j>",
 
an assertion that places the j^th set in the j^th place of the product.
 
 
 
The j^th "extract" of a strait of the form S_1 x ... x S_k, constrained
 
to a frame of discussion where the number of places is restricted to k,
 
is the strait of the form X x ... x S_j x ... x X.  In the appropriate
 
context, this can be denoted more succinctly by the stricture "S_j_<j>",
 
an assertion that places the j^th set in the j^th place of the product.
 
 
 
In these terms, a stricture of the form "S_1 x ... x S_k"
 
can be expressed in terms of simpler strictures, to wit,
 
as a conjunction of its k excerpts:
 
 
 
"S_1 x ... x S_k"  =  "S_1_<1>" &  ...  & "S_k_<k>".
 
 
 
In a similar vein, a strait of the form S_1 x ... x S_k
 
can be expressed in terms of simpler straits, namely,
 
as an intersection of its k extracts:
 
 
 
S_1 x ... x S_k    =    S_1_<1> |^| ... |^| S_k_<k>.
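For a finite universe the extract construction, and the identity just stated, can be checked directly (a hypothetical Python sketch; the function name is mine, not the text's):

```python
from itertools import product

def extract(S_j, j, X, k):
    """The j-th extract of a strait within a k-place frame of discussion:
    the strait X x ... x S_j x ... x X, with S_j in the j-th place."""
    return set(product(*[S_j if i == j else X for i in range(1, k + 1)]))

X = {0, 1, 2}
P, Q = {0, 1}, {1, 2}

# P x Q is the intersection of its two extracts, PxX |^| XxQ:
print(set(product(P, Q)) == extract(P, 1, X, 2) & extract(Q, 2, X, 2))
```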
 
 
 
There is a measure of ambiguity that remains in this formulation,
 
but it is the best that I can do in the present informal context.
 
</pre>
 
 
 
==The Cactus Language : Mechanics==
 
 
 
{| align="center" cellpadding="0" cellspacing="0" width="90%"
 
|
 
<p>We are only now beginning to see how this works.  Clearly one of the mechanisms for picking a reality is the sociohistorical sense of what is important &mdash; which research program, with all its particularity of knowledge, seems most fundamental, most productive, most penetrating.  The very judgments which make us push narrowly forward simultaneously make us forget how little we know.  And when we look back at history, where the lesson is plain to find, we often fail to imagine ourselves in a parallel situation.  We ascribe the differences in world view to error, rather than to unexamined but consistent and internally justified choice.</p>
 
 
|-
 
|-
| align="right" | &mdash; Herbert J. Bernstein, "Idols of Modern Science", [HJB, 38]
+
| &nbsp;
 +
| is a sentence.
 
|}
 
|}
  
<pre>
+
===Generalities About Formal Grammars===
In this Subsection, I discuss the "mechanics" of parsing the
 
cactus language into the corresponding class of computational
 
data structures.  This provides each sentence of the language
 
with a translation into a computational form that articulates
 
its syntactic structure and prepares it for automated modes of
 
processing and evaluation.  For this purpose, it is necessary
 
to describe the target data structures at a fairly high level
 
of abstraction only, ignoring the details of address pointers
 
and record structures and leaving the more operational aspects
 
of implementation to the imagination of prospective programmers.
 
In this way, I can put off to another stage of elaboration and
 
refinement the description of the program that constructs these
 
pointers and operates on these graph-theoretic data structures.
 
 
 
The structure of a "painted cactus", insofar as it presents itself
 
to the visual imagination, can be described as follows.  The overall
 
structure, as given by its underlying graph, falls within the species
 
of graph that is commonly known as a "rooted cactus", and the only novel
 
feature that it adds to this is that each of its nodes can be "painted"
 
with a finite sequence of "paints", chosen from a "palette" that is given
 
by the parametric set {" "} |_| !P!  = {m_1} |_| {p_1, ..., p_k}.
 
 
 
It is conceivable, from a purely graph-theoretical point of view, to have
 
a class of cacti that are painted but not rooted, and so it is frequently
 
necessary, for the sake of precision, to more exactly pinpoint the target
 
species of graphical structure as a "painted and rooted cactus" (PARC).
 
 
 
A painted cactus, as a rooted graph, has a distinguished "node" that is
 
called its "root".  By starting from the root and working recursively,
 
the rest of its structure can be described in the following fashion.
 
 
 
Each "node" of a PARC consists of a graphical "point" or "vertex" plus
 
a finite sequence of "attachments", described in relative terms as the
 
attachments "at" or "to" that node.  An empty sequence of attachments
 
defines the "empty node".  Otherwise, each attachment is one of three
 
kinds:  a blank, a paint, or a type of PARC that is called a "lobe".
 
 
 
Each "lobe" of a PARC consists of a directed graphical "cycle" plus a
 
finite sequence of "accoutrements", described in relative terms as the
 
accoutrements "of" or "on" that lobe.  Recalling the circumstance that
 
every lobe that comes under consideration comes already attached to a
 
particular node, exactly one vertex of the corresponding cycle is the
 
vertex that comes from that very node.  The remaining vertices of the
 
cycle have their definitions filled out according to the accoutrements
 
of the lobe in question.  An empty sequence of accoutrements is taken
 
to be tantamount to a sequence that contains a single empty node as its
 
unique accoutrement, and either one of these ways of approaching it can
 
be regarded as defining a graphical structure that is called a "needle"
 
or a "terminal edge".  Otherwise, each accoutrement of a lobe is itself
 
an arbitrary PARC.
 
 
 
Although this definition of a lobe in terms of its intrinsic structural
 
components is logically sufficient, it is also useful to characterize the
 
structure of a lobe in comparative terms, that is, to view the structure
 
that typifies a lobe in relation to the structures of other PARC's and to
 
mark the inclusion of this special type within the general run of PARC's.
 
This approach to the question of types results in a form of description
 
that appears to be a bit more analytic, at least, in mnemonic or prima
 
facie terms, if not ultimately more revealing.  Working in this vein,
 
a "lobe" can be characterized as a special type of PARC that is called
 
an "unpainted root plant" (UR-plant).
 
 
 
An "UR-plant" is a PARC of a simpler sort, at least, with respect to the
 
recursive ordering of structures that is being followed here.  As a type,
 
it is defined by the presence of two properties, that of being "planted"
 
and that of having an "unpainted root".  These are defined as follows:
 
 
 
1.  A PARC is "planted" if its list of attachments has just one PARC.
 
 
 
2.  A PARC is "UR" if its list of attachments has no blanks or paints.
 
 
 
In short, an UR-planted PARC has a single PARC as its only attachment,
 
and since this attachment is prevented from being a blank or a paint,
 
the single attachment at its root has to be another sort of structure,
 
that which we call a "lobe".
 
 
 
To express the description of a PARC in terms of its nodes, each node
 
can be specified in the fashion of a functional expression, letting a
 
citation of the generic function name "Node" be followed by a list of
 
arguments that enumerates the attachments of the node in question, and
 
letting a citation of the generic function name "Lobe" be followed by a
 
list of arguments that details the accoutrements of the lobe in question.
 
Thus, one can write expressions of the following forms:
 
 
 
1.  Node^0         =  Node()

                   =  a node with no attachments.

    Node^k_j  C_j  =  Node(C_1, ..., C_k)

                   =  a node with the attachments C_1, ..., C_k.

2.  Lobe^0         =  Lobe()

                   =  a lobe with no accoutrements.

    Lobe^k_j  C_j  =  Lobe(C_1, ..., C_k)

                   =  a lobe with the accoutrements C_1, ..., C_k.
 
 
 
Working from a structural description of the cactus language,
 
or any suitable formal grammar for !C!(!P!), it is possible to
 
give a recursive definition of the function called "Parse" that
 
maps each sentence in PARCE(!P!) to the corresponding graph in
 
PARC(!P!).  One way to do this proceeds as follows:
 
 
 
1.  The parse of the concatenation Conc^k of the k sentences S_j,
 
    for j = 1 to k, is defined recursively as follows:
 
 
 
    a.  Parse(Conc^0)        =  Node^0.
 
 
 
    b.  For k > 0,
 
  
        Parse(Conc^k_j S_j) = Node^k_j Parse(S_j).
+
It is fitting to wrap up the foregoing developments by summarizing the notion of a formal grammar that appeared to evolve in the present case. For the sake of future reference and the chance of a wider application, it is also useful to try to extract the scheme of a formalization that potentially holds for any formal language. The following presentation of the notion of a formal grammar is adapted, with minor modifications, from the treatment in (DDQ, 60&ndash;61).
  
2.  The parse of the surcatenation Surc^k of the k sentences S_j,
+
A ''formal grammar'' <math>\mathfrak{G}</math> is given by a four-tuple <math>\mathfrak{G} = ( \, ^{\backprime\backprime} S \, ^{\prime\prime}, \, \mathfrak{Q}, \, \mathfrak{A}, \, \mathfrak{K} \, )</math> that takes the following form of description:
    for j = 1 to k, is defined recursively as follows:
 
  
    a.  Parse(Surc^0)        = Lobe^0.
+
<ol style="list-style-type:decimal">
  
    bFor k > 0,
+
<li><math>^{\backprime\backprime} S \, ^{\prime\prime}</math> is the ''initial'', ''special'', ''start'', or ''sentence'' symbolSince the letter <math>^{\backprime\backprime} S \, ^{\prime\prime}</math> serves this function only in a special setting, its employment in this role need not create any confusion with its other typical uses as a string variable or as a sentence variable.</li>
  
        Parse(Surc^k_j S_j)  =  Lobe^k_j Parse(S_j).
+
<li><math>\mathfrak{Q} = \{ q_1, \ldots, q_m \}</math> is a finite set of ''intermediate symbols'', all distinct from <math>^{\backprime\backprime} S \, ^{\prime\prime}.</math></li>
  
For ease of reference, Table 12 summarizes the mechanics of these parsing rules.
+
<li><math>\mathfrak{A} = \{ a_1, \dots, a_n \}</math> is a finite set of ''terminal symbols'', also known as the ''alphabet'' of <math>\mathfrak{G},</math> all distinct from <math>^{\backprime\backprime} S \, ^{\prime\prime}</math> and disjoint from <math>\mathfrak{Q}.</math>  Depending on the particular conception of the language <math>\mathfrak{L}</math> that is ''covered'', ''generated'', ''governed'', or ''ruled'' by the grammar <math>\mathfrak{G},</math> that is, whether <math>\mathfrak{L}</math> is conceived to be a set of words, sentences, paragraphs, or more extended structures of discourse, it is usual to describe <math>\mathfrak{A}</math> as the ''alphabet'', ''lexicon'', ''vocabulary'', ''liturgy'', or ''phrase book'' of both the grammar <math>\mathfrak{G}</math> and the language <math>\mathfrak{L}</math> that it regulates.</li>
  
Table 12.  Algorithmic Translation Rules
+
<li><math>\mathfrak{K}</math> is a finite set of ''characterizations''. Depending on how they come into play, these are variously described as ''covering rules'', ''formations'', ''productions'', ''rewrite rules'', ''subsumptions'', ''transformations'', or ''typing rules''.</li>
o------------------------o---------o------------------------o
|                        |  Parse  |                        |
| Sentence in PARCE      |   -->   | Graph in PARC          |
o------------------------o---------o------------------------o
|                        |         |                        |
| Conc^0                 |   -->   | Node^0                 |
|                        |         |                        |
| Conc^k_j  S_j          |   -->   | Node^k_j  Parse(S_j)   |
|                        |         |                        |
| Surc^0                 |   -->   | Lobe^0                 |
|                        |         |                        |
| Surc^k_j  S_j          |   -->   | Lobe^k_j  Parse(S_j)   |
|                        |         |                        |
o------------------------o---------o------------------------o
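The translation rules summarized above amount to a small recursive-descent parser. The following sketch (hypothetical Python, not part of the text; Node and Lobe are rendered as tagged tuples rather than pointer-linked records) maps a painted cactus expression to the corresponding graph:

```python
def parse(sentence):
    """Parse a painted cactus expression into a graph of tagged tuples:
    ('Node', attachments) and ('Lobe', accoutrements)."""
    attachments, i = _conc(sentence, 0)
    if i != len(sentence):
        raise ValueError("unbalanced parentheses in " + repr(sentence))
    return ('Node', attachments)

def _conc(s, i):
    """Conc^k_j S_j --> Node^k_j Parse(S_j):
    read a concatenation of blanks, paints, and lobes."""
    out = []
    while i < len(s) and s[i] not in '),':
        if s[i] == '(':
            accoutrements, i = _surc(s, i + 1)
            out.append(('Lobe', accoutrements))
        else:
            out.append(s[i])     # a blank ' ' or a paint such as 'a'
            i += 1
    return out, i

def _surc(s, i):
    """Surc^k_j S_j --> Lobe^k_j Parse(S_j):
    read comma-separated sentences up to the matching ')'."""
    accoutrements = []
    while True:
        attachments, i = _conc(s, i)
        accoutrements.append(('Node', attachments))
        if i >= len(s):
            raise ValueError("missing ')'")
        if s[i] == ')':
            return accoutrements, i + 1
        i += 1                   # step over the ','

print(parse("(a,())"))
```

Note that the sketch renders <code>()</code> as a lobe with a single empty node, in line with the convention that an empty sequence of accoutrements is tantamount to one containing a single empty node.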
 
  
A "substructure" of a PARC is defined recursively as follows.  Starting
+
</ol>
at the root node of the cactus C, any attachment is a substructure of C.
 
If a substructure is a blank or a paint, then it constitutes a minimal
 
substructure, meaning that no further substructures of C arise from it.
 
If a substructure is a lobe, then each one of its accoutrements is also
 
a substructure of C, and has to be examined for further substructures.
 
  
The concept of substructure can be used to define varieties of deletion
+
To describe the elements of <math>\mathfrak{K}</math> it helps to define some additional terms:
and erasure operations that respect the structure of the abstract graph.
 
For the purposes of this depiction, a blank symbol " " is treated as
 
a "primer", in other words, as a "clear paint", a "neutral tint", or
 
a "white wash".  In effect, one is letting m_1 = p_0.  In this frame
 
of discussion, it is useful to make the following distinction:
 
  
1.  To "delete" a substructure is to replace it with an empty node,
+
<ol style="list-style-type:lower-latin">
    in effect, to reduce the whole structure to a trivial point.
 
  
2.  To "erase" a substructure is to replace it with a blank symbol,
+
<li>The symbols in <math>\{ \, ^{\backprime\backprime} S \, ^{\prime\prime} \, \} \cup \mathfrak{Q} \cup \mathfrak{A}</math> form the ''augmented alphabet'' of <math>\mathfrak{G}.</math></li>
    in effect, to paint it out of the picture or to overwrite it.
 
  
A "bare" PARC, loosely referred to as a "bare cactus", is a PARC on the
+
<li>The symbols in <math>\{ \, ^{\backprime\backprime} S \, ^{\prime\prime} \, \} \cup \mathfrak{Q}</math> are the ''non-terminal symbols'' of <math>\mathfrak{G}.</math></li>
empty palette !P! = {}.  In other veins, a bare cactus can be described
 
in several different ways, depending on how the form arises in practice.
 
  
1.  Leaning on the definition of a bare PARCE, a bare PARC can be
+
<li>The symbols in <math>\mathfrak{Q} \cup \mathfrak{A}</math> are the ''non-initial symbols'' of <math>\mathfrak{G}.</math></li>
    described as the kind of a parse graph that results from parsing
 
    a bare cactus expression, in other words, as the kind of a graph
 
    that issues from the requirements of processing a sentence of
 
    the bare cactus language !C!^0 = PARCE^0.
 
  
2.  To express it more in its own terms, a bare PARC can be defined
+
<li>The strings in <math>( \{ \, ^{\backprime\backprime} S \, ^{\prime\prime} \, \} \cup \mathfrak{Q} \cup \mathfrak{A} )^*</math>  are the ''augmented strings'' for <math>\mathfrak{G}.</math></li>
    by tracing the recursive definition of a generic PARC, but then
 
    by detaching an independent form of description from the source
 
    of that analogy. The method is sufficiently sketched as follows:
 
  
    a.  A "bare PARC" is a PARC whose attachments
+
<li>The strings in <math>\{ \, ^{\backprime\backprime} S \, ^{\prime\prime} \, \} \cup (\mathfrak{Q} \cup \mathfrak{A})^*</math> are the ''sentential forms'' for <math>\mathfrak{G}.</math></li>
        are limited to blanks and "bare lobes".
 
  
    b.  A "bare lobe" is a lobe whose accoutrements
+
</ol>
        are limited to bare PARC's.
 
  
3.  In practice, a bare cactus is usually encountered in the process
+
Each characterization in <math>\mathfrak{K}</math> is an ordered pair of strings <math>(S_1, S_2)\!</math> that takes the following form:
    of analyzing or handling an arbitrary PARC, the circumstances of
 
    which frequently call for deleting or erasing all of its paints.
 
    In particular, this generally makes it easier to observe the
 
    various properties of its underlying graphical structure.
 
</pre>
 
  
==The Cactus Language : Semantics==
+
{| align="center" cellpadding="8" width="90%"
 
+
| <math>S_1 \ = \ Q_1 \cdot q \cdot Q_2,</math>
{| align="center" cellpadding="0" cellspacing="0" width="90%"
 
|
 
<p>Alas, and yet what ''are'' you, my written and painted thoughts!  It is not long ago that you were still so many-coloured, young and malicious, so full of thorns and hidden spices you made me sneeze and laugh &mdash; and now?  You have already taken off your novelty and some of you, I fear, are on the point of becoming truths:  they already look so immortal, so pathetically righteous, so boring!</p>
 
 
|-
 
|-
| align="right" | &mdash; Nietzsche, ''Beyond Good and Evil'', [Nie-2, ¶ 296]
+
| <math>S_2 \ = \ Q_1 \cdot W \cdot Q_2.</math>
 
|}
 
|}
  
<pre>
+
In this scheme, <math>S_1\!</math> and <math>S_2\!</math> are members of the augmented strings for <math>\mathfrak{G},</math> more precisely, <math>S_1\!</math> is a non-empty string and a sentential form over <math>\mathfrak{G},</math> while <math>S_2\!</math> is a possibly empty string and also a sentential form over <math>\mathfrak{G}.</math>
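The application of a characterization <math>(Q_1 \cdot q \cdot Q_2, \ Q_1 \cdot W \cdot Q_2)</math> can be sketched as a single rewriting step on a sentential form (a hypothetical Python illustration; the toy grammar and all names are mine, not the text's):

```python
def rewrite(form, lhs, rhs):
    """Apply one characterization (Q1.q.Q2, Q1.W.Q2) by replacing the
    leftmost occurrence of lhs (the symbol q) with rhs (the string W);
    Q1 and Q2 are simply the unchanged context around that occurrence."""
    i = form.index(lhs)          # Q1 = form[:i],  Q2 = form[i + len(lhs):]
    return form[:i] + rhs + form[i + len(lhs):]

# A toy grammar with the characterizations S :> "" and S :> "aS",
# which licenses the language {"", "a", "aa", ...}:
form = 'S'
for lhs, rhs in [('S', 'aS'), ('S', 'aS'), ('S', '')]:
    form = rewrite(form, lhs, rhs)

print(form)  # → aa
```

The trace of sentential forms is <code>S</code>, <code>aS</code>, <code>aaS</code>, <code>aa</code>, at which point only terminal symbols remain and the derivation has produced a sentence.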
In this Subsection, I describe a particular semantics for the
 
painted cactus language, telling what meanings I aim to attach
 
to its bare syntactic forms.  This supplies an "interpretation"
 
for this parametric family of formal languages, but it is good
 
to remember that it forms just one of many such interpretations
 
that are conceivable and even viable.  Indeed, the distinction
 
between the object domain and the sign domain can be observed in
 
the fact that many languages can be deployed to depict the same
 
set of objects and that any language worth its salt is bound to
 
give rise to many different forms of interpretive saliency.
 
  
In formal settings, it is common to speak of "interpretation" as if it
+
Here also, <math>q\!</math> is a non-terminal symbol, that is, <math>q \in \{ \, ^{\backprime\backprime} S \, ^{\prime\prime} \, \} \cup \mathfrak{Q},</math> while <math>Q_1, Q_2,\!</math> and <math>W\!</math> are possibly empty strings of non-initial symbols, a fact that can be expressed in the form, <math>Q_1, Q_2, W \in (\mathfrak{Q} \cup \mathfrak{A})^*.</math>
created a direct connection between the signs of a formal language and
 
the objects of the intended domain, in other words, as if it determined
 
the denotative component of a sign relation.  But a closer attention to
 
what goes on reveals that the process of interpretation is more indirect,
 
that what it does is to provide each sign of a prospectively meaningful
 
source language with a translation into an already established target
 
language, where "already established" means that its relationship to
 
pragmatic objects is taken for granted at the moment in question.
 
  
With this in mind, it is clear that interpretation is an affair of signs
+
In practice, the couplets in <math>\mathfrak{K}</math> are used to ''derive'', to ''generate'', or to ''produce'' sentences of the corresponding language <math>\mathfrak{L} = \mathfrak{L} (\mathfrak{G}).</math>  The language <math>\mathfrak{L}</math> is then said to be ''governed'', ''licensed'', or ''regulated'' by the grammar <math>\mathfrak{G},</math> a circumstance that is expressed in the form <math>\mathfrak{L} = \langle \mathfrak{G} \rangle.</math>  In order to facilitate this active employment of the grammar, it is conventional to write the abstract characterization <math>(S_1, S_2)\!</math> and the specific characterization <math>(Q_1 \cdot q \cdot Q_2, \ Q_1 \cdot W \cdot Q_2)</math> in the following forms, respectively:
that at best respects the objects of all of the signs that enter into it,
 
and so it is the connotative aspect of semiotics that is at stake here.
 
There is nothing wrong with my saying that I interpret a sentence of a
 
formal language as a sign that refers to a function or to a proposition,
 
so long as you understand that this reference is likely to be achieved
 
by way of more familiar and perhaps less formal signs that you already
 
take to denote those objects.
 
  
On entering a context where a logical interpretation is intended for the
sentences of a formal language there are a few conventions that make it
 
easier to make the translation from abstract syntactic forms to their
 
intended semantic senses.  Although these conventions are expressed in
 
unnecessarily colorful terms, from a purely abstract point of view, they
 
do provide a useful array of connotations that help to negotiate what is
 
otherwise a difficult transition.  This terminology is introduced as the
 
need for it arises in the process of interpreting the cactus language.
 
 
 
The task of this Subsection is to specify a "semantic function" for
 
the sentences of the cactus language !L! = !C!(!P!), in other words,
 
to define a mapping that "interprets" each sentence of !C!(!P!) as
 
a sentence that says something, as a sentence that bears a meaning,
 
in short, as a sentence that denotes a proposition, and thus as a
 
sign of an indicator function.  When the syntactic sentences of a
 
formal language are given a referent significance in logical terms,
 
for example, as denoting propositions or indicator functions, then
 
each form of syntactic combination takes on a corresponding form
 
of logical significance.
 
 
 
By way of providing a logical interpretation for the cactus language,
 
I introduce a family of operators on indicator functions that are
 
called "propositional connectives", and I distinguish these from
 
the associated family of syntactic combinations that are called
 
"sentential connectives", where the relationship between these
 
two realms of connection is exactly that between objects and
 
their signs.  A propositional connective, as an entity of a
 
well-defined functional and operational type, can be treated
 
in every way as a logical or a mathematical object, and thus
 
as the type of object that can be denoted by the corresponding
 
form of syntactic entity, namely, the sentential connective that
 
is appropriate to the case in question.
 
 
 
There are two basic types of connectives, called the "blank connectives"
 
and the "bound connectives", respectively, with one connective of each
 
type for each natural number k = 0, 1, 2, 3, ... .
 
 
 
1.  The "blank connective" of k places is signified by the
 
    concatenation of the k sentences that fill those places.
 
 
 
    For the special case of k = 0, the "blank connective" is taken to
 
    be an empty string or a blank symbol -- it does not matter which,
 
    since both are assigned the same denotation among propositions.
 
    For the generic case of k > 0, the "blank connective" takes
 
    the form "S_1 · ... · S_k".  In the type of data that is
 
    called a "text", the raised dots "·" are usually omitted,
 
    supplanted by whatever number of spaces and line breaks
 
    serve to improve the readability of the resulting text.
 
 
 
2.  The "bound connective" of k places is signified by the
 
    surcatenation of the k sentences that fill those places.
 
 
 
    For the special case of k = 0, the "bound connective" is taken to
 
    be an expression of the form "-()-", "-( )-", "-(  )-", and so on,
 
    with any number of blank symbols between the parentheses, all of
 
    which are assigned the same logical denotation among propositions.
 
    For the generic case of k > 0, the "bound connective" takes the
 
    form "-(S_1, ..., S_k)-".
 
 
 
At this point, there are actually two different "dialects", "scripts",
 
or "modes" of presentation for the cactus language that need to be
 
interpreted, in other words, that need to have a semantic function
 
defined on their domains.
 
 
 
a.  There is the literal formal language of strings in PARCE(!P!),
 
    the "painted and rooted cactus expressions" that constitute
 
    the language !L! = !C!(!P!) c !A!* = (!M! |_| !P!)*.
 
 
 
b.  There is the figurative formal language of graphs in PARC(!P!),
 
    the "painted and rooted cacti" themselves, a parametric family
 
    of graphs or a species of computational data structures that
 
    is graphically analogous to the language of literal strings.
 
 
 
Of course, these two modalities of formal language, like written and
 
spoken natural languages, are meant to have compatible interpretations,
 
and so it is usually sufficient to give just the meanings of either one.
 
All that remains is to provide a "codomain" or a "target space" for the
 
intended semantic function, in other words, to supply a suitable range
 
of logical meanings for the memberships of these languages to map into.
 
Out of the many interpretations that are formally possible to arrange,
 
one way of doing this proceeds by making the following definitions:
 
 
 
1.  The "conjunction" Conj^J_j Q_j of a set of propositions, {Q_j : j in J},
 
    is a proposition that is true if and only if each one of the Q_j is true.
 
 
 
    Conj^J_j Q_j is true  <=>  Q_j is true for every j in J.
 
 
 
2.  The "surjunction" Surj^J_j Q_j of a set of propositions, {Q_j : j in J},
 
    is a proposition that is true if and only if just one of the Q_j is untrue.
 
 
 
    Surj^J_j Q_j is true  <=>  Q_j is untrue for unique j in J.
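These two definitions are simple enough to sketch in executable form.  The following Python fragment (the names "conj" and "surj" are my ad hoc choices, not notation from the text) renders the conjunction and surjunction as predicates on a collection of boolean values; note that the empty cases come out true and false, respectively, in agreement with the denotations assigned below to Conc^0 and Surc^0:

```python
def conj(values):
    """Conjunction: true if and only if every value is true."""
    return all(values)

def surj(values):
    """Surjunction: true if and only if exactly one value is untrue."""
    return sum(1 for v in values if not v) == 1

print(conj([True, True, True]))   # True: every Q_j is true
print(surj([True, False, True]))  # True: just one Q_j is untrue
print(conj([]), surj([]))         # True False: the empty cases
```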
 
 
 
If the number of propositions that are being joined together is finite,
 
then the conjunction and the surjunction can be represented by means of
 
sentential connectives, incorporating the sentences that represent these
 
propositions into finite strings of symbols.
 
 
 
If J is finite, for instance, if J constitutes the interval j = 1 to k,
 
and if each proposition Q_j is represented by a sentence S_j, then the
 
following strategies of expression are open:
 
 
 
1.  The conjunction Conj^J_j Q_j can be represented by a sentence that
 
    is constructed by concatenating the S_j in the following fashion:
 
 
 
    Conj^J_j Q_j  <-<  S_1 S_2 ... S_k.
 
 
 
2.  The surjunction Surj^J_j Q_j can be represented by a sentence that
 
    is constructed by surcatenating the S_j in the following fashion:
 
 
 
    Surj^J_j Q_j  <-<  -(S_1, S_2, ..., S_k)-.
 
 
 
If one opts for a mode of interpretation that moves more directly from
 
the parse graph of a sentence to the potential logical meaning of both
 
the PARC and the PARCE, then the following specifications are in order:
 
 
 
A cactus rooted at a particular node is taken to represent what that
 
node denotes, its logical denotation or its logical interpretation.
 
 
 
1.  The logical denotation of a node is the logical conjunction of that node's
 
    "arguments", which are defined as the logical denotations of that node's
 
    attachments.  The logical denotation of either a blank symbol or an empty
 
    node is the boolean value %1% = "true".  The logical denotation of the
 
    paint p_j is the proposition P_j, a proposition that is regarded as
 
    "primitive", at least, with respect to the level of analysis that
 
    is represented in the current instance of !C!(!P!).
 
 
 
2.  The logical denotation of a lobe is the logical surjunction of that lobe's
 
    "arguments", which are defined as the logical denotations of that lobe's
 
    accoutrements.  As a corollary, the logical denotation of the parse graph
 
    of "-()-", otherwise called a "needle", is the boolean value %0% = "false".
 
 
 
If one takes the point of view that PARC's and PARCE's amount to a
 
pair of intertranslatable languages for the same domain of objects,
 
then the "spiny bracket" notation, as in "-[C_j]-" or "-[S_j]-",
 
can be used on either domain of signs to indicate the logical
 
denotation of a cactus C_j or the logical denotation of
 
a sentence S_j, respectively.
 
 
 
Tables 13.1 and 13.2 summarize the relations that serve to connect the
 
formal language of sentences with the logical language of propositions.
 
Between these two realms of expression there is a family of graphical
 
data structures that arise in parsing the sentences and that serve to
 
facilitate the performance of computations on the indicator functions.
 
The graphical language supplies an intermediate form of representation
 
between the formal sentences and the indicator functions, and the form
 
of mediation that it provides is very useful in rendering the possible
 
connections between the other two languages conceivable in fact, not to
 
mention in carrying out the necessary translations on a practical basis.
 
These Tables include this intermediate domain in their Central Columns.
 
Between their First and Middle Columns they illustrate the mechanics of
 
parsing the abstract sentences of the cactus language into the graphical
 
data structures of the corresponding species.  Between their Middle and
 
Final Columns they summarize the semantics of interpreting the graphical
 
forms of representation for the purposes of reasoning with propositions.
 
 
 
Table 13.1  Semantic Translations:  Functional Form
 
o-------------------o-----o-------------------o-----o-------------------o
 
|                  | Par |                  | Den |                  |
 
| Sentence          | --> | Graph            | --> | Proposition      |
 
o-------------------o-----o-------------------o-----o-------------------o
 
|                  |    |                  |    |                  |
 
| S_j              | --> | C_j              | --> | Q_j              |
 
|                  |    |                  |    |                  |
 
o-------------------o-----o-------------------o-----o-------------------o
 
|                  |    |                  |    |                  |
 
| Conc^0            | --> | Node^0            | --> | %1%              |
 
|                  |    |                  |    |                  |
 
| Conc^k_j  S_j    | --> | Node^k_j  C_j    | --> | Conj^k_j  Q_j    |
 
|                  |    |                  |    |                  |
 
o-------------------o-----o-------------------o-----o-------------------o
 
|                  |    |                  |    |                  |
 
| Surc^0            | --> | Lobe^0            | --> | %0%              |
 
|                  |    |                  |    |                  |
 
| Surc^k_j  S_j    | --> | Lobe^k_j  C_j    | --> | Surj^k_j  Q_j    |
 
|                  |    |                  |    |                  |
 
o-------------------o-----o-------------------o-----o-------------------o
 
 
 
Table 13.2  Semantic Translations:  Equational Form
 
o-------------------o-----o-------------------o-----o-------------------o
 
|                  | Par |                  | Den |                  |
 
| -[Sentence]-      |  =  | -[Graph]-        |  =  | Proposition      |
 
o-------------------o-----o-------------------o-----o-------------------o
 
|                  |    |                  |    |                  |
 
| -[S_j]-          |  =  | -[C_j]-          |  =  | Q_j              |
 
|                  |    |                  |    |                  |
 
o-------------------o-----o-------------------o-----o-------------------o
 
|                  |    |                  |    |                  |
 
| -[Conc^0]-        |  =  | -[Node^0]-        |  =  | %1%              |
 
|                  |    |                  |    |                  |
 
| -[Conc^k_j  S_j]- |  =  | -[Node^k_j  C_j]- | = | Conj^k_j  Q_j    |
 
|                  |    |                  |    |                  |
 
o-------------------o-----o-------------------o-----o-------------------o
 
|                  |    |                  |    |                  |
 
| -[Surc^0]-        |  =  | -[Lobe^0]-        |  =  | %0%              |
 
|                  |    |                  |    |                  |
 
| -[Surc^k_j  S_j]- |  =  | -[Lobe^k_j  C_j]- |  =  | Surj^k_j  Q_j    |
 
|                  |    |                  |    |                  |
 
o-------------------o-----o-------------------o-----o-------------------o
 
 
 
Aside from their common topic, the two Tables present slightly different
 
ways of conceptualizing the operations that go to establish their maps.
 
Table 13.1 records the functional associations that connect each domain
 
with the next, taking the triplings of a sentence S_j, a cactus C_j, and
 
a proposition Q_j as basic data, and fixing the rest by recursion on these.
 
Table 13.2 records these associations in the form of equations, treating
 
sentences and graphs as alternative kinds of signs, and generalizing the
 
spiny bracket operator to indicate the proposition that either denotes.
 
It should be clear at this point that either scheme of translation puts
 
the sentences, the graphs, and the propositions that it associates with
 
each other roughly in the roles of the signs, the interpretants, and the
 
objects, respectively, whose triples define an appropriate sign relation.
 
Indeed, the "roughly" can be made "exactly" as soon as the domains of
 
a suitable sign relation are specified precisely.
 
 
 
A good way to illustrate the action of the conjunction and surjunction
 
operators is to demonstrate how they can be used to construct all of the
 
boolean functions on k variables, just now, let us say, for k = 0, 1, 2.
 
 
 
A boolean function on 0 variables is just a boolean constant F^0 in the
 
boolean domain %B% = {%0%, %1%}.  Table 14 shows several different ways
 
of referring to these elements, just for the sake of consistency using
 
the same format that will be used in subsequent Tables, no matter how
 
degenerate it tends to appear in the immediate case:
 
 
 
Column 1 lists each boolean element or boolean function under its
 
ordinary constant name or under a succinct nickname, respectively.
 
 
 
Column 2 lists each boolean function in a style of function name "F^i_j"
 
that is constructed as follows:  The superscript "i" gives the dimension
 
of the functional domain, that is, the number of its functional variables,
 
and the subscript "j" is a binary string that recapitulates the functional
 
values, using the obvious translation of boolean values into binary values.
 
 
 
Column 3 lists the functional values for each boolean function, or possibly
 
a boolean element appearing in the guise of a function, for each combination
 
of its domain values.
 
 
 
Column 4 shows the usual expressions of these elements in the cactus language,
 
conforming to the practice of omitting the strike-throughs in display formats.
 
Here I illustrate also the useful convention of sending the expression "(())"
 
as a visible stand-in for the expression of a constantly "true" truth value,
 
one that would otherwise be represented by a blank expression, and tend to
 
elude our giving it much notice in the context of more demonstrative texts.
 
 
 
Table 14.  Boolean Functions on Zero Variables
 
o----------o----------o-------------------------------------------o----------o
 
| Constant | Function |                    F()                    | Function |
 
o----------o----------o-------------------------------------------o----------o
 
|          |          |                                          |          |
 
| %0%      | F^0_0    |                    %0%                    |    ()    |
 
|          |          |                                          |          |
 
| %1%      | F^0_1    |                    %1%                    |  (())  |
 
|          |          |                                          |          |
 
o----------o----------o-------------------------------------------o----------o
 
 
 
Table 15 presents the boolean functions on one variable, F^1 : %B% -> %B%,
 
of which there are precisely four.  Here, Column 1 codes the contents of
 
Column 2 in a more concise form, compressing the lists of boolean values,
 
recorded as bits in the subscript string, into their decimal equivalents.
 
Naturally, the boolean constants reprise themselves in this new setting
 
as constant functions on one variable.  Thus, one has the synonymous
 
expressions for constant functions that are expressed in the next
 
two chains of equations:
 
 
 
| F^1_0  =  F^1_00  =  %0% : %B% -> %B%
 
|
 
| F^1_3  =  F^1_11  =  %1% : %B% -> %B%
 
 
 
As for the rest, the other two functions are easily recognized as corresponding
 
to the one-place logical connectives, or the monadic operators on %B%.  Thus,
 
the function F^1_1  =  F^1_01 is recognizable as the negation operation, and
 
the function F^1_2  =  F^1_10 is obviously the identity operation.
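The subscript convention can be mechanized in a few lines.  The sketch below (the helper "f1" is hypothetical) decodes a one-variable boolean function from its bit string, reading the bits in the same descending order of arguments that Table 16 uses, so that "01" yields the negation and "10" the identity:

```python
def f1(bits):
    """Boolean function on one variable, decoded from its subscript
    string: bits[0] is the value at %1% and bits[1] the value at %0%,
    following the descending argument order of Table 16."""
    table = {True: bits[0] == "1", False: bits[1] == "1"}
    return lambda x: table[x]

negation = f1("01")   # F^1_1
identity = f1("10")   # F^1_2
print(negation(True), negation(False))   # False True
print(identity(True), identity(False))   # True False
```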
 
 
 
Table 15.  Boolean Functions on One Variable
 
o----------o----------o-------------------------------------------o----------o
 
| Function | Function |                  F(x)                    | Function |
 
o----------o----------o---------------------o---------------------o----------o
 
|          |          |      F(%1%)        |      F(%0%)        |          |
 
o----------o----------o---------------------o---------------------o----------o
 
|          |          |                    |                    |          |
 
| F^1_0    | F^1_00  |        %0%        |        %0%        |  ( )    |
 
|          |          |                    |                    |          |
 
| F^1_1    | F^1_01  |        %0%        |        %1%        |  (x)    |
 
|          |          |                    |                    |          |
 
| F^1_2    | F^1_10  |        %1%        |        %0%        |    x    |
 
|          |          |                    |                    |          |
 
| F^1_3    | F^1_11  |        %1%        |        %1%        |  (( ))  |
 
|          |          |                    |                    |          |
 
o----------o----------o---------------------o---------------------o----------o
 
 
 
Table 16 presents the boolean functions on two variables, F^2 : %B%^2 -> %B%,
 
of which there are precisely sixteen.  As before, all of the boolean
 
functions of fewer variables are subsumed in this Table, though under a set of
 
alternative names and possibly different interpretations.  Just to acknowledge
 
a few of the more notable pseudonyms:
 
 
 
The constant function %0% : %B%^2 -> %B% appears under the name of F^2_00.
 
 
 
The constant function %1% : %B%^2 -> %B% appears under the name of F^2_15.
 
 
 
The negation and identity of the first variable are F^2_03 and F^2_12, resp.
 
 
 
The negation and identity of the other variable are F^2_05 and F^2_10, resp.
 
 
 
The logical conjunction is given by the function F^2_08 (x, y)  =  x · y.
 
 
 
The logical disjunction is given by the function F^2_14 (x, y)  = ((x)(y)).
 
 
 
Functions expressing the "conditionals", "implications",
 
or "if-then" statements are given in the following ways:
 
 
 
[x => y]  =  F^2_11 (x, y)  =  (x (y))  =  [not x without y].
 
 
 
[x <= y]  =  F^2_13 (x, y)  =  ((x) y)  =  [not y without x].
 
 
 
The function that corresponds to the "biconditional",
 
the "equivalence", or the "if and only" statement is
 
exhibited in the following fashion:
 
 
 
[x <=> y]  =  [x = y]  =  F^2_09 (x, y)  =  ((x , y)).
 
 
 
Finally, there is a boolean function that is logically associated with
 
the "exclusive disjunction", "inequivalence", or "not equals" statement,
 
algebraically associated with the "binary sum" or "bitsum" operation,
 
and geometrically associated with the "symmetric difference" of sets.
 
This function is given by:
 
 
 
[x =/= y]  =  [x + y]  =  F^2_06 (x, y)  =  (x , y).
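The claimed agreement between the surjunction -(x, y)- and the bitsum can be checked by brute force over %B%^2, as in this sketch:

```python
def surc2(x, y):
    """-(x, y)-: true if and only if just one of x, y is untrue."""
    return (not x) != (not y)

# Compare with the binary sum x + y over all four argument pairs.
for x in (False, True):
    for y in (False, True):
        assert surc2(x, y) == ((int(x) + int(y)) % 2 == 1)
print("-(x, y)- agrees with the bitsum on all of %B%^2")
```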
 
 
 
Table 16.  Boolean Functions on Two Variables
 
o----------o----------o-------------------------------------------o----------o
 
| Function | Function |                  F(x, y)                  | Function |
 
o----------o----------o----------o----------o----------o----------o----------o
 
|          |          | %1%, %1% | %1%, %0% | %0%, %1% | %0%, %0% |          |
 
o----------o----------o----------o----------o----------o----------o----------o
 
|          |          |          |          |          |          |          |
 
| F^2_00  | F^2_0000 |  %0%    |  %0%    |  %0%    |  %0%    |    ()    |
 
|          |          |          |          |          |          |          |
 
| F^2_01  | F^2_0001 |  %0%    |  %0%    |  %0%    |  %1%    |  (x)(y)  |
 
|          |          |          |          |          |          |          |
 
| F^2_02  | F^2_0010 |  %0%    |  %0%    |  %1%    |  %0%    |  (x) y  |
 
|          |          |          |          |          |          |          |
 
| F^2_03  | F^2_0011 |  %0%    |  %0%    |  %1%    |  %1%    |  (x)    |
 
|          |          |          |          |          |          |          |
 
| F^2_04  | F^2_0100 |  %0%    |  %1%    |  %0%    |  %0%    |  x (y)  |
 
|          |          |          |          |          |          |          |
 
| F^2_05  | F^2_0101 |  %0%    |  %1%    |  %0%    |  %1%    |    (y)  |
 
|          |          |          |          |          |          |          |
 
| F^2_06  | F^2_0110 |  %0%    |  %1%    |  %1%    |  %0%    |  (x, y)  |
 
|          |          |          |          |          |          |          |
 
| F^2_07  | F^2_0111 |  %0%    |  %1%    |  %1%    |  %1%    |  (x  y)  |
 
|          |          |          |          |          |          |          |
 
| F^2_08  | F^2_1000 |  %1%    |  %0%    |  %0%    |  %0%    |  x  y  |
 
|          |          |          |          |          |          |          |
 
| F^2_09  | F^2_1001 |  %1%    |  %0%    |  %0%    |  %1%    | ((x, y)) |
 
|          |          |          |          |          |          |          |
 
| F^2_10  | F^2_1010 |  %1%    |  %0%    |  %1%    |  %0%    |      y  |
 
|          |          |          |          |          |          |          |
 
| F^2_11  | F^2_1011 |  %1%    |  %0%    |  %1%    |  %1%    |  (x (y)) |
 
|          |          |          |          |          |          |          |
 
| F^2_12  | F^2_1100 |  %1%    |  %1%    |  %0%    |  %0%    |  x      |
 
|          |          |          |          |          |          |          |
 
| F^2_13  | F^2_1101 |  %1%    |  %1%    |  %0%    |  %1%    | ((x) y)  |
 
|          |          |          |          |          |          |          |
 
| F^2_14  | F^2_1110 |  %1%    |  %1%    |  %1%    |  %0%    | ((x)(y)) |
 
|          |          |          |          |          |          |          |
 
| F^2_15  | F^2_1111 |  %1%    |  %1%    |  %1%    |  %1%    |  (())  |
 
|          |          |          |          |          |          |          |
 
o----------o----------o----------o----------o----------o----------o----------o
 
 
 
Let me now address one last question that may have occurred to some.
 
What has happened, in this suggested scheme of functional reasoning,
 
to the distinction that is quite pointedly made by careful logicians
 
between (1) the connectives called "conditionals" and symbolized by
 
the signs "->" and "<-", and (2) the assertions called "implications"
 
and symbolized by the signs "=>" and "<=", and, in a related question:
 
What has happened to the distinction that is equally insistently made
 
between (3) the connective called the "biconditional" and signified by
 
the sign "<->" and (4) the assertion that is called an "equivalence"
 
and signified by the sign "<=>"?  My answer is this:  For my part,
 
I am deliberately avoiding making these distinctions at the level
 
of syntax, preferring to treat them instead as distinctions in
 
the use of boolean functions, turning on whether the function
 
is mentioned directly and used to compute values on arguments,
 
or whether its inverse is being invoked to indicate the fibers
 
of truth or untruth under the propositional function in question.
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
In this Subsection, I finally bring together many of what may
 
have appeared to be wholly independent threads of development,
 
in the hope of paying off a percentage of my promissory notes,
 
even if a goodly number of my creditors have no doubt long since
 
forgotten, if not exactly forgiven the debentures in question.
 
 
 
For ease of reference, I repeat here a couple of the
 
definitions that are needed again in this discussion.
 
 
 
| A "boolean connection" of degree k, also known as a "boolean function"
 
| on k variables, is a map of the form F : %B%^k -> %B%.  In other words,
 
| a boolean connection of degree k is a proposition about things in the
 
| universe of discourse X = %B%^k.
 
 
|
 
|
| An "imagination" of degree k on X is a k-tuple of propositions
+
<math>\begin{array}{lll}
| about things in the universe X.  By way of displaying the kinds
+
S_1
| of notation that are used to express this idea, the imagination
+
& :>
| #f# = <f_1, ..., f_k> is can be given as a sequence of indicator
+
& S_2
| functions f_j : X -> %B%, for j = 1 to k.  All of these features
+
\\
| of the typical imagination #f# can be summed up in either one of
+
Q_1 \cdot q \cdot Q_2
| two ways:  either in the form of a membership statement, stating
+
& :>
| words to the effect that #f# belongs to the space (X -> %B%)^k,
+
& Q_1 \cdot W \cdot Q_2
| or in the form of the type declaration that #f# : (X -> %B%)^k,
+
\\
| though perhaps the latter specification is slightly more precise
+
\end{array}</math>
| than the former.
+
|}
 
 
The definition of the "stretch" operation and the uses of the
 
various brands of denotational operators can be reviewed here:
 
 
 
055.  http://suo.ieee.org/email/msg07466.html
 
057.  http://suo.ieee.org/email/msg07469.html
 
  
070.  http://suo.ieee.org/ontology/msg03473.html
071.  http://suo.ieee.org/ontology/msg03479.html
 
</pre>
 
  
==Stretching Exercises==
  
<pre>
Taking up the preceding arrays of particular connections, namely,
 
the boolean functions on two or fewer variables, it is possible to
 
illustrate the use of the stretch operation in a variety of
 
concrete cases.
 
 
 
For example, suppose that F is a connection of the form F : %B%^2 -> %B%,
 
that is, any one of the sixteen possibilities in Table 16, while p and q
 
are propositions of the form p, q : X -> %B%, that is, propositions about
 
things in the universe X, or else the indicators of sets contained in X.
 
 
 
Then one has the imagination #f# = <f_1, f_2> = <p, q> : (X -> %B%)^2,
 
and the stretch of the connection F to #f# on X amounts to a proposition
 
F^$ <p, q> : X -> %B%, usually written as "F^$ (p, q)" and vocalized as
 
the "stretch of F to p and q".  If one is concerned with many different
 
propositions about things in X, or if one is abstractly indifferent to
 
the particular choices for p and q, then one can detach the operator
 
F^$ : (X -> %B%)^2 -> (X -> %B%), called the "stretch of F over X",
 
and consider it in isolation from any concrete application.
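A minimal sketch of the detached operator, under the obvious functional reading and with illustrative propositions of my own devising, might run as follows:

```python
def stretch(F):
    """Lift a connection F on %B%^k to an operator on k-tuples of
    propositions, giving the stretch F^$ over the universe X."""
    return lambda *props: (lambda x: F(*(pr(x) for pr in props)))

# Illustrative propositions about natural numbers (my own examples).
p = lambda n: n % 2 == 0      # "n is even"
q = lambda n: n > 4           # "n exceeds four"
conj2 = lambda x, y: x and y  # the connection F^2_08

even_and_large = stretch(conj2)(p, q)
print(even_and_large(6))      # True
print(even_and_large(3))      # False
```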
 
 
 
When the "cactus notation" is used to represent boolean functions,
 
a single "$" sign at the end of the expression is enough to remind
 
a reader that the connections are meant to be stretched to several
 
propositions on a universe X.
 
 
 
For instance, take the connection F : %B%^2 -> %B% such that:
 
 
 
F(x, y)  =  F^2_06 (x, y)  =  -(x, y)-.
 
 
 
This connection is the boolean function on a couple of variables x, y
 
that yields a value of %1% if and only if just one of x, y is not %1%,
 
that is, if and only if just one of x, y is %1%.  There is clearly an
 
isomorphism between this connection, viewed as an operation on the
 
boolean domain %B% = {%0%, %1%}, and the dyadic operation on binary
 
values x, y in !B! = GF(2) that is otherwise known as "x + y".
 
 
 
The same connection F : %B%^2 -> %B% can also be read as a proposition
 
about things in the universe X = %B%^2.  If S is a sentence that denotes
 
the proposition F, then the corresponding assertion says exactly what one
 
otherwise states by uttering "x is not equal to y".  In such a case, one
 
has -[S]- = F, and all of the following expressions are ordinarily taken
 
as equivalent descriptions of the same set:
 
 
 
[| -[S]- |]  =  [| F |]
 
 
 
            =  F^(-1)(%1%)
 
 
 
            =  {<x, y> in %B%^2  :  S}
 
 
 
            =  {<x, y> in %B%^2  :  F(x, y) = %1%}
 
 
 
            =  {<x, y> in %B%^2  :  F(x, y)}
 
 
 
            =  {<x, y> in %B%^2  :  -(x, y)- = %1%}
 
 
 
            =  {<x, y> in %B%^2  :  -(x, y)- }
 
 
 
            =  {<x, y> in %B%^2  :  x exclusive-or y}
 
 
 
            =  {<x, y> in %B%^2  :  just one true of x, y}
 
 
 
            =  {<x, y> in %B%^2  :  x not equal to y}
 
 
 
            =  {<x, y> in %B%^2  :  x <=/=> y}
 
 
 
            =  {<x, y> in %B%^2  :  x =/= y}
 
 
 
            =  {<x, y> in %B%^2  :  x + y}.
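The fiber itself is small enough to compute outright.  The following sketch enumerates %B%^2, here modeled by Python booleans, and collects the points where -(x, y)- holds:

```python
from itertools import product

F = lambda x, y: x != y   # the connection F(x, y) = -(x, y)-

# [| F |] = F^(-1)(%1%), the set of points where F holds.
fiber = {(x, y) for x, y in product((False, True), repeat=2) if F(x, y)}
print(sorted(fiber))      # [(False, True), (True, False)]
```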
 
 
 
Notice the slight distinction, that I continue to maintain at this point,
 
between the logical values {false, true} and the algebraic values {0, 1}.
 
This makes it legitimate to write a sentence directly into the right side
 
of the set-builder expression, for instance, weaving the sentence S or the
 
sentence "x is not equal to y" into the context "{<x, y> in %B%^2 : ... }",
 
thereby obtaining the corresponding expressions listed above, while the
 
proposition F(x, y) can also be asserted more directly without equating
 
it to %1%, since it already has a value in {false, true}, and thus can
 
be taken as tantamount to an actual sentence.
 
 
 
If the appropriate safeguards can be kept in mind, avoiding all danger of
 
confusing propositions with sentences and sentences with assertions, then
 
the marks of these distinctions need not be forced to clutter the account
 
of the more substantive indications, that is, the ones that really matter.
 
If this level of understanding can be achieved, then it may be possible
 
to relax these restrictions, along with the absolute dichotomy between
 
algebraic and logical values, which tends to inhibit the flexibility
 
of interpretation.
 
 
 
This covers the properties of the connection F(x, y) = -(x, y)-,
 
treated as a proposition about things in the universe X = %B%^2.
 
Staying with this same connection, it is time to demonstrate how
 
it can be "stretched" into an operator on arbitrary propositions.
 
 
 
To continue the exercise, let p and q be arbitrary propositions about
 
things in the universe X, that is, maps of the form p, q : X -> %B%,
 
and suppose that p, q are indicator functions of the sets P, Q c X,
 
respectively.  In other words, one has the following set of data:
 
 
 
|  p    =        -{P}-        :  X -> %B%
|
|  q    =        -{Q}-        :  X -> %B%
|
| <p, q>  =  < -{P}- , -{Q}- > :  (X -> %B%)^2
  
Then one has an operator F^$, the stretch of the connection F over X,
and a proposition F^$ (p, q), the stretch of F to <p, q> on X, with
 
the following properties:
 
  
| F^$        =  -( , )-^$  :  (X -> %B%)^2 -> (X -> %B%)
|
 
| F^$ (p, q)  =  -(p, q)-^$  :   X -> %B%
 
  
As a result, the application of the proposition F^$ (p, q) to each x in X
+
{| align="center" cellpadding="8" width="90%"
yields a logical value in %B%, all in accord with the following equations:
+
| <math>W ::> W'.\!</math>
 +
|}
  
| F^$ (p, q)(x)   =  -(p, q)-^$ (x)  in %B%
+
A ''derivation'' in <math>\mathfrak{G}</math> is a finite sequence <math>(W_1, \ldots, W_k)\!</math> of sentential forms over <math>\mathfrak{G}</math> such that each adjacent pair <math>(W_j, W_{j+1})\!</math> of sentential forms in the sequence is an immediate derivation in <math>\mathfrak{G},</math> in other words, such that:
|
 
|  ^                        ^
 
|  |                        |
 
|  =                        =
 
|  |                        |
 
|  v                        v
 
|
 
| F(p(x), q(x))  =  -(p(x), q(x))-  in  %B%
 
  
For each choice of propositions p and q about things in X, the stretch of
+
{| align="center" cellpadding="8" width="90%"
F to p and q on X is just another proposition about things in X, a simple
+
| <math>W_j ::> W_{j+1},\ \text{for all}\ j = 1\ \text{to}\ k - 1.</math>
proposition in its own right, no matter how complex its current expression
+
|}
or its present construction as F^$ (p, q) = -(p, q)^$ makes it appear in
 
relation to p and q. Like any other proposition about things in X, it
 
indicates a subset of X, namely, the fiber that is variously described
 
in the following ways:
 
  
[| F^$ (p, q) |]  =  [| -(p, q)-^$ |]
+
If there exists a derivation <math>(W_1, \ldots, W_k)\!</math> in <math>\mathfrak{G},</math> one says that <math>W_1\!</math> ''derives'' <math>W_k\!</math> in <math>\mathfrak{G}</math> or that <math>W_k\!</math> is ''derivable'' from <math>W_1\!</math> in <math>\mathfrak{G},</math> and one
 +
typically summarizes the derivation by writing:
  
                  = (F^$ (p, q))^(-1)(%1%)
+
{| align="center" cellpadding="8" width="90%"
 +
| <math>W_1 :\!*\!:> W_k.\!</math>
 +
|}
  
                  = {x in X  :  F^$ (p, q)(x)}
+
The language <math>\mathfrak{L} = \mathfrak{L} (\mathfrak{G}) = \langle \mathfrak{G} \rangle</math> that is ''generated'' by the formal grammar <math>\mathfrak{G} = ( \, ^{\backprime\backprime} S \, ^{\prime\prime}, \, \mathfrak{Q}, \, \mathfrak{A}, \, \mathfrak{K} \, )</math> is the set of strings over the terminal alphabet <math>\mathfrak{A}</math> that are derivable from the initial symbol <math>^{\backprime\backprime} S \, ^{\prime\prime}</math> by way of the intermediate symbols in <math>\mathfrak{Q}</math> according to the characterizations in <math>\mathfrak{K}.</math>  In sum:
  
                  = {x in : -(p, q)-^$ (x)}
+
{| align="center" cellpadding="8" width="90%"
 +
| <math>\mathfrak{L} (\mathfrak{G}) \ = \ \langle \mathfrak{G} \rangle \ = \ \{ \, W \in \mathfrak{A}^* \, : \, ^{\backprime\backprime} S \, ^{\prime\prime} \, :\!*\!:> \, W \, \}.</math>
 +
|}
  
                  =  {x in X  :  -(p(x), q(x))- }
+
Finally, a string <math>W\!</math> is called a ''word'', a ''sentence'', or so on, of the language generated by <math>\mathfrak{G}</math> if and only if <math>W\!</math> is in <math>\mathfrak{L} (\mathfrak{G}).</math>
  
                  = {x in X  : p(x) ± q(x)}
+
===The Cactus Language : Stylistics===
  
                  =  {x in X  :  p(x) =/= q(x)}
+
{| align="center" cellpadding="0" cellspacing="0" width="90%"
 
 
                  =  {x in X  :  -{P}- (x) =/= -{Q}- (x)}
 
 
 
                  =  {x in X  :  x in P <=/=> x in Q}
 
 
 
                  =  {x in X  :  x in P-Q or x in Q-P}
 
 
 
                  =  {x in X  :  x in P-Q |_| Q-P}
 
 
 
                  =  {x in X  :  x in P ± Q}
 
 
 
                  =  P ± Q          c  X
 
 
 
                  =  [|p|] ± [|q|]  c  X.
 
 
 
Which was to be shown.
 
</pre>
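The derivation relations described above can be made executable in miniature.  The following Python sketch uses a hypothetical toy grammar (nested parentheses, not the cactus grammar itself), chosen only to illustrate immediate derivation W ::> W' and derivability S :*:> W:

```python
# A toy formal grammar G = ("S", Q, A, K): one intermediate symbol "S",
# terminal alphabet {"(", ")"}, and characterizations K given as
# string-rewrite rules.  This grammar is an illustrative assumption.

RULES = {                 # K, as replacements for the intermediate symbol
    "S": ["", "(S)"],     # S -> empty string,  S -> ( S )
}

def immediate_derivations(w):
    """All W' with W ::> W': replace one occurrence of a nonterminal."""
    out = []
    for nt, rhss in RULES.items():
        i = w.find(nt)
        while i != -1:
            for rhs in rhss:
                out.append(w[:i] + rhs + w[i + len(nt):])
            i = w.find(nt, i + 1)
    return out

def derivable(start="S", max_steps=4):
    """Strings reachable from the initial symbol in <= max_steps steps."""
    seen, frontier = {start}, [start]
    for _ in range(max_steps):
        frontier = [w2 for w in frontier for w2 in immediate_derivations(w)]
        seen.update(frontier)
    return seen

# L(G) = { W in A* : S :*:> W } -- the derivable strings with no
# intermediate symbols left, here the nested-parenthesis strings.
language = {w for w in derivable() if "S" not in w}
assert "" in language and "()" in language and "(())" in language
```

Each sentence of this toy language is the last term of some derivation (W_1, ..., W_k) beginning at the initial symbol, exactly as in the definitions above.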
 
 
 
==References==
 
 
 
* Bernstein, Herbert J. (1987), "Idols of Modern Science and The Reconstruction of Knowledge", pp. 37-68 in Marcus G. Raskin and Herbert J. Bernstein, ''New Ways of Knowing : The Sciences, Society, and Reconstructive Knowledge'', Rowman and Littlefield, Totowa, NJ.
 
 
 
* Nietzsche, Friedrich, ''Beyond Good and Evil : Prelude to a Philosophy of the Future'', R.J. Hollingdale (trans.), Michael Tanner (intro.), Penguin Books, London, UK, 1973, 1990.
 
 
 
* Raskin, Marcus G., and Bernstein, Herbert J. (1987, eds.), ''New Ways of Knowing : The Sciences, Society, and Reconstructive Knowledge'', Rowman and Littlefield, Totowa, NJ.
 
 
 
==Document History==
 
 
 
<pre>
 
| Subject:  Inquiry Driven Systems : An Inquiry Into Inquiry
 
| Contact:  Jon Awbrey <jawbrey@oakland.edu>
 
| Version:  Draft 8.70
 
| Created:  23 Jun 1996
 
| Revised:  06 Jan 2002
 
| Advisor:  M.A. Zohdy
 
| Setting:  Oakland University, Rochester, Michigan, USA
 
| Excerpt:  Section 1.3.10 (Recurring Themes)
 
| Excerpt:  Subsections 1.3.10.8 - 1.3.10.13
 
</pre>
 
 
 
==Notes Found in a Cactus Patch==
 
 
 
===Cactus Language===
 
 
 
<pre>
 
Table 13 illustrates the "existential interpretation"
 
of cactus graphs and cactus expressions by providing
 
English translations for a few of the most basic and
 
commonly occurring forms.
 
 
 
Even though I do most of my thinking in the existential interpretation,
 
I will continue to speak of these forms as "logical graphs", because
 
I think it is an important fact about them that the formal validity
 
of the axioms and theorems is not dependent on the choice between
 
the entitative and the existential interpretations.
 
 
 
The first extension is the "reflective extension of logical graphs" (RefLog).
 
It is obtained by generalizing the negation operator "(_)" in a certain way,
 
calling "(_)" the "controlled", "moderated", or "reflective" negation operator
 
of order 1, then adding another such operator for each finite k = 2, 3, ... .
 
In sum, these operators are symbolized by bracketed argument lists as follows:
 
"(_)", "(_,_)", "(_,_,_)", ..., where the number of slots is the order of the
 
reflective negation operator in question.
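Read as the notes later specify under the existential interpretation ("just one of the k arguments is false"), the order-k reflective negation operator can be sketched as a Python predicate.  This is an illustration of the stated reading, not the author's formalism:

```python
# The reflective negation operator of order k, written "(x_1, ..., x_k)",
# read in the existential interpretation as asserting that exactly one
# of its k arguments is false.

def reflective_negation(*args: bool) -> bool:
    """Existential reading of (x_1, ..., x_k): just one argument is false."""
    return sum(1 for x in args if not x) == 1

# Order 1: (x) is ordinary negation -- true iff the single argument is false.
assert reflective_negation(False) is True
assert reflective_negation(True) is False

# Order 2: (x, y) is true iff exactly one of x, y is false, i.e. x != y.
for x in (False, True):
    for y in (False, True):
        assert reflective_negation(x, y) == (x != y)
```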
 
             
 
The cactus graph and the cactus expression
 
shown here are both described as a "spike".
 
 
 
o---------------------------------------o
 
|                                      |
 
|                  o                  |
 
|                  |                  |
 
|                  @                  |
 
|                                      |
 
o---------------------------------------o
 
|                  ( )                  |
 
o---------------------------------------o
 
 
 
The rule of reduction for a lobe is:
 
 
 
    x_1  x_2  ...  x_k
 
    o-----o--- ... ---o
 
      \              /
 
      \            /
 
        \          /
 
        \        /
 
          \      /
 
          \    /
 
            \  /
 
            \ /
 
              @            =     @
 
 
 
if and only if exactly one of the x_j is a spike.
 
 
 
In Ref Log, an expression of the form "(( e_1 ),( e_2 ),( ... ),( e_k ))"
 
expresses the fact that "exactly one of the e_j is true, for j = 1 to k".
 
Expressions of this form are called "universal partition" expressions, and
 
they parse into a type of graph called a "painted and rooted cactus" (PARC):
 
 
 
    e_1  e_2  ...  e_k
 
    o    o          o
 
    |    |          |
 
    o-----o--- ... ---o
 
      \              /
 
      \            /
 
        \          /
 
        \        /
 
          \      /
 
          \    /
 
            \  /
 
            \ /
 
              @
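The universal partition reading can be checked mechanically.  The following sketch assumes the existential semantics of the lobe operator, where "(x_1, ..., x_k)" asserts that exactly one argument is false:

```python
# The "universal partition" form (( e_1 ),( e_2 ), ...,( e_k )):
# each (e_j) negates its argument, and the outer lobe demands exactly
# one false among the negations, hence exactly one true e_j.

def neg(x: bool) -> bool:          # the order-1 operator (x)
    return not x

def lobe(*args: bool) -> bool:     # the order-k operator (x_1, ..., x_k)
    return sum(1 for x in args if not x) == 1

def universal_partition(*es: bool) -> bool:
    # (( e_1 ),( e_2 ), ...,( e_k ))
    return lobe(*(neg(e) for e in es))

assert universal_partition(True, False, False) is True    # exactly one true
assert universal_partition(True, True, False) is False    # two true
assert universal_partition(False, False, False) is False  # none true
```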
 
 
 
 
 
| ( x1, x2, ..., xk )  =  [blank]
|
| iff
|
| Just one of the arguments x1, x2, ..., xk  =  ()

The interpretation of these operators, read as assertions
about the values of their listed arguments, is as follows:

1.  Existential Interpretation:  "Just one of the k arguments is false."

2.  Entitative Interpretation:   "Not just one of the k arguments is true."

| <p>As a result, we can hardly conceive of how many possibilities there are for what we call objective reality.  Our sharp quills of knowledge are so narrow and so concentrated in particular directions that with science there are myriads of totally different real worlds, each one accessible from the next simply by slight alterations &mdash; shifts of gaze &mdash; of every particular discipline and subspecialty.</p>
|-
| align="right" | &mdash; Herbert J. Bernstein, "Idols of Modern Science", [HJB, 38]
|}

This Subsection highlights an issue of ''style'' that arises in describing a formal language.  In broad terms, I use the word ''style'' to refer to a loosely specified class of formal systems, typically ones that have a set of distinctive features in common.  For instance, a style of proof system usually dictates one or more rules of inference that are acknowledged as conforming to that style.  In the present context, the word ''style'' is a natural choice to characterize the varieties of formal grammars, or any other sorts of formal systems that can be contemplated for deriving the sentences of a formal language.

In looking at what seems like an incidental issue, the discussion arrives at a critical point.  The question is:  What decides the issue of style?  Taking a given language as the object of discussion, what factors enter into and determine the choice of a style for its presentation, that is, a particular way of arranging and selecting the materials that come to be involved in a description, a grammar, or a theory of the language?  To what degree is the determination accidental, empirical, pragmatic, rhetorical, or stylistic, and to what extent is the choice essential, logical, and necessary?  For that matter, what determines the order of signs in a word, a sentence, a text, or a discussion?  All of the corresponding parallel questions about the character of this choice can be posed with regard to the constituent parts as well as with regard to the main constitution of the formal language.

In order to answer this sort of question, at any level of articulation, one has to inquire into the type of distinction that it invokes, between arrangements and orders that are essential, logical, and necessary and orders and arrangements that are accidental, rhetorical, and stylistic.  As a rough guide to its comprehension, a ''logical order'', if it resides in the subject at all, can be approached by considering all of the ways of saying the same things, in all of the languages that are capable of saying roughly the same things about that subject.  Of course, the ''all'' that appears in this rule of thumb has to be interpreted as a fittingly qualified sort of universal.  For all practical purposes, it simply means ''all of the ways that a person can think of'' and ''all of the languages that a person can conceive of'', with all things being relative to the particular moment of investigation.  For all of these reasons, the rule must stand as little more than a rough idea of how to approach its object.

If it is demonstrated that a given formal language can be presented in any one of several styles of formal grammar, then the choice of a format is accidental, optional, and stylistic to the very extent that it is free.  But if it can be shown that a particular language cannot be successfully presented in a particular style of grammar, then the issue of style is no longer free and rhetorical, but becomes to that very degree essential, necessary, and obligatory, in other words, a question of the objective logical order that can be found to reside in the object language.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

o-------------------o-------------------o-------------------o
|      Graph      |      String      |    Translation    |
 
o-------------------o-------------------o-------------------o
 
|                  |                  |                  |
 
|        @        |        " "        |      true.      |
 
o-------------------o-------------------o-------------------o
 
|                  |                  |                  |
 
|        o        |                  |                  |
 
|        |        |                  |                  |
 
|        @        |        ( )        |      untrue.      |
 
o-------------------o-------------------o-------------------o
 
|                  |                  |                  |
 
|        r        |                  |                  |
 
|        @        |        r        |        r.        |
 
o-------------------o-------------------o-------------------o
 
|                  |                  |                  |
 
|        r        |                  |                  |
 
|        o        |                  |                  |
 
|        |        |                  |                  |
 
|        @        |        (r)        |      not r.      |
 
o-------------------o-------------------o-------------------o
 
|                  |                  |                  |
 
|      r s t      |                  |                  |
 
|        @        |      r s t      |  r and s and t.  |
 
o-------------------o-------------------o-------------------o
 
|                  |                  |                  |
 
|      r s t      |                  |                  |
 
|      o o o      |                  |                  |
 
|        \|/        |                  |                  |
 
|        o        |                  |                  |
 
|        |        |                  |                  |
 
|        @        |    ((r)(s)(t))    |    r or s or t.  |
 
o-------------------o-------------------o-------------------o
 
|                  |                  |    r implies s.  |
 
|        r  s    |                  |                  |
 
|        o---o    |                  |    if r then s.  |
 
|        |        |                  |                  |
 
|        @        |      (r (s))      |    no r sans s.  |
 
o-------------------o-------------------o-------------------o
 
|                  |                  |                  |
 
|      r  s      |                  |                  |
 
|      o---o      |                  | r exclusive-or s. |
 
|        \ /        |                  |                  |
 
|        @        |      (r , s)      | r not equal to s. |
 
o-------------------o-------------------o-------------------o
 
|                  |                  |                  |
 
|      r  s      |                  |                  |
 
|      o---o      |                  |                  |
 
|        \ /        |                  |                  |
 
|        o        |                  | r if & only if s. |
 
|        |        |                  |                  |
 
|        @        |    ((r , s))    | r equates with s. |
 
o-------------------o-------------------o-------------------o
 
|                  |                  |                  |
 
|      r  s  t      |                  |                  |
 
|      o--o--o      |                  |                  |
 
|      \  /      |                  |                  |
 
|        \ /        |                  |  just one false  |
 
|        @        |    (r , s , t)    |  out of r, s, t.  |
 
o-------------------o-------------------o-------------------o
 
|                  |                  |                  |
 
|      r  s  t      |                  |                  |
 
|      o  o  o      |                  |                  |
 
|      |  |  |      |                  |                  |
 
|      o--o--o      |                  |                  |
 
|      \  /      |                  |                  |
 
|        \ /        |                  |  just one true  |
 
|        @        |  ((r),(s),(t))  |  among r, s, t.  |
 
o-------------------o-------------------o-------------------o
 
|                  |                  |  genus t over    |
 
|        r  s      |                  |  species r, s.  |
 
|        o  o      |                  |                  |
 
|      t  |  |      |                  |  partition t    |
 
|      o--o--o      |                  |  among r & s.    |
 
|      \  /      |                  |                  |
 
|        \ /        |                  |  whole pie t:    |
 
|        @        |  ( t ,(r),(s))  |  slices r, s.   |
 
o-------------------o-------------------o-------------------o
 
  
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

As a rough illustration of the difference between logical and rhetorical orders, consider the kinds of order that are expressed and exhibited in the following conjunction of implications:

{| align="center" cellpadding="8" width="90%"
| <math>X \Rightarrow Y\ \operatorname{and}\ Y \Rightarrow Z.</math>
|}

Table 13.  The Existential Interpretation
o-------------------o-------------------o-------------------o
|   Cactus Graph    | Cactus Expression |    Existential    |
|                   |                   |  Interpretation   |
 
o-------------------o-------------------o-------------------o
 
|                  |                  |                  |
 
|        @        |        " "       |      true.      |
 
|                  |                  |                  |
 
o-------------------o-------------------o-------------------o
 
|                  |                  |                  |
 
|        o        |                  |                  |
 
|        |        |                  |                  |
 
|        @        |        ( )        |      untrue.      |
 
|                  |                  |                  |
 
o-------------------o-------------------o-------------------o
 
|                  |                  |                  |
 
|        a        |                  |                  |
 
|        @        |        a        |        a.        |
 
|                  |                  |                  |
 
o-------------------o-------------------o-------------------o
 
|                  |                  |                  |
 
|        a        |                  |                  |
 
|        o        |                  |                  |
 
|        |        |                  |                  |
 
|        @        |        (a)        |      not a.      |
 
|                  |                  |                  |
 
o-------------------o-------------------o-------------------o
 
|                  |                  |                  |
 
|      a b c      |                  |                  |
 
|        @        |      a b c      |  a and b and c.  |
 
|                  |                  |                  |
 
o-------------------o-------------------o-------------------o
 
|                  |                  |                  |
 
|      a b c      |                  |                  |
 
|      o o o      |                  |                  |
 
|        \|/        |                  |                  |
 
|        o        |                  |                  |
 
|        |        |                  |                  |
 
|        @        |    ((a)(b)(c))    |    a or b or c.  |
 
|                  |                  |                  |
 
o-------------------o-------------------o-------------------o
 
|                  |                  |                  |
 
|                  |                  |    a implies b.  |
 
|        a  b    |                  |                  |
 
|        o---o    |                  |    if a then b.  |
 
|        |        |                  |                  |
 
|        @        |      (a (b))      |    no a sans b.  |
 
|                  |                  |                  |
 
o-------------------o-------------------o-------------------o
 
|                  |                  |                  |
 
|       a  b      |                  |                  |
 
|      o---o      |                  | a exclusive-or b. |
 
|        \ /        |                  |                  |
 
|        @        |      (a , b)      | a not equal to b. |
 
|                  |                  |                  |
 
o-------------------o-------------------o-------------------o
 
|                  |                  |                  |
 
|      a  b      |                  |                  |
 
|      o---o      |                  |                  |
 
|        \ /        |                  |                  |
 
|        o        |                  | a if & only if b. |
 
|        |        |                  |                  |
 
|        @        |    ((a , b))    | a equates with b. |
 
|                  |                  |                  |
 
o-------------------o-------------------o-------------------o
 
|                  |                  |                  |
 
|      a  b  c      |                  |                  |
 
|      o--o--o      |                  |                  |
 
|      \   /      |                  |                  |
 
|        \ /        |                  |  just one false  |
 
|        @        |    (a , b , c)    |  out of a, b, c.  |
 
|                  |                  |                  |
 
o-------------------o-------------------o-------------------o
 
|                  |                  |                  |
 
|      a  b  c      |                  |                  |
 
|      o  o  o      |                  |                  |
 
|      |  |  |      |                  |                  |
 
|      o--o--o      |                  |                  |
 
|      \   /      |                  |                  |
 
|        \ /        |                  |  just one true  |
 
|        @        |  ((a),(b),(c))  |  among a, b, c. |
 
|                  |                  |                  |
 
o-------------------o-------------------o-------------------o
 
|                  |                  |                  |
 
|                  |                  |  genus a over    |
 
|        b  c      |                  |  species b, c.  |
 
|        o  o      |                  |                  |
 
|      a  |  |      |                  |  partition a    |
 
|      o--o--o      |                  |  among b & c.    |
 
|      \  /      |                  |                  |
 
|        \ /       |                  |  whole pie a:    |
 
|        @        |  ( a ,(b),(c))  |  slices b, c.    |
 
|                   |                  |                  |
 
o-------------------o-------------------o-------------------o
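The translations in Table 13 can be spot-checked by brute force.  The following sketch is my own encoding, assuming only the lobe rule that "(x_1, ..., x_k)" is true iff exactly one argument is false, and that concatenation is conjunction in the existential reading:

```python
# Verify several rows of Table 13 over all truth assignments.
from itertools import product

def lobe(*xs: bool) -> bool:
    # (x_1, ..., x_k): exactly one argument is false.
    return sum(1 for x in xs if not x) == 1

for a, b in product((False, True), repeat=2):
    # (a (b))  ==  a implies b: the outer lobe negates a-and-not-b.
    assert lobe(a and lobe(b)) == (not a or b)
    # (a , b)  ==  a exclusive-or b.
    assert lobe(a, b) == (a != b)
    # ((a , b))  ==  a if and only if b.
    assert lobe(lobe(a, b)) == (a == b)

for a, b, c in product((False, True), repeat=3):
    # ((a)(b)(c))  ==  a or b or c: concatenation inside a negation,
    # since concatenation is conjunction in the existential reading.
    assert lobe(lobe(a) and lobe(b) and lobe(c)) == (a or b or c)
```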
 
  
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Here, there is a happy conformity between the logical content and the rhetorical form, indeed, to such a degree that one hardly notices the difference between them.  The rhetorical form is given by the order of sentences in the two implications and the order of implications in the conjunction.  The logical content is given by the order of propositions in the extended implicational sequence:

{| align="center" cellpadding="8" width="90%"
| <math>X\ \le\ Y\ \le\ Z.</math>
|}

Table 14.  The Entitative Interpretation
o-------------------o-------------------o-------------------o
|   Cactus Graph    | Cactus Expression |    Entitative     |
|                   |                   |  Interpretation   |
 
o-------------------o-------------------o-------------------o
 
|                  |                  |                  |
 
|        @        |        " "       |      untrue.      |
 
|                  |                  |                  |
 
o-------------------o-------------------o-------------------o
 
|                  |                  |                  |
 
|        o        |                  |                  |
 
|        |        |                  |                  |
 
|        @        |        ( )        |      true.      |
 
|                  |                  |                  |
 
o-------------------o-------------------o-------------------o
 
|                  |                  |                  |
 
|        a        |                  |                  |
 
|        @        |        a        |        a.        |
 
|                  |                  |                  |
 
o-------------------o-------------------o-------------------o
 
|                  |                  |                  |
 
|        a        |                  |                  |
 
|        o        |                  |                  |
 
|        |        |                  |                  |
 
|        @        |        (a)        |      not a.      |
 
|                  |                  |                  |
 
o-------------------o-------------------o-------------------o
 
|                  |                  |                  |
 
|       a b c      |                  |                  |
 
|        @        |      a b c      |    a or b or c.  |
 
|                  |                  |                  |
 
o-------------------o-------------------o-------------------o
 
|                  |                  |                  |
 
|      a b c      |                  |                  |
 
|      o o o      |                  |                  |
 
|        \|/        |                  |                  |
 
|        o        |                  |                  |
 
|        |        |                  |                  |
 
|        @        |    ((a)(b)(c))    |  a and b and c.  |
 
|                  |                  |                  |
 
o-------------------o-------------------o-------------------o
 
|                  |                  |                  |
 
|                  |                  |    a implies b.  |
 
|                  |                  |                  |
 
|        o a      |                  |    if a then b.  |
 
|        |        |                  |                  |
 
|        @ b      |      (a) b        |    not a, or b.  |
 
|                  |                  |                  |
 
o-------------------o-------------------o-------------------o
 
|                  |                  |                  |
 
|      a  b      |                  |                  |
 
|      o---o      |                  | a if & only if b. |
 
|        \ /        |                  |                  |
 
|        @        |      (a , b)      | a equates with b. |
 
|                  |                  |                  |
 
o-------------------o-------------------o-------------------o
 
|                  |                  |                  |
 
|      a  b      |                  |                  |
 
|      o---o      |                  |                  |
 
|        \ /        |                  |                  |
 
|        o        |                  | a exclusive-or b. |
 
|        |        |                  |                  |
 
|        @        |    ((a , b))    | a not equal to b. |
 
|                  |                  |                  |
 
o-------------------o-------------------o-------------------o
 
|                  |                  |                  |
 
|      a  b  c      |                  |                  |
 
|      o--o--o      |                  |                  |
 
|      \   /      |                  |                  |
 
|        \ /        |                  | not just one true |
 
|        @        |    (a , b , c)    | out of a, b, c.  |
 
|                  |                  |                  |
 
o-------------------o-------------------o-------------------o
 
|                  |                  |                  |
 
|      a  b  c      |                  |                  |
 
|      o--o--o      |                  |                  |
 
|      \  /      |                  |                  |
 
|        \ /        |                  |                  |
 
|        o        |                  |                  |
 
|        |        |                  |  just one true  |
 
|        @        |  ((a , b , c))  |  among a, b, c.  |
 
|                  |                  |                  |
 
o-------------------o-------------------o-------------------o
 
|                  |                  |                  |
 
|      a            |                  |                  |
 
|      o            |                  |  genus a over    |
 
|      |  b  c      |                  |  species b, c.   |
 
|      o--o--o      |                  |                  |
 
|      \  /      |                  |  partition a    |
 
|        \ /       |                  |  among b & c.    |
 
|        o        |                  |                  |
 
|        |        |                  |  whole pie a:    |
 
|        @        |  ( a ,(b),(c))  |  slices b, c.    |
 
|                   |                  |                  |
 
o-------------------o-------------------o-------------------o
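Dually, the rows of Table 14 can be spot-checked under the entitative reading, where concatenation is disjunction and a lobe "(x_1, ..., x_k)" asserts that not just one of its arguments is true.  Again, the encoding is a sketch of my own, not the author's machinery:

```python
# Verify several rows of Table 14 over all truth assignments.
from itertools import product

def lobe(*xs: bool) -> bool:
    # Entitative reading of (x_1, ..., x_k): NOT exactly one argument true.
    return sum(1 for x in xs if x) != 1

for a, b in product((False, True), repeat=2):
    # (a) b  ==  a implies b   (concatenation is disjunction here)
    assert (lobe(a) or b) == (not a or b)
    # (a , b)  ==  a if and only if b.
    assert lobe(a, b) == (a == b)
    # ((a , b))  ==  a exclusive-or b.
    assert lobe(lobe(a, b)) == (a != b)
```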
 
  
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

To see the difference between form and content, or manner and matter, it is enough to observe a few of the ways that the expression can be varied without changing its meaning, for example:

{| align="center" cellpadding="8" width="90%"
| <math>Z \Leftarrow Y\ \operatorname{and}\ Y \Leftarrow X.</math>
|}

o-----------------o-----------------o-----------------o-----------------o
|      Graph      |     String      |    Entitative   |   Existential   |
o-----------------o-----------------o-----------------o-----------------o
|                |                |                |                |
 
|        @        |      " "       |    untrue.    |      true.      |
 
o-----------------o-----------------o-----------------o-----------------o
 
|                |                |                |                |
 
|        o        |                |                |                |
 
|        |        |                |                |                |
 
|        @        |      ( )      |      true.      |    untrue.    |
 
o-----------------o-----------------o-----------------o-----------------o
 
|                |                |                |                |
 
|        r        |                |                |                |
 
|        @        |        r        |        r.      |        r.      |
 
o-----------------o-----------------o-----------------o-----------------o
 
|                |                |                |                |
 
|        r        |                |                |                |
 
|        o        |                |                |                |
 
|        |        |                |                |                |
 
|        @        |      (r)      |      not r.    |      not r.    |
 
o-----------------o-----------------o-----------------o-----------------o
 
|                |                |                |                |
 
|      r s t      |                |                |                |
 
|        @        |      r s t      |  r or s or t.  |  r and s and t. |
 
o-----------------o-----------------o-----------------o-----------------o
 
|                |                |                |                |
 
|      r s t      |                |                |                |
 
|      o o o      |                |                |                |
 
|      \|/      |                |                |                |
 
|        o        |                |                |                |
 
|        |        |                |                |                |
 
|        @        |  ((r)(s)(t))  |  r and s and t. |  r or s or t.  |
 
o-----------------o-----------------o-----------------o-----------------o
 
|                |                |                |  r implies s.  |
 
|                |                |                |                |
 
|        o r      |                |                |  if r then s.  |
 
|        |        |                |                |                |
 
|        @ s      |      (r) s      |  not r, or s    |  no r sans s.  |
 
o-----------------o-----------------o-----------------o-----------------o
 
|                |                |                |  r implies s.  |
 
|        r  s    |                |                |                |
 
|        o---o    |                |                |  if r then s.  |
 
|        |        |                |                |                |
 
|        @        |    (r (s))    |                |  no r sans s.  |
 
o-----------------o-----------------o-----------------o-----------------o
 
|                |                |                |                |
 
|      r  s      |                |                |                |
 
|      o---o      |                |                |r exclusive-or s.|
 
|      \ /      |                |                |                |
 
|        @        |    (r , s)    |                |r not equal to s.|
 
o-----------------o-----------------o-----------------o-----------------o
 
|                |                |                |                |
 
|      r  s      |                |                |                |
 
|      o---o      |                |                |                |
 
|      \ /      |                |                |                |
 
|        o        |                |                |r if & only if s.|
 
|        |        |                |                |                |
 
|        @        |    ((r , s))    |                |r equates with s.|
 
o-----------------o-----------------o-----------------o-----------------o
 
|                |                |                |                |
 
|    r  s  t    |                |                |                |
 
|    o--o--o    |                |                |                |
 
|      \  /      |                |                |                |
 
|      \ /      |                |                | just one false  |
 
|        @        |  (r , s , t)  |                | out of r, s, t. |
 
o-----------------o-----------------o-----------------o-----------------o
 
|                |                |                |                |
 
|    r  s  t    |                |                |                |
 
|    o  o  o    |                |                |                |
 
|    |  |  |    |                |                |                |
 
|    o--o--o    |                |                |                |
 
|      \  /      |                |                |                |
 
|      \ /      |                |                |  just one true  |
 
|        @        |  ((r),(s),(t))  |                |  among r, s, t. |
 
o-----------------o-----------------o-----------------o-----------------o
 
|                |                |                |  genus t over  |
 
|        r  s    |                |                |  species r, s.  |
 
|        o  o    |                |                |                |
 
|    t  |  |    |                |                |  partition t    |
 
|    o--o--o    |                |                |  among r & s.  |
 
|      \  /      |                |                |                |
 
|      \ /      |                |                |  whole pie t:  |
 
|        @        |  ( t ,(r),(s))  |                |  slices r, s.  |
 
o-----------------o-----------------o-----------------o-----------------o
 
</pre>
 
 
 
===Differential Logic===
 
 
 
<pre>
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
Note 1
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
One of the first things that you can do, once you
 
have a really decent calculus for boolean functions
 
or propositional logic, whatever you want to call it,
 
is to compute the differentials of these functions or
 
propositions.
 
 
 
Now there are many ways to dance around this idea, and I
 
feel like I have tried them all before getting down to
 
acting on it, and there are many issues of interpretation
 
and justification that we will have to clear up after the
 
fact, that is, before we can be sure that it all really
 
makes any sense, but I think this time I'll just jump in
 
and show you the form in which this idea first came to me.
 
 
 
Start with a proposition of the form x & y, which
 
I graph as two labels attached to a root node, so:
 
 
 
o---------------------------------------o
 
|                                      |
 
|                  x y                  |
 
|                  @                  |
 
|                                      |
 
o---------------------------------------o
 
|                x and y                |
 
o---------------------------------------o
 
 
 
Written as a string, this is just the concatenation "x y".
 
 
 
The proposition xy may be taken as a boolean function f(x, y)
 
having the abstract type f : B x B -> B, where B = {0, 1} is
 
read in such a way that 0 means "false" and 1 means "true".
 
 
 
In this style of graphical representation,
 
the value "true" looks like a blank label
 
and the value "false" looks like an edge.
 
 
 
o---------------------------------------o
 
|                                      |
 
|                                      |
 
|                  @                  |
 
|                                      |
 
o---------------------------------------o
 
|                true                  |
 
o---------------------------------------o
 
 
 
o---------------------------------------o
 
|                                      |
 
|                  o                  |
 
|                  |                  |
 
|                  @                  |
 
|                                      |
 
o---------------------------------------o
 
|                false                |
 
o---------------------------------------o
 
 
 
Back to the proposition xy.  Imagine yourself standing
 
in a fixed cell of the corresponding venn diagram, say,
 
the cell where the proposition xy is true, as pictured:
 
 
 
o---------------------------------------o
 
|                                      |
 
|                o    o                |
 
|              / \  / \              |
 
|              /  \ /  \              |
 
|            /    ·    \            |
 
|            /    /%\    \            |
 
|          /    /%%%\    \          |
 
|          /    /%%%%%\    \          |
 
|        /    /%%%%%%%\    \        |
 
|        /    /%%%%%%%%%\    \        |
 
|      o  x  o%%%%%%%%%%%o  y  o      |
 
|        \    \%%%%%%%%%/    /        |
 
|        \    \%%%%%%%/    /        |
 
|          \    \%%%%%/    /          |
 
|          \    \%%%/    /          |
 
|            \    \%/    /            |
 
|            \    ·    /            |
 
|              \  / \  /              |
 
|              \ /  \ /              |
 
|                o    o                |
 
|                                      |
 
o---------------------------------------o
 
 
 
Now ask yourself:  What is the value of the
 
proposition xy at a distance of dx and dy
 
from the cell xy where you are standing?
 
 
 
Don't think about it -- just compute:
 
 
 
o---------------------------------------o
 
|                                      |
 
|              dx o  o dy              |
 
|                / \ / \                |
 
|            x o---@---o y            |
 
|                                      |
 
o---------------------------------------o
 
|        (x + dx) and (y + dy)        |
 
o---------------------------------------o
 
 
 
To make future graphs easier to draw in Ascii land,
 
I will use devices like @=@=@ and o=o=o to identify
 
several nodes into one, as in this next redrawing:
 
 
 
o---------------------------------------o
 
|                                      |
 
|              x  dx y  dy              |
 
|              o---o o---o              |
 
|              \  | |  /              |
 
|                \ | | /                |
 
|                \| |/                |
 
|                  @=@                  |
 
|                                      |
 
o---------------------------------------o
 
|        (x + dx) and (y + dy)        |
 
o---------------------------------------o
 
 
 
However you draw it, these expressions follow because the
 
expression x + dx, where the plus sign indicates (mod 2)
 
addition in B, and thus corresponds to an exclusive-or
 
in logic, parses to a graph of the following form:
 
 
 
o---------------------------------------o
 
|                                      |
 
|                x    dx                |
 
|                o---o                |
 
|                  \ /                  |
 
|                  @                  |
 
|                                      |
 
o---------------------------------------o
 
|                x + dx                |
 
o---------------------------------------o
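Since the identification of mod-2 sums with exclusive-ors
does a lot of work in what follows, here is a minimal
Python check (the code is my illustration, not part of
the original notes):

```python
# Over B = {0, 1}, addition mod 2 coincides with exclusive-or,
# so the cactus graph for x + dx computes x XOR dx.
B = (0, 1)

for x in B:
    for dx in B:
        assert (x + dx) % 2 == x ^ dx, (x, dx)

print("mod-2 addition agrees with XOR over all of B x B")
```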
 
 
 
Next question:  What is the difference between
 
the value of the proposition xy "over there" and
 
the value of the proposition xy where you are, all
 
expressed as a general formula, of course?  Here 'tis:
 
 
 
o---------------------------------------o
 
|                                       |
 
|        x  dx y  dy                    |
 
|        o---o o---o                    |
 
|        \  | |  /                    |
 
|          \ | | /                      |
 
|          \| |/        x y          |
 
|            o=o-----------o            |
 
|            \          /            |
 
|              \        /              |
 
|              \      /              |
 
|                \    /                |
 
|                \  /                |
 
|                  \ /                  |
 
|                  @                  |
 
|                                      |
 
o---------------------------------------o
 
|      ((x + dx) & (y + dy)) - xy      |
 
o---------------------------------------o
 
 
 
Oh, I forgot to mention:  Computed over B,
 
plus and minus are the very same operation.
 
This will make the relationship between the
 
differential and the integral parts of the
 
resulting calculus slightly stranger than
 
usual, but never mind that now.
 
 
 
Last question, for now:  What is the value of this expression
 
from your current standpoint, that is, evaluated at the point
 
where xy is true?  Well, substituting 1 for x and 1 for y in
 
the graph amounts to the same thing as erasing those labels:
 
 
 
o---------------------------------------o
 
|                                      |
 
|          dx    dy                    |
 
|        o---o o---o                    |
 
|        \  | |  /                    |
 
|          \ | | /                      |
 
|          \| |/                      |
 
|            o=o-----------o            |
 
|            \          /            |
 
|              \        /              |
 
|              \      /              |
 
|                \    /                |
 
|                \  /                |
 
|                  \ /                  |
 
|                  @                  |
 
|                                      |
 
o---------------------------------------o
 
|      ((1 + dx) & (1 + dy)) - 1·1      |
 
o---------------------------------------o
 
 
 
And this is equivalent to the following graph:
 
 
 
o---------------------------------------o
 
|                                      |
 
|                dx  dy                |
 
|                o  o                |
 
|                  \ /                  |
 
|                  o                  |
 
|                  |                  |
 
|                  @                  |
 
|                                      |
 
o---------------------------------------o
 
|              dx or dy                |
 
o---------------------------------------o
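The equivalence just asserted can be checked by brute
force over B.  A small Python sketch (my illustration,
with + and - both rendered as XOR, juxtaposition as AND):

```python
# Check that ((1 + dx)(1 + dy)) - 1*1 equals "dx or dy" over B,
# where + and - are mod-2 addition (XOR).
B = (0, 1)

for dx in B:
    for dy in B:
        lhs = ((1 ^ dx) & (1 ^ dy)) ^ 1   # ((1 + dx)(1 + dy)) - 1
        rhs = dx | dy                     # inclusive disjunction
        assert lhs == rhs, (dx, dy)

print("((1 + dx)(1 + dy)) - 1 = dx or dy, for all dx, dy in B")
```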
 
 
 
Have to break here -- will explain later.
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
Note 2
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
We have just met with the fact that
 
the differential of the "and" is
 
the "or" of the differentials.
 
 
 
x and y  --Diff--> dx or dy.
 
 
 
o---------------------------------------o
 
|                                      |
 
|                            dx  dy  |
 
|                              o  o    |
 
|                              \ /    |
 
|                                o      |
 
|      x y                      |      |
 
|      @      --Diff-->        @      |
 
|                                      |
 
o---------------------------------------o
 
|      x y      --Diff-->  ((dx)(dy))  |
 
o---------------------------------------o
 
 
 
It will be necessary to develop a more refined analysis of
 
this statement directly, but that is roughly the nub of it.
 
 
 
If the form of the above statement reminds you of DeMorgan's rule,
 
it is no accident, as differentiation and negation turn out to be
 
closely related operations.  Indeed, one can find discussions of
 
logical difference calculus in the Boole-DeMorgan correspondence
 
and Peirce also made use of differential operators in a logical
 
context, but the exploration of these ideas has been hampered
 
by a number of factors, not the least of which being the
 
lack of a syntax adequate to handle the complexity of the
 
expressions that evolve.
 
 
 
For my part, it was definitely a case of the calculus being smarter
 
than the calculator thereof.  The graphical pictures were catalytic
 
in their power over my thinking process, leading me so quickly past
 
so many obstructions that I did not have time to think about all of
 
the difficulties that would otherwise have inhibited the derivation.
 
It did eventually become necessary to write all this up in a linear
 
script, and to deal with the various problems of interpretation and
 
justification that I could imagine, but that took another 120 pages,
 
and so, if you don't like this intuitive approach, then let that be
 
your sufficient notice.
 
 
 
Let us run through the initial example again, this time attempting
 
to interpret the formulas that develop at each stage along the way.
 
 
 
We begin with a proposition or a boolean function f(x, y) = xy.
 
 
 
o---------------------------------------o
 
|                                      |
 
|                o    o                |
 
|              / \   / \               |
 
|              /  \ /  \              |
 
|            /    ·    \            |
 
|            /    /`\    \            |
 
|          /    /```\    \          |
 
|          /    /`````\    \          |
 
|        /    /```````\    \        |
 
|        /    /`````````\    \        |
 
|      o  x  o`````f`````o  y  o      |
 
|        \    \`````````/    /        |
 
|        \    \```````/    /        |
 
|          \    \`````/    /          |
 
|          \    \```/    /          |
 
|            \    \`/    /            |
 
|            \    ·    /            |
 
|              \  / \  /              |
 
|              \ /  \ /              |
 
|                o    o                |
 
|                                      |
 
o---------------------------------------o
 
|                                      |
 
|                  x y                  |
 
|                  @                  |
 
|                                      |
 
o---------------------------------------o
 
| f =              x y                  |
 
o---------------------------------------o
 
 
 
A function like this has an abstract type and a concrete type.
 
The abstract type is what we invoke when we write things like
 
f : B x B -> B or f : B^2 -> B.  The concrete type takes into
 
account the qualitative dimensions or the "units" of the case,
 
which can be explained as follows.
 
 
 
1.  Let X be the set of values {(x), x} = {not x, x}.
 
 
 
2.  Let Y be the set of values {(y), y} = {not y, y}.
 
 
 
Then interpret the usual propositions about x, y
 
as functions of the concrete type f : X x Y -> B.
 
 
 
We are going to consider various "operators" on these functions.
 
Here, an operator F is a function that takes one function f into
 
another function Ff.
 
 
 
The first couple of operators that we need to consider are logical analogues
 
of those that occur in the classical "finite difference calculus", namely:
 
 
 
1.  The "difference" operator [capital Delta], written here as D.
 
 
 
2.  The "enlargement" operator [capital Epsilon], written here as E.
 
 
 
These days, E is more often called the "shift" operator.
 
 
 
In order to describe the universe in which these operators operate,
 
it will be necessary to enlarge our original universe of discourse.
 
We mount up from the space U = X x Y to its "differential extension",
 
EU = U x dU = X x Y x dX x dY, with dX = {(dx), dx} and dY = {(dy), dy}.
 
The interpretations of these new symbols can be diverse, but the easiest
 
for now is just to say that dx means "change x" and dy means "change y".
 
To draw the differential extension EU of our present universe U = X x Y
 
as a venn diagram, it would take us four logical dimensions X, Y, dX, dY,
 
but we can project a suggestion of what it's about on the universe X x Y
 
by drawing arrows that cross designated borders, labeling the arrows as
 
dx when crossing the border between x and (x) and as dy when crossing
 
the border between y and (y), in either direction, in either case.
 
 
 
o---------------------------------------o
 
|                                      |
 
|                o    o                |
 
|              / \  / \              |
 
|              /  \ /  \             |
 
|            /    ·    \            |
 
|            / dy  /`\  dx \            |
 
|          /  ^ /```\ ^  \          |
 
|          /    \`````/    \          |
 
|        /    /`\```/`\    \        |
 
|        /    /```\`/```\    \        |
 
|      o  x  o`````o`````o  y  o      |
 
|        \    \`````````/    /        |
 
|        \    \```````/    /        |
 
|          \    \`````/    /          |
 
|          \    \```/    /          |
 
|            \    \`/    /            |
 
|            \    ·    /            |
 
|              \  / \  /              |
 
|              \ /  \ /              |
 
|                o    o                |
 
|                                      |
 
o---------------------------------------o
 
 
 
We can form propositions from these differential variables in the same way
 
that we would any other logical variables, for instance, interpreting the
 
proposition (dx (dy)) to say "dx => dy", in other words, however you wish
 
to take it, whether indicatively or injunctively, as saying something to
 
the effect that there is "no change in x without a change in y".
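The reading just given can be spelled out as a truth
function.  A minimal Python sketch (my illustration;
the helper name is hypothetical):

```python
# Read the cactus form (dx (dy)) as the implication dx => dy:
# a form (p (q)) denotes "not (p and not q)".
B = (0, 1)

def implies(p, q):
    return 1 ^ (p & (1 ^ q))    # the form (p (q))

for dx in B:
    for dy in B:
        # False exactly when x changes without y changing.
        assert implies(dx, dy) == (0 if (dx, dy) == (1, 0) else 1)

print("(dx (dy)) fails only for change in x without change in y")
```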
 
 
 
Given the proposition f(x, y) in U = X x Y,
 
the (first order) 'enlargement' of f is the
 
proposition Ef in EU that is defined by the
 
formula Ef(x, y, dx, dy) = f(x + dx, y + dy).
 
 
 
In the example f(x, y) = xy, we obtain:
 
 
 
Ef(x, y, dx, dy)  =  (x + dx)(y + dy).
 
 
 
o---------------------------------------o
 
|                                      |
 
|              x  dx y  dy              |
 
|              o---o o---o              |
 
|              \  | |  /              |
 
|                \ | | /                |
 
|                \| |/                |
 
|                  @=@                  |
 
|                                      |
 
o---------------------------------------o
 
| Ef =      (x, dx) (y, dy)            |
 
o---------------------------------------o
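A hedged Python sketch of the enlargement operator in
this example (the names f and Ef are illustrative, not
a fixed notation of the original notes):

```python
# Enlargement operator E applied to f(x, y) = xy:
# Ef(x, y, dx, dy) = f(x + dx, y + dy), with + taken mod 2 (XOR).
B = (0, 1)

def f(x, y):
    return x & y            # the conjunction xy

def Ef(x, y, dx, dy):
    return f(x ^ dx, y ^ dy)

# Sanity check: with no change (dx = dy = 0), Ef reduces to f.
for x in B:
    for y in B:
        assert Ef(x, y, 0, 0) == f(x, y)

print("Ef(x, y, 0, 0) = f(x, y) for all x, y in B")
```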
 
 
 
Given the proposition f(x, y) in U = X x Y,
 
the (first order) 'difference' of f is the
 
proposition Df in EU that is defined by the
 
formula Df = Ef - f, or, written out in full,
 
Df(x, y, dx, dy) = f(x + dx, y + dy) - f(x, y).
 
 
 
In the example f(x, y) = xy, the result is:
 
 
 
Df(x, y, dx, dy)  =  (x + dx)(y + dy) - xy.
 
 
 
o---------------------------------------o
 
|                                      |
 
|        x  dx y  dy                    |
 
|        o---o o---o                    |
 
|        \  | |  /                    |
 
|          \ | | /                      |
 
|          \| |/        x y          |
 
|            o=o-----------o            |
 
|            \          /            |
 
|              \        /              |
 
|              \      /              |
 
|                \    /                |
 
|                \  /                |
 
|                  \ /                  |
 
|                  @                  |
 
|                                      |
 
o---------------------------------------o
 
| Df =      ((x, dx)(y, dy), xy)      |
 
o---------------------------------------o
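The same kind of sketch for the difference operator,
recalling that over B subtraction and addition are the
same operation, namely XOR (my illustration):

```python
# Difference operator D applied to f(x, y) = xy:
# Df = Ef - f, computed over B, where - is XOR.
B = (0, 1)

def f(x, y):
    return x & y

def Df(x, y, dx, dy):
    return f(x ^ dx, y ^ dy) ^ f(x, y)   # Ef - f

# At the cell xy, i.e. x = y = 1, Df reduces to dx or dy.
for dx in B:
    for dy in B:
        assert Df(1, 1, dx, dy) == (dx | dy), (dx, dy)

print("Df|xy = ((dx)(dy)) = dx or dy")
```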
 
 
 
We did not yet go through the trouble to interpret this (first order)
 
"difference of conjunction" fully, but were happy simply to evaluate
 
it with respect to a single location in the universe of discourse,
 
namely, at the point picked out by the singular proposition xy,
 
in as much as if to say, at the place where x = 1 and y = 1.
 
This evaluation is written in the form Df|xy or Df|<1, 1>,
 
and we arrived at the locally applicable law that states
 
that f = xy = x & y  =>  Df|xy = ((dx)(dy)) = dx or dy.
 
 
 
o---------------------------------------o
 
|                                      |
 
|                dx dy                |
 
|                  ^                  |
 
|                o  |  o                |
 
|              / \ | / \              |
 
|              /  \|/  \              |
 
|            /dy  |  dx\            |
 
|            /(dx) /|\ (dy)\            |
 
|          /  ^ /`|`\ ^  \          |
 
|          /    \``|``/    \          |
 
|        /    /`\`|`/`\    \        |
 
|        /    /```\|/```\    \        |
 
|      o  x  o`````o`````o  y  o      |
 
|        \    \`````````/    /        |
 
|        \    \```````/    /        |
 
|          \    \`````/    /          |
 
|          \    \```/    /          |
 
|            \    \`/    /            |
 
|            \    ·    /            |
 
|              \  / \  /              |
 
|              \ /  \ /              |
 
|                o    o                |
 
|                                      |
 
o---------------------------------------o
 
|                                      |
 
|                dx  dy                |
 
|                o  o                |
 
|                  \ /                  |
 
|                  o                  |
 
|                  |                  |
 
|                  @                  |
 
|                                      |
 
o---------------------------------------o
 
| Df|xy =      ((dx)(dy))              |
 
o---------------------------------------o
 
 
 
The picture illustrates the analysis of the inclusive disjunction ((dx)(dy))
 
into the exclusive disjunction:  dx(dy) + dy(dx) + dx dy, a proposition that
 
may be interpreted to say "change x or change y or both".  And this can be
 
recognized as just what you need to do if you happen to find yourself in
 
the center cell and desire a detailed description of ways to depart it.
 
 
 
Jon Awbrey --
 
 
 
Formerly Of:
 
Center Cell,
 
Chateau Dif.
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
Note 3
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
Last time we computed what will variously be called
 
the "difference map", the "difference proposition",
 
or the "local proposition" Df_p for the proposition
 
f(x, y) = xy at the point p where x = 1 and y = 1.
 
 
 
In the universe U = X x Y, the four propositions
 
xy, x(y), (x)y, (x)(y) that indicate the "cells",
 
or the smallest regions of the venn diagram, are
 
called "singular propositions".  These serve as
 
an alternative notation for naming the points
 
<1, 1>, <1, 0>, <0, 1>, <0, 0>, respectively.
 
 
 
Thus, we can write Df_p = Df|p = Df|<1, 1> = Df|xy,
 
so long as we know the frame of reference in force.
 
 
 
Sticking with the example f(x, y) = xy, let us compute the
 
value of the difference proposition Df at all of the points.
 
 
 
o---------------------------------------o
 
|                                      |
 
|        x  dx y  dy                    |
 
|        o---o o---o                    |
 
|        \  | |  /                    |
 
|          \ | | /                      |
 
|          \| |/        x y          |
 
|            o=o-----------o            |
 
|            \          /            |
 
|              \        /              |
 
|              \      /              |
 
|                \    /                |
 
|                \  /                |
 
|                  \ /                  |
 
|                  @                  |
 
|                                      |
 
o---------------------------------------o
 
| Df =      ((x, dx)(y, dy), xy)        |
 
o---------------------------------------o
 
 
 
o---------------------------------------o
 
|                                      |
 
|          dx    dy                    |
 
|        o---o o---o                    |
 
|        \  | |  /                    |
 
|          \ | | /                      |
 
|          \| |/                      |
 
|            o=o-----------o            |
 
|            \          /            |
 
|              \        /              |
 
|              \      /              |
 
|                \    /                |
 
|                \  /                |
 
|                  \ /                  |
 
|                  @                  |
 
|                                      |
 
o---------------------------------------o
 
| Df|xy =      ((dx)(dy))              |
 
o---------------------------------------o
 
 
 
o---------------------------------------o
 
|                                      |
 
|              o                        |
 
|          dx |  dy                    |
 
|        o---o o---o                    |
 
|        \  | |  /                    |
 
|          \ | | /        o            |
 
|          \| |/          |            |
 
|            o=o-----------o            |
 
|            \          /            |
 
|              \        /              |
 
|              \      /              |
 
|                \    /                |
 
|                \  /                |
 
|                  \ /                  |
 
|                  @                  |
 
|                                      |
 
o---------------------------------------o
 
| Df|x(y) =      (dx) dy                |
 
o---------------------------------------o
 
 
 
o---------------------------------------o
 
|                                      |
 
|        o                              |
 
|        |  dx    dy                    |
 
|        o---o o---o                    |
 
|        \  | |  /                    |
 
|          \ | | /        o            |
 
|          \| |/          |            |
 
|            o=o-----------o            |
 
|            \          /            |
 
|              \        /              |
 
|              \      /              |
 
|                \    /                |
 
|                \  /                |
 
|                  \ /                  |
 
|                  @                  |
 
|                                      |
 
o---------------------------------------o
 
| Df|(x)y =      dx (dy)                |
 
o---------------------------------------o
 
 
 
o---------------------------------------o
 
|                                      |
 
|        o    o                        |
 
|        |  dx |  dy                    |
 
|        o---o o---o                    |
 
|        \  | |  /                    |
 
|          \ | | /      o  o          |
 
|          \| |/        \ /          |
 
|            o=o-----------o            |
 
|            \          /            |
 
|              \        /              |
 
|              \      /              |
 
|                \    /                |
 
|                \  /                |
 
|                  \ /                  |
 
|                  @                  |
 
|                                      |
 
o---------------------------------------o
 
| Df|(x)(y) =    dx dy                |
 
o---------------------------------------o
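All four local evaluations can be confirmed at once with
a short Python sketch (my illustration; the lambdas
encode the cactus forms read off above):

```python
# Evaluate Df at each of the four cells of U = X x Y
# and compare with the local propositions shown above.
B = (0, 1)

def f(x, y):
    return x & y

def Df(x, y, dx, dy):
    return f(x ^ dx, y ^ dy) ^ f(x, y)

expected = {
    (1, 1): lambda dx, dy: dx | dy,           # ((dx)(dy))
    (1, 0): lambda dx, dy: (1 ^ dx) & dy,     # (dx) dy
    (0, 1): lambda dx, dy: dx & (1 ^ dy),     # dx (dy)
    (0, 0): lambda dx, dy: dx & dy,           # dx dy
}

for (x, y), local in expected.items():
    for dx in B:
        for dy in B:
            assert Df(x, y, dx, dy) == local(dx, dy), (x, y, dx, dy)

print("Df matches the four local propositions Df|p")
```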
 
 
 
The easy way to visualize the values of these graphical
 
expressions is just to notice the following equivalents:
 
 
 
o---------------------------------------o
 
|                                      |
 
|  x                                    |
 
|  o-o-o-...-o-o-o                      |
 
|  \          /                      |
 
|    \        /                        |
 
|    \      /                        |
 
|      \    /                x        |
 
|      \  /                o        |
 
|        \ /                  |        |
 
|        @        =        @        |
 
|                                      |
 
o---------------------------------------o
 
|  (x, , ... , , )  =        (x)        |
 
o---------------------------------------o
 
 
 
o---------------------------------------o
 
|                                      |
 
|                o                      |
 
| x_1 x_2  x_k  |                      |
 
|  o---o-...-o---o                      |
 
|  \          /                      |
 
|    \        /                        |
 
|    \      /                        |
 
|      \    /                          |
 
|      \  /                          |
 
|        \ /            x_1 ... x_k    |
 
|        @        =        @        |
 
|                                      |
 
o---------------------------------------o
 
| (x_1, ..., x_k, ()) = x_1 · ... · x_k |
 
o---------------------------------------o
 
 
 
Laying out the arrows on the augmented venn diagram,
 
one gets a picture of a "differential vector field".
 
 
 
o---------------------------------------o
 
|                                      |
 
|                dx dy                |
 
|                  ^                  |
 
|                o  |  o                |
 
|              / \ | / \              |
 
|              /  \|/  \              |
 
|            /dy  |  dx\            |
 
|            /(dx) /|\ (dy)\            |
 
|          /  ^ /`|`\ ^  \          |
 
|          /    \``|``/    \          |
 
|        /    /`\`|`/`\    \        |
 
|        /    /```\|/```\    \        |
 
|      o  x  o`````o`````o  y  o      |
 
|        \    \`````````/    /        |
 
|        \  o---->```<----o  /        |
 
|          \  dy \``^``/ dx  /          |
 
|          \(dx) \`|`/ (dy)/          |
 
|            \    \|/    /            |
 
|            \    |    /            |
 
|              \  /|\  /              |
 
|              \ / | \ /              |
 
|                o  |  o                |
 
|                  |                  |
 
|                dx | dy                |
 
|                  o                  |
 
|                                      |
 
o---------------------------------------o
 
 
 
This really just constitutes a depiction of
 
the interpretations in EU = X x Y x dX x dY
 
that satisfy the difference proposition Df,
 
namely, these:
 
 
 
1.  x  y  dx  dy
 
2.  x  y  dx (dy)
 
3.  x  y (dx) dy
 
4.  x (y)(dx) dy
 
5.  (x) y  dx (dy)
 
6.  (x)(y) dx  dy
 
 
 
By inspection, it is fairly easy to understand Df
 
as telling you what you have to do from each point
 
of U in order to change the value borne by f(x, y).
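The six interpretations listed above can be recovered
mechanically by enumerating the models of Df (a Python
sketch, my illustration):

```python
# Enumerate the interpretations in EU = X x Y x dX x dY
# that satisfy Df for f(x, y) = xy.
from itertools import product

def f(x, y):
    return x & y

def Df(x, y, dx, dy):
    return f(x ^ dx, y ^ dy) ^ f(x, y)

# Each model is a tuple (x, y, dx, dy) with Df = 1.
models = [p for p in product((0, 1), repeat=4) if Df(*p)]

for m in models:
    print(m)
print(len(models), "models")
```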
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
Note 4
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
We have been studying the action of the difference operator D,
 
also known as the "localization operator", on the proposition
 
f : X x Y -> B that is commonly known as the conjunction x·y.
 
We described Df as a (first order) differential proposition,
 
that is, a proposition of the type Df : X x Y x dX x dY -> B.
 
Abstracting from the augmented venn diagram that illustrates
 
how the "models", or the "satisfying interpretations", of Df
 
distribute within the extended universe EU = X x Y x dX x dY,
 
we can depict Df in the form of a "digraph" or directed graph,
 
one whose points are labeled with the elements of  U =  X x Y
 
and whose arrows are labeled with the elements of dU = dX x dY.
 
 
 
o---------------------------------------o
 
|                                      |
 
|                x · y                |
 
|                                      |
 
|                  o                  |
 
|                  ^^^                  |
 
|                / | \                |
 
|      (dx)· dy  /  |  \  dx ·(dy)      |
 
|              /  |  \              |
 
|              /    |    \              |
 
|            v    |    v            |
 
|  x ·(y)  o      |      o  (x)· y  |
 
|                  |                  |
 
|                  |                  |
 
|                dx · dy                |
 
|                  |                  |
 
|                  |                  |
 
|                  v                  |
 
|                  o                  |
 
|                                      |
 
|                (x)·(y)                |
 
|                                      |
 
o---------------------------------------o
 
|                                      |
 
|  f    =    x  y                      |
 
|                                      |
 
| Df    =    x  y  · ((dx)(dy))        |
 
|                                      |
 
|      +    x (y) ·  (dx) dy          |
 
|                                      |
 
|      +    (x) y  ·  dx (dy)        |
 
|                                      |
 
|      +    (x)(y) ·  dx  dy          |
 
|                                      |
 
o---------------------------------------o
 
 
 
Any proposition worth its salt, as they say,
 
has many equivalent ways to look at it, any
 
of which may reveal some unsuspected aspect
 
of its meaning.  We will encounter more and
 
more of these alternative readings as we go.
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
Note 5
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
The enlargement operator E, also known as the "shift operator",
 
has many interesting and very useful properties in its own right,
 
so let us not fail to observe a few of the more salient features
 
that play out on the surface of our simple example, f(x, y) = xy.
 
 
 
Introduce a suitably generic definition of the extended universe of discourse:
 
 
 
Let U = X_1 x ... x X_k and EU = U x dU = X_1 x ... x X_k x dX_1 x ... x dX_k.
 
 
 
For a proposition f : X_1 x ... x X_k -> B,
 
the (first order) 'enlargement' of f is the
 
proposition Ef : EU -> B that is defined by:
 
 
 
Ef(x_1, ..., x_k, dx_1, ..., dx_k)  =  f(x_1 + dx_1, ..., x_k + dx_k).
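The definition lends itself to a direct computational sketch. The helper name `enlarge` below is my own illustrative device, not the text's notation; it builds Ef from any k-variable proposition by flipping each argument whose differential feature is set.

```python
# Generic (first order) enlargement:  Ef(x_1, ..., x_k, dx_1, ..., dx_k)
# = f(x_1 + dx_1, ..., x_k + dx_k), with + taken mod 2 on B = {0, 1}.
def enlarge(f, k):
    def Ef(*args):
        xs, dxs = args[:k], args[k:]
        return f(*(x ^ dx for x, dx in zip(xs, dxs)))
    return Ef

conj = lambda x, y: x & y      # the running example f(x, y) = xy
E_conj = enlarge(conj, 2)
```

For instance, at the cell x = y = 1 the enlargement is 1 exactly when dx = dy = 0, and at x = y = 0 it is 1 exactly when dx = dy = 1.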
 
 
 
It should be noted that the so-called "differential variables" dx_j
 
are really just the same kind of boolean variables as the other x_j.
 
It is conventional to give the additional variables these brands of
 
inflected names, but whatever extra connotations we might choose to
 
attach to these syntactic conveniences are wholly external to their
 
purely algebraic meanings.
 
 
 
For the example f(x, y) = xy, we obtain:
 
 
 
Ef(x, y, dx, dy)  =  (x + dx)(y + dy).
 
 
 
Given that this expression uses nothing more than the "boolean ring"
 
operations of addition (+) and multiplication (·), it is permissible
 
to "multiply things out" in the usual manner to arrive at the result:
 
 
 
Ef(x, y, dx, dy)  =  x·y  +  x·dy  +  y·dx  +  dx·dy.
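The multiplied-out form can be confirmed pointwise over all sixteen interpretations, again reading + as exclusive or. This is a verification sketch, not part of the original derivation.

```python
from itertools import product

# Ef(x, y, dx, dy) = (x + dx)(y + dy), with + taken mod 2.
def Ef(x, y, dx, dy):
    return (x ^ dx) & (y ^ dy)

# The expanded boolean ring form:  x·y + x·dy + y·dx + dx·dy.
def Ef_expanded(x, y, dx, dy):
    return (x & y) ^ (x & dy) ^ (y & dx) ^ (dx & dy)

# The two forms agree at every point of EU.
agree = all(Ef(*p) == Ef_expanded(*p) for p in product((0, 1), repeat=4))
```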
 
 
 
To understand what this means in logical terms, for instance, as expressed
 
in a boolean expansion or a "disjunctive normal form" (DNF), it is perhaps
 
a little better to go back and analyze the expression the same way that we
 
did for Df.  Thus, let us compute the value of the enlarged proposition Ef
 
at each of the points in the universe of discourse U = X x Y.
 
 
 
o---------------------------------------o
 
|                                      |
 
|              x  dx y  dy              |
 
|              o---o o---o              |
 
|              \  | |  /              |
 
|                \ | | /                |
 
|                \| |/                |
 
|                  @=@                  |
 
|                                      |
 
o---------------------------------------o
 
| Ef =      (x, dx)·(y, dy)            |
 
o---------------------------------------o
 
 
 
o---------------------------------------o
 
|                                      |
 
|                dx    dy              |
 
|              o---o o---o              |
 
|              \  | |  /              |
 
|                \ | | /               |
 
|                \| |/                |
 
|                  @=@                  |
 
|                                      |
 
o---------------------------------------o
 
| Ef|xy =      (dx)·(dy)              |
 
o---------------------------------------o
 
 
 
o---------------------------------------o
 
|                                      |
 
|                    o                  |
 
|                dx |  dy              |
 
|              o---o o---o              |
 
|              \  | |  /              |
 
|                \ | | /                |
 
|                \| |/                |
 
|                  @=@                  |
 
|                                      |
 
o---------------------------------------o
 
| Ef|x(y) =    (dx)· dy                |
 
o---------------------------------------o
 
 
 
o---------------------------------------o
 
|                                      |
 
|              o                        |
 
|              |  dx    dy              |
 
|              o---o o---o              |
 
|              \  | |  /              |
 
|                \ | | /                |
 
|                \| |/                |
 
|                  @=@                  |
 
|                                      |
 
o---------------------------------------o
 
| Ef|(x)y =      dx ·(dy)              |
 
o---------------------------------------o
 
 
 
o---------------------------------------o
 
|                                      |
 
|              o    o                  |
 
|              |  dx |  dy              |
 
|              o---o o---o              |
 
|              \  | |  /              |
 
|                \ | | /                |
 
|                \| |/                |
 
|                  @=@                  |
 
|                                      |
 
o---------------------------------------o
 
| Ef|(x)(y) =    dx · dy                |
 
o---------------------------------------o
 
 
 
Given the sort of data that arises from this form of analysis,
 
we can now fold the disjoined ingredients back into a boolean
 
expansion or a DNF that is equivalent to the proposition Ef.
 
 
 
Ef  =  xy · Ef_xy  +  x(y) · Ef_x(y)  +  (x)y · Ef_(x)y  +  (x)(y) · Ef_(x)(y).
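The four cell-by-cell restrictions folded into this DNF can be checked mechanically. The NOT helper below stands in for the parenthesized negation of the cactus forms; it is an illustrative rendering, not the text's own notation.

```python
from itertools import product

# Ef(x, y, dx, dy) = f(x + dx, y + dy) for f(x, y) = xy, sums mod 2.
def Ef(x, y, dx, dy):
    return (x ^ dx) & (y ^ dy)

def NOT(v):
    return 1 ^ v

# Tabulated restrictions:  Ef|xy = (dx)(dy),   Ef|x(y) = (dx) dy,
#                          Ef|(x)y = dx (dy),  Ef|(x)(y) = dx dy.
cell_forms = {
    (1, 1): lambda dx, dy: NOT(dx) & NOT(dy),
    (1, 0): lambda dx, dy: NOT(dx) & dy,
    (0, 1): lambda dx, dy: dx & NOT(dy),
    (0, 0): lambda dx, dy: dx & dy,
}
ok = all(Ef(x, y, dx, dy) == form(dx, dy)
         for (x, y), form in cell_forms.items()
         for dx, dy in product((0, 1), repeat=2))
```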
 
 
 
Here is a summary of the result, illustrated by means of a digraph picture,
 
where the "no change" element (dx)(dy) is drawn as a loop at the point x·y.
 
 
 
o---------------------------------------o
 
|                                      |
 
|                x · y                |
 
|              (dx)·(dy)              |
 
|                -->--                |
 
|                \  /                |
 
|                 \ /                  |
 
|                  o                  |
 
|                  ^^^                  |
 
|                / | \                |
 
|                /  |  \                |
 
|    (dx)· dy  /  |  \  dx ·(dy)    |
 
|              /    |    \              |
 
|            /    |    \            |
 
|  x ·(y)  o      |      o  (x)· y  |
 
|                  |                  |
 
|                  |                  |
 
|                dx · dy                |
 
|                  |                  |
 
|                  |                  |
 
|                  o                  |
 
|                                      |
 
|                (x)·(y)                |
 
|                                      |
 
o---------------------------------------o
 
|                                      |
 
|  f    =    x  y                      |
 
|                                      |
 
| Ef    =    x  y  · (dx)(dy)          |
 
|                                      |
 
|      +    x (y) · (dx) dy          |
 
|                                      |
 
|      +    (x) y  ·  dx (dy)          |
 
|                                      |
 
|      +    (x)(y) ·  dx  dy          |
 
|                                      |
 
o---------------------------------------o
 
 
 
We may understand the enlarged proposition Ef
 
as telling us all the different ways to reach
 
a model of f from any point of the universe U.
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
Note 6
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
To broaden our experience with simple examples, let us now contemplate the
 
sixteen functions of concrete type X x Y -> B and abstract type B x B -> B.
 
For future reference, I will set here a few tables that detail the actions
 
of E and D on each of these functions, allowing us to view the results
 
in several different ways.
 
 
 
By way of initial orientation, Table 0 lists equivalent expressions for the
 
sixteen functions in a number of different languages for zeroth order logic.
 
 
 
 
 
Table 0.  Propositional Forms On Two Variables
 
o---------o---------o---------o----------o------------------o----------o
 
| L_1    | L_2    | L_3    | L_4      | L_5              | L_6      |
 
|        |        |        |          |                  |          |
 
| Decimal | Binary  | Vector  | Cactus  | English          | Vulgate  |
 
o---------o---------o---------o----------o------------------o----------o
 
|        |      x = 1 1 0 0 |          |                  |          |
 
|        |      y = 1 0 1 0 |          |                  |          |
 
o---------o---------o---------o----------o------------------o----------o
 
|        |        |        |          |                  |          |
 
| f_0    | f_0000  | 0 0 0 0 |    ()    | false            |    0    |
 
|        |        |        |          |                  |          |
 
| f_1    | f_0001  | 0 0 0 1 |  (x)(y)  | neither x nor y  | ~x & ~y  |
 
|        |        |        |          |                  |          |
 
| f_2    | f_0010  | 0 0 1 0 |  (x) y  | y and not x      | ~x &  y  |
 
|        |        |        |          |                  |          |
 
| f_3    | f_0011  | 0 0 1 1 |  (x)    | not x            | ~x      |
 
|        |        |        |          |                  |          |
 
| f_4    | f_0100  | 0 1 0 0 |  x (y)  | x and not y      |  x & ~y  |
 
|        |        |        |          |                  |          |
 
| f_5    | f_0101  | 0 1 0 1 |    (y)  | not y            |      ~y  |
 
|        |        |        |          |                  |          |
 
| f_6    | f_0110  | 0 1 1 0 |  (x, y)  | x not equal to y |  x +  y  |
 
|        |        |        |          |                  |          |
 
| f_7    | f_0111  | 0 1 1 1 |  (x  y)  | not both x and y | ~x v ~y  |
 
|        |        |        |          |                  |          |
 
| f_8    | f_1000  | 1 0 0 0 |  x  y  | x and y          |  x &  y  |
 
|        |        |        |          |                  |          |
 
| f_9    | f_1001  | 1 0 0 1 | ((x, y)) | x equal to y    |  x =  y  |
 
|        |        |        |          |                  |          |
 
| f_10    | f_1010  | 1 0 1 0 |      y  | y                |      y  |
 
|        |        |        |          |                  |          |
 
| f_11    | f_1011  | 1 0 1 1 |  (x (y)) | not x without y  |  x => y  |
 
|        |        |        |          |                  |          |
 
| f_12    | f_1100  | 1 1 0 0 |  x      | x                |  x      |
 
|        |        |        |          |                  |          |
 
| f_13    | f_1101  | 1 1 0 1 | ((x) y)  | not y without x  |  x <= y  |
 
|        |        |        |          |                  |          |
 
| f_14    | f_1110  | 1 1 1 0 | ((x)(y)) | x or y          |  x v  y  |
 
|        |        |        |          |                  |          |
 
| f_15    | f_1111  | 1 1 1 1 |  (())  | true            |    1    |
 
|        |        |        |          |                  |          |
 
o---------o---------o---------o----------o------------------o----------o
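The indexing scheme of Table 0 can be captured in a few lines: the truth vector of f_i is the 4-bit binary expansion of i, read against the columns x = 1 1 0 0 and y = 1 0 1 0. The constructor F below is an illustrative device of my own, not part of the table.

```python
# f_i(x, y) is the bit of i at position 2x + y, matching the column order
# (x, y) = (1,1), (1,0), (0,1), (0,0) used by the binary subscripts.
def F(i):
    return lambda x, y: (i >> (2 * x + y)) & 1

def vec(g):
    return [g(x, y) for x, y in ((1, 1), (1, 0), (0, 1), (0, 0))]

# Spot checks against the Vector and Vulgate columns.
xor_ok = vec(F(6)) == [0, 1, 1, 0]                                   # x +  y
and_ok = all(F(8)(x, y) == (x & y) for x in (0, 1) for y in (0, 1))  # x &  y
imp_ok = all(F(11)(x, y) == ((1 ^ x) | y)
             for x in (0, 1) for y in (0, 1))                        # x => y
```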
 
 
 
 
 
The next four Tables expand the expressions of Ef and Df
 
in two different ways, for each of the sixteen functions.
 
Notice that the functions are given in a different order,
 
here being collected into a set of seven natural classes.
 
 
 
 
 
Table 1.  Ef Expanded Over Ordinary Features {x, y}
 
o------o------------o------------o------------o------------o------------o
 
|      |            |            |            |            |            |
 
|      |    f      |  Ef | xy  | Ef | x(y)  | Ef | (x)y  | Ef | (x)(y)|
 
|      |            |            |            |            |            |
 
o------o------------o------------o------------o------------o------------o
 
|      |            |            |            |            |            |
 
| f_0  |    ()    |    ()    |    ()    |    ()    |    ()    |
 
|      |            |            |            |            |            |
 
o------o------------o------------o------------o------------o------------o
 
|      |            |            |            |            |            |
 
| f_1  |  (x)(y)  |  dx  dy  |  dx (dy)  |  (dx) dy  |  (dx)(dy)  |
 
|      |            |            |            |            |            |
 
| f_2  |  (x) y    |  dx (dy)  |  dx  dy  |  (dx)(dy)  |  (dx) dy  |
 
|      |            |            |            |            |            |
 
| f_4  |    x (y)  |  (dx) dy  |  (dx)(dy)  |  dx  dy  |  dx (dy)  |
 
|      |            |            |            |            |            |
 
| f_8  |    x  y    |  (dx)(dy)  |  (dx) dy  |  dx (dy)  |  dx  dy  |
 
|      |            |            |            |            |            |
 
o------o------------o------------o------------o------------o------------o
 
|      |            |            |            |            |            |
 
| f_3  |  (x)      |  dx      |  dx      |  (dx)      |  (dx)      |
 
|      |            |            |            |            |            |
 
| f_12 |    x      |  (dx)      |  (dx)      |  dx      |  dx      |
 
|      |            |            |            |            |            |
 
o------o------------o------------o------------o------------o------------o
 
|      |            |            |            |            |            |
 
| f_6  |  (x, y)  |  (dx, dy)  | ((dx, dy)) | ((dx, dy)) |  (dx, dy)  |
 
|      |            |            |            |            |            |
 
| f_9  |  ((x, y))  | ((dx, dy)) |  (dx, dy)  |  (dx, dy)  | ((dx, dy)) |
 
|      |            |            |            |            |            |
 
o------o------------o------------o------------o------------o------------o
 
|      |            |            |            |            |            |
 
| f_5  |      (y)  |      dy  |      (dy)  |      dy  |      (dy)  |
 
|      |            |            |            |            |            |
 
| f_10 |      y    |      (dy)  |      dy  |      (dy)  |      dy  |
 
|      |            |            |            |            |            |
 
o------o------------o------------o------------o------------o------------o
 
|      |            |            |            |            |            |
 
| f_7  |  (x  y)  | ((dx)(dy)) | ((dx) dy)  |  (dx (dy)) |  (dx  dy)  |
 
|      |            |            |            |            |            |
 
| f_11 |  (x (y))  | ((dx) dy)  | ((dx)(dy)) |  (dx  dy)  |  (dx (dy)) |
 
|      |            |            |            |            |            |
 
| f_13 |  ((x) y)  |  (dx (dy)) |  (dx  dy)  | ((dx)(dy)) | ((dx) dy)  |
 
|      |            |            |            |            |            |
 
| f_14 |  ((x)(y))  |  (dx  dy)  |  (dx (dy)) | ((dx) dy)  | ((dx)(dy)) |
 
|      |            |            |            |            |            |
 
o------o------------o------------o------------o------------o------------o
 
|      |            |            |            |            |            |
 
| f_15 |    (())    |    (())    |    (())    |    (())    |    (())    |
 
|      |            |            |            |            |            |
 
o------o------------o------------o------------o------------o------------o
 
 
 
 
 
Table 2.  Df Expanded Over Ordinary Features {x, y}
 
o------o------------o------------o------------o------------o------------o
 
|      |            |            |            |            |            |
 
|      |    f      |  Df | xy  | Df | x(y)  | Df | (x)y  | Df | (x)(y)|
 
|      |            |            |            |            |            |
 
o------o------------o------------o------------o------------o------------o
 
|      |            |            |            |            |            |
 
| f_0  |    ()    |    ()    |    ()    |    ()    |    ()    |
 
|      |            |            |            |            |            |
 
o------o------------o------------o------------o------------o------------o
 
|      |            |            |            |            |            |
 
| f_1  |  (x)(y)  |  dx  dy  |  dx (dy)  |  (dx) dy  | ((dx)(dy)) |
 
|      |            |            |            |            |            |
 
| f_2  |  (x) y    |  dx (dy)  |  dx  dy  | ((dx)(dy)) |  (dx) dy  |
 
|      |            |            |            |            |            |
 
| f_4  |    x (y)  |  (dx) dy  | ((dx)(dy)) |  dx  dy  |  dx (dy)  |
 
|      |            |            |            |            |            |
 
| f_8  |    x  y    | ((dx)(dy)) |  (dx) dy  |  dx (dy)  |  dx  dy  |
 
|      |            |            |            |            |            |
 
o------o------------o------------o------------o------------o------------o
 
|      |            |            |            |            |            |
 
| f_3  |  (x)      |  dx      |  dx      |  dx      |  dx      |
 
|      |            |            |            |            |            |
 
| f_12 |    x      |  dx      |  dx      |  dx      |  dx      |
 
|      |            |            |            |            |            |
 
o------o------------o------------o------------o------------o------------o
 
|      |            |            |            |            |            |
 
| f_6  |  (x, y)  |  (dx, dy)  |  (dx, dy)  |  (dx, dy)  |  (dx, dy)  |
 
|      |            |            |            |            |            |
 
| f_9  |  ((x, y))  |  (dx, dy)  |  (dx, dy)  |  (dx, dy)  |  (dx, dy)  |
 
|      |            |            |            |            |            |
 
o------o------------o------------o------------o------------o------------o
 
|      |            |            |            |            |            |
 
| f_5  |      (y)  |      dy  |      dy  |      dy  |      dy  |
 
|      |            |            |            |            |            |
 
| f_10 |      y    |      dy  |      dy  |      dy  |      dy  |
 
|      |            |            |            |            |            |
 
o------o------------o------------o------------o------------o------------o
 
|      |            |            |            |            |            |
 
| f_7  |  (x  y)  | ((dx)(dy)) |  (dx) dy  |  dx (dy)  |  dx  dy  |
 
|      |            |            |            |            |            |
 
| f_11 |  (x (y))  |  (dx) dy  | ((dx)(dy)) |  dx  dy  |  dx (dy)  |
 
|      |            |            |            |            |            |
 
| f_13 |  ((x) y)  |  dx (dy)  |  dx  dy  | ((dx)(dy)) |  (dx) dy  |
 
|      |            |            |            |            |            |
 
| f_14 |  ((x)(y))  |  dx  dy  |  dx (dy)  |  (dx) dy  | ((dx)(dy)) |
 
|      |            |            |            |            |            |
 
o------o------------o------------o------------o------------o------------o
 
|      |            |            |            |            |            |
 
| f_15 |    (())    |    ()    |    ()    |    ()    |    ()    |
 
|      |            |            |            |            |            |
 
o------o------------o------------o------------o------------o------------o
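Rows of Table 2 can be spot-checked in the same computational style. The check below verifies the f_8 row at the cells xy and (x)(y), and the constancy of the f_6 row across all four cells; F and Df are illustrative helpers, not the text's notation.

```python
from itertools import product

# f_i(x, y) is the bit of i at position 2x + y, as in Table 0.
def F(i):
    return lambda x, y: (i >> (2 * x + y)) & 1

# Df g (x, y, dx, dy) = g(x + dx, y + dy) + g(x, y), sums mod 2.
def Df(g):
    return lambda x, y, dx, dy: g(x ^ dx, y ^ dy) ^ g(x, y)

D8 = Df(F(8))   # f_8 = x y
# Row f_8:  Df|xy = ((dx)(dy)), i.e. dx or dy;  Df|(x)(y) = dx dy.
row8_ok = (all(D8(1, 1, dx, dy) == (dx | dy)
               for dx, dy in product((0, 1), repeat=2))
           and all(D8(0, 0, dx, dy) == (dx & dy)
                   for dx, dy in product((0, 1), repeat=2)))

D6 = Df(F(6))   # f_6 = (x, y)
# Row f_6:  the difference is (dx, dy) uniformly over every cell.
row6_ok = all(D6(x, y, dx, dy) == (dx ^ dy)
              for x, y, dx, dy in product((0, 1), repeat=4))
```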
 
  
Any style of declarative programming, also called ''logic programming'', depends on a capacity, as embodied in a programming language or other formal system, to describe the relation between problems and solutions in logical terms.  A recurring problem in building this capacity is in bridging the gap between ostensibly non-logical orders and the logical orders that are used to describe and to represent them.  For instance, to mention just a couple of the most pressing cases, and the ones that are currently proving to be the most resistant to a complete analysis, one has the orders of dynamic evolution and rhetorical transition that manifest themselves in the process of inquiry and in the communication of its results.
  
Table 3.  Ef Expanded Over Differential Features {dx, dy}
This patch of the ongoing discussion is concerned with describing a particular variety of formal languages, whose typical representative is the painted cactus language <math>\mathfrak{L} = \mathfrak{C} (\mathfrak{P}).\!</math> It is the intention of this work to interpret this language for propositional logic, and thus to use it as a sentential calculus, an order of reasoning that forms an active ingredient and a significant component of all logical reasoning. To describe this language, the standard devices of formal grammars and formal language theory are more than adequate, but this only raises the next question: What sorts of devices are exactly adequate, and fit the task to a "T"? The ultimate desire is to turn the tables on the order of description, and so begins a process of eversion that evolves to the point of asking: To what extent can the language capture the essential features and laws of its own grammar and describe the active principles of its own generation? In other words: How well can the language be described by using the language itself to do so?
o------o------------o------------o------------o------------o------------o
 
|      |            |            |            |            |            |
 
|      |    f      |  T_11 f  |  T_10 f  |  T_01 f  |  T_00 f  |
 
|      |            |            |            |            |            |
 
|      |            | Ef| dx·dy  | Ef| dx(dy) | Ef| (dx)dy | Ef|(dx)(dy)|
 
|      |            |            |            |            |            |
 
o------o------------o------------o------------o------------o------------o
 
|      |            |            |            |            |            |
 
| f_0  |    ()    |    ()    |    ()    |    ()    |    ()    |
 
|      |            |            |            |            |            |
 
o------o------------o------------o------------o------------o------------o
 
|      |            |            |            |            |            |
 
| f_1  |  (x)(y)  |    x  y    |    x (y)  |  (x) y    |  (x)(y)  |
 
|      |            |            |            |            |            |
 
| f_2  |  (x) y    |    x (y)  |    x  y    |  (x)(y)  |  (x) y    |
 
|      |            |            |            |            |            |
 
| f_4  |    x (y)  |  (x) y    |  (x)(y)  |    x  y    |    x (y)  |
 
|      |            |            |            |            |            |
 
| f_8  |    x y    |  (x)(y)  |  (x) y    |    x (y)  |    x  y    |
 
|      |            |            |            |            |            |
 
o------o------------o------------o------------o------------o------------o
 
|      |            |            |            |            |            |
 
| f_3  |  (x)      |    x      |    x      |  (x)      |  (x)      |
 
|      |            |            |            |            |            |
 
| f_12 |    x      |  (x)      |  (x)      |    x      |    x      |
 
|      |            |            |            |            |            |
 
o------o------------o------------o------------o------------o------------o
 
|      |            |            |            |            |            |
 
| f_6  |  (x, y)  |  (x, y)  |  ((x, y))  | ((x, y))  |  (x, y)  |
 
|      |            |            |            |            |            |
 
| f_9  | ((x, y))  | ((x, y))  |  (x, y)  |  (x, y)  |  ((x, y))  |
 
|      |            |            |            |            |            |
 
o------o------------o------------o------------o------------o------------o
 
|      |            |            |            |            |            |
 
| f_5  |      (y)  |      y    |      (y)  |      y    |      (y)  |
 
|      |            |            |            |            |            |
 
| f_10 |      y    |      (y)  |      y    |      (y)  |      y    |
 
|      |            |            |            |            |            |
 
o------o------------o------------o------------o------------o------------o
 
|      |            |            |            |            |            |
 
| f_7  |  (x  y)  |  ((x)(y))  |  ((x) y)  |  (x (y))  |  (x  y)  |
 
|      |            |            |            |            |            |
 
| f_11 |  (x (y))  |  ((x) y)  |  ((x)(y))  |  (x  y)  |  (x (y))  |
 
|      |            |            |            |            |            |
 
| f_13 |  ((x) y)  |  (x (y))  |  (x  y)  |  ((x)(y))  |  ((x) y)  |
 
|      |            |            |            |            |            |
 
| f_14 |  ((x)(y))  |  (x  y)  |  (x (y)) | ((x) y)  | ((x)(y))  |
 
|      |            |            |            |            |            |
 
o------o------------o------------o------------o------------o------------o
 
|      |            |            |            |            |            |
 
| f_15 |    (())    |    (())    |    (())    |    (())    |    (())    |
 
|      |            |            |            |            |            |
 
o------o------------o------------o------------o------------o------------o
 
|                  |            |            |            |            |
 
| Fixed Point Total |      4    |      4    |      4    |    16    |
 
|                  |            |            |            |            |
 
o-------------------o------------o------------o------------o------------o
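The Fixed Point Total row of Table 3 can be recomputed directly: T_ab f is the function taking (x, y) to f(x + a, y + b), and a function is a fixed point of T_ab exactly when this shift leaves it unchanged. The helpers below are illustrative, not the text's notation.

```python
from itertools import product

# f_i(x, y) is the bit of i at position 2x + y, as in Table 0.
def F(i):
    return lambda x, y: (i >> (2 * x + y)) & 1

def vec(g):
    return tuple(g(x, y) for x, y in ((1, 1), (1, 0), (0, 1), (0, 0)))

# T_ab f (x, y)  =  Ef(x, y, a, b)  =  f(x + a, y + b), sums mod 2.
def T(a, b, g):
    return lambda x, y: g(x ^ a, y ^ b)

# Count, for each (a, b), how many of the sixteen functions T_ab fixes.
fixed_totals = {(a, b): sum(vec(T(a, b, F(i))) == vec(F(i)) for i in range(16))
                for a, b in product((0, 1), repeat=2)}
```

T_00 is the identity, so it fixes all sixteen functions, while each of the other three tacking operators fixes four, in agreement with the table.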
 
  
In order to speak to these questions, I have to express what a grammar says about a language in terms of what a language can say on its own.  In effect, it is necessary to analyze the kinds of meaningful statements that grammars are capable of making about languages in general and to relate them to the kinds of meaningful statements that the syntactic ''sentences'' of the cactus language might be interpreted as making about the very same topics.  So far in the present discussion, the sentences of the cactus language do not make any meaningful statements at all, much less any meaningful statements about languages and their constitutions.  As of yet, these sentences subsist in the form of purely abstract, formal, and uninterpreted combinatorial constructions.
  
Table 4.  Df Expanded Over Differential Features {dx, dy}
Before the capacity of a language to describe itself can be evaluated, the missing link to meaning has to be supplied for each of its strings.  This calls for a dimension of semantics and a notion of interpretation, topics that are taken up for the case of the cactus language <math>\mathfrak{C} (\mathfrak{P})</math> in Subsection 1.3.10.12.  Once a plausible semantics is prescribed for this language, it will be possible to return to these questions and to address them in a meaningful way.
o------o------------o------------o------------o------------o------------o
 
|      |            |            |            |            |            |
 
|      |    f      | Df| dx·dy  | Df| dx(dy) | Df| (dx)dy | Df|(dx)(dy)|
 
|      |            |            |            |            |            |
 
o------o------------o------------o------------o------------o------------o
 
|      |            |            |            |            |            |
 
| f_0 |    ()    |    ()    |    ()    |    ()    |    ()    |
 
|      |            |            |            |            |            |
 
o------o------------o------------o------------o------------o------------o
 
|      |            |            |            |            |            |
 
| f_1  |  (x)(y)  |  ((x, y))  |    (y)    |    (x)    |    ()    |
 
|      |            |            |            |            |            |
 
| f_2  |  (x) y    |  (x, y)  |    y      |    (x)    |    ()    |
 
|      |            |            |            |            |            |
 
| f_4  |    x (y)  |  (x, y)  |    (y)    |    x      |    ()    |
 
|      |            |            |            |            |            |
 
| f_8  |    x  y    |  ((x, y))  |    y      |    x      |    ()    |
 
|      |            |            |            |            |            |
 
o------o------------o------------o------------o------------o------------o
 
|      |            |            |            |            |            |
 
| f_3  |  (x)      |    (())    |    (())    |    ()    |    ()    |
 
|      |            |            |            |            |            |
 
| f_12 |    x      |    (())    |    (())    |    ()    |    ()    |
 
|      |            |            |            |            |            |
 
o------o------------o------------o------------o------------o------------o
 
|      |            |            |            |            |            |
 
| f_6  |  (x, y)  |    ()    |    (())    |    (())    |    ()    |
 
|      |            |            |            |            |            |
 
| f_9  |  ((x, y))  |    ()    |    (())    |    (())    |    ()    |
 
|      |            |            |            |            |            |
 
o------o------------o------------o------------o------------o------------o
 
|      |            |            |            |            |            |
 
| f_5  |      (y)  |    (())    |    ()    |    (())    |    ()    |
 
|      |            |            |            |            |            |
 
| f_10 |      y    |    (())    |    ()    |    (())    |    ()    |
 
|      |            |            |            |            |            |
 
o------o------------o------------o------------o------------o------------o
 
|      |            |            |            |            |            |
 
| f_7  |  (x  y)  |  ((x, y))  |    y      |    x      |    ()    |
 
|      |            |            |            |            |            |
 
| f_11 |  (x (y))  |  (x, y)  |    (y)    |    x      |    ()    |
 
|      |            |            |            |            |            |
 
| f_13 |  ((x) y)  |  (x, y)  |    y      |    (x)    |    ()    |
 
|      |            |            |            |            |            |
 
| f_14 |  ((x)(y))  |  ((x, y))  |    (y)    |    (x)    |    ()    |
 
|      |            |            |            |            |            |
 
o------o------------o------------o------------o------------o------------o
 
|      |            |            |            |            |            |
 
| f_15 |    (())    |    ()    |    ()    |    ()    |    ()    |
 
|      |            |            |            |            |            |
 
o------o------------o------------o------------o------------o------------o
 
  
The prominent issue at this point is the distinct placements of formal languages and formal grammars with respect to the question of meaning.  The sentences of a formal language are merely the abstract strings of abstract signs that happen to belong to a certain set.  They do not by themselves make any meaningful statements at all, not without mounting a separate effort of interpretation, but the rules of a formal grammar make meaningful statements about a formal language, to the extent that they say what strings belong to it and what strings do not.  Thus, the formal grammar, a formalism that appears to be even more skeletal than the formal language, still has bits and pieces of meaning attached to it.  In a sense, the question of meaning is factored into two parts, structure and value, leaving the aspect of value reduced in complexity and subtlety to the simple question of belonging.  Whether this single bit of meaningful value is enough to encompass all of the dimensions of meaning that we require, and whether it can be compounded to cover the complexity that actually exists in the realm of meaning &mdash; these are questions for an extended future inquiry.
  
If the medium truly is the message,
+
Perhaps I ought to comment on the differences between the present and the standard definition of a formal grammar, since I am attempting to strike a compromise with several alternative conventions of usage, and thus to leave certain options open for future exploration.  All of the changes are minor, in the sense that they are not intended to alter the classes of languages that are able to be generated, but only to clear up various ambiguities and sundry obscurities that affect their conception.
the blank slate is the innate idea.
 
  
Primarily, the conventional scope of non-terminal symbols was expanded to encompass the sentence symbol, mainly on account of all the contexts where the initial and the intermediate symbols are naturally invoked in the same breath.  By way of compensating for the usual exclusion of the sentence symbol from the non-terminal class, an equivalent distinction was introduced in the fashion of a distinction between the initial and the intermediate symbols, and this serves its purpose in all of those contexts where the two kinds of symbols need to be treated separately.

At the present point, I remain a bit worried about the motivations and the justifications for introducing this distinction, under any name, in the first place.  It is purportedly designed to guarantee that the process of derivation at least gets started in a definite direction, while the real questions have to do with how it all ends.  The excuses of efficiency and expediency that I offered as plausible and sufficient reasons for distinguishing between empty and significant sentences are likely to be ephemeral, if not entirely illusory, since intermediate symbols are still permitted to characterize or to cover themselves, not to mention being allowed to cover the empty string, and so the very types of traps that one exerts oneself to avoid at the outset are always there to afflict the process at all of the intervening times.

If one reflects on the form of grammar that is being prescribed here, it looks as if one sought, rather futilely, to avoid the problems of recursion by proscribing the main program from calling itself, while allowing any subprogram to do so.  But any trouble that is avoidable in the part is also avoidable in the main, while any trouble that is inevitable in the part is also inevitable in the main.  Consequently, I am reserving the right to change my mind at a later stage, perhaps to permit the initial symbol to characterize, to cover, to regenerate, or to produce itself, if that turns out to be the best way in the end.

Before I leave this Subsection, I need to say a few things about the manner in which the abstract theory of formal languages and the pragmatic theory of sign relations interact with each other.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Note 7

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
  
If you think that I linger in the realm of logical difference calculus out of sheer vacillation about getting down to the differential proper, it is probably out of a prior expectation that you derive from the art or the long-engrained practice of real analysis.  But the fact is that ordinary calculus only rushes on to the sundry orders of approximation because the strain of comprehending the full import of E and D at once overwhelms its discrete and finite powers to grasp them.  But here, in the fully serene idylls of ZOL, we find ourselves fit with the compass of a wit that is all we'd ever wish to explore their effects with care.

So let us do just that.

Formal language theory can seem like an awfully picky subject at times, treating every symbol as a thing in itself the way it does, sorting out the nominal types of symbols as objects in themselves, and singling out the passing tokens of symbols as distinct entities in their own right.  It has to continue doing this, if for no better reason than to aid in clarifying the kinds of languages that people are accustomed to use, to assist in writing computer programs that are capable of parsing real sentences, and to serve in designing programming languages that people would like to become accustomed to use.  As a matter of fact, the only time that formal language theory becomes too picky, or a bit too myopic in its focus, is when it leads one to think that one is dealing with the thing itself and not just with the sign of it, in other words, when the people who use the tools of formal language theory forget that they are dealing with the mere signs of more interesting objects and not with the objects of ultimate interest in and of themselves.

As a result, there are a number of deleterious effects that can arise from the extreme pickiness of formal language theory, arising, as is often the case, when formal theorists forget the practical context of theorization.  It frequently happens that the exacting task of defining the membership of a formal language leads one to think that this object and this object alone is the justifiable end of the whole exercise.  The distractions of this mediate objective render one liable to forget that one's penultimate interest lies always with various kinds of equivalence classes of signs, not entirely or exclusively with their more meticulous representatives.

When this happens, one typically goes on working oblivious to the fact that many details about what transpires in the meantime do not matter at all in the end, and one is likely to remain in blissful ignorance of the circumstance that many special details of language membership are bound, destined, and pre-determined to be glossed over with some measure of indifference, especially when it comes down to the final constitution of those equivalence classes of signs that are able to answer for the genuine objects of the whole enterprise of language.  When any form of theory, against its initial and its best intentions, leads to this kind of absence of mind that is no longer beneficial in all of its main effects, the situation calls for an antidotal form of theory, one that can restore the presence of mind that all forms of theory are meant to augment.

The pragmatic theory of sign relations is called for in settings where everything that can be named has many other names, that is to say, in the usual case.  Of course, one would like to replace this superfluous multiplicity of signs with an organized system of canonical signs, one for each object that needs to be denoted, but reducing the redundancy too far, beyond what is necessary to eliminate the factor of "noise" in the language, that is, to clear up its effectively useless distractions, can destroy the very utility of a typical language, which is intended to provide a ready means to express a present situation, clear or not, and to describe an ongoing condition of experience in just the way that it seems to present itself.  Within this fleshed-out framework of language, moreover, the process of transforming the manifestations of a sign from its ordinary appearance to its canonical aspect is the whole problem of computation in a nutshell.

I will first rationalize the novel grouping of propositional forms in the last set of Tables, as that will extend a gentle invitation to the mathematical subject of "group theory", and demonstrate its relevance to differential logic in a strikingly apt and useful way.  The data for that account is contained in Table 3.

Table 3.  Ef Expanded Over Differential Features {dx, dy}
o------o------------o------------o------------o------------o------------o
|      |            |            |            |            |            |
|      |    f      |  T_11 f  |  T_10 f  |  T_01 f  |  T_00 f  |
|      |            |            |            |            |            |
|      |            | Ef| dx·dy  | Ef| dx(dy) | Ef| (dx)dy | Ef|(dx)(dy)|
|      |            |            |            |            |            |
o------o------------o------------o------------o------------o------------o
|      |            |            |            |            |            |
| f_0  |    ()    |    ()    |    ()    |    ()    |    ()    |
|      |            |            |            |            |            |
o------o------------o------------o------------o------------o------------o
|      |            |            |            |            |            |
| f_1  |  (x)(y)  |    x  y    |    x (y)  |  (x) y    |  (x)(y)  |
|      |            |            |            |            |            |
| f_2  |  (x) y    |    x (y)  |    x  y    |  (x)(y)  |  (x) y    |
|      |            |            |            |            |            |
| f_4  |    x (y)  |  (x) y    |  (x)(y)  |    x  y    |    x (y)  |
|      |            |            |            |            |            |
| f_8  |    x  y    |  (x)(y)  |  (x) y    |    x (y)  |    x  y    |
|      |            |            |            |            |            |
o------o------------o------------o------------o------------o------------o
|      |            |            |            |            |            |
| f_3  |  (x)      |    x      |    x      |  (x)      |  (x)      |
|      |            |            |            |            |            |
| f_12 |    x      |  (x)      |  (x)      |    x      |    x      |
|      |            |            |            |            |            |
o------o------------o------------o------------o------------o------------o
|      |            |            |            |            |            |
| f_6  |  (x, y)  |  (x, y)  |  ((x, y))  |  ((x, y))  |  (x, y)  |
|      |            |            |            |            |            |
| f_9  |  ((x, y))  |  ((x, y))  |  (x, y)  |  (x, y)  | ((x, y))  |
|      |            |            |            |            |            |
o------o------------o------------o------------o------------o------------o
|      |            |            |            |            |            |
| f_5  |      (y)  |      y    |      (y)  |      y    |      (y)  |
|      |            |            |            |            |            |
| f_10 |      y    |      (y)  |      y    |      (y)  |      y    |
|      |            |            |            |            |            |
o------o------------o------------o------------o------------o------------o
|      |            |            |            |            |            |
| f_7  |  (x  y)  |  ((x)(y))  |  ((x) y)  |  (x (y))  |  (x  y)  |
|      |            |            |            |            |            |
| f_11 |  (x (y))  |  ((x) y)  |  ((x)(y))  |  (x  y)  |  (x (y))  |
|      |            |            |            |            |            |
| f_13 |  ((x) y)  |  (x (y))  |  (x  y)  |  ((x)(y))  |  ((x) y)  |
|      |            |            |            |            |            |
| f_14 |  ((x)(y))  |  (x  y)  |  (x (y))  |  ((x) y)  |  ((x)(y))  |
|      |            |            |            |            |            |
o------o------------o------------o------------o------------o------------o
|      |            |            |            |            |            |
| f_15 |    (())    |    (())    |    (())    |    (())    |    (())    |
|      |            |            |            |            |            |
o------o------------o------------o------------o------------o------------o
|                  |            |            |            |            |
| Fixed Point Total |      4    |      4    |      4    |    16    |
|                  |            |            |            |            |
o-------------------o------------o------------o------------o------------o
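The rows of Table 3 can be checked mechanically.  The following Python sketch (an illustration of mine, not part of the original account) encodes a proposition as a function on B x B and applies the substitution (T_ij f)(x, y) = f(x + i, y + j), with "+" read as addition mod 2:

```python
# Sketch (not from the text): spot-check two rows of Table 3.
# T_ij acts by substitution, with exclusive-or as addition on B = {0, 1}.

B = (0, 1)

def T(i, j, f):
    """Return the transformed proposition T_ij f."""
    return lambda x, y: f(x ^ i, y ^ j)

def truth_vector(f):
    """Tabulate f over the four cells of the universe of discourse."""
    return tuple(f(x, y) for x in B for y in B)

# f_1 = (x)(y), "neither x nor y"; Table 3 lists T_11 f_1 as x y.
f_1 = lambda x, y: int(not x and not y)
g   = lambda x, y: int(x and y)
assert truth_vector(T(1, 1, f_1)) == truth_vector(g)

# f_6 = (x, y), exclusive disjunction; Table 3 lists T_10 f_6 as ((x, y)).
f_6 = lambda x, y: x ^ y
f_9 = lambda x, y: 1 - (x ^ y)
assert truth_vector(T(1, 0, f_6)) == truth_vector(f_9)
```

The same comparison of truth vectors extends to any row of the Table.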
 
  
It is a well-known truth, but an often forgotten fact, that nobody computes with numbers, but solely with numerals in respect of numbers, and numerals themselves are symbols.  Among other things, this renders all discussion of numeric versus symbolic computation a bit beside the point, since it is only a question of what kinds of symbols are best for one's immediate application or for one's selection of ongoing objectives.  The numerals that everybody knows best are just the canonical symbols, the standard signs or the normal terms for numbers, and the process of computation is a matter of getting from the arbitrarily obscure signs that the data of a situation are capable of throwing one's way to the indications of its character that are clear enough to motivate action.

Having broached the distinction between propositions and sentences, one can see its similarity to the distinction between numbers and numerals.  What are the implications of the foregoing considerations for reasoning about propositions and for the realm of reckonings in sentential logic?  If the purpose of a sentence is just to denote a proposition, then the proposition is just the object of whatever sign is taken for a sentence.  This means that the computational manifestation of a piece of reasoning about propositions amounts to a process that takes place entirely within a language of sentences, a procedure that can rationalize its account by referring to the denominations of these sentences among propositions.

The application of these considerations in the immediate setting is this.  Do not worry too much about what roles the empty string <math>\varepsilon \, = \, ^{\backprime\backprime\prime\prime}</math> and the blank symbol <math>m_1 \, = \, ^{\backprime\backprime} \operatorname{~} ^{\prime\prime}</math> are supposed to play in a given species of formal languages.  As it happens, it is far less important to wonder whether these types of formal tokens actually constitute genuine sentences than it is to decide what equivalence classes it makes sense to form over all of the sentences in the resulting language, and only then to bother about what equivalence classes these limiting cases of sentences are most conveniently taken to represent.

These concerns about boundary conditions betray a more general issue.  Already by this point in the discussion the limits of the purely syntactic approach to a language are beginning to be visible.  It is not that one cannot go a whole lot further by this road in the analysis of a particular language and in the study of languages in general, but when it comes to the questions of understanding the purpose of a language, of extending its usage in a chosen direction, or of designing a language for a particular set of uses, what matters above all else are the ''pragmatic equivalence classes'' of signs that are demanded by the application and intended by the designer, and not so much the peculiar characters of the signs that represent these classes of practical meaning.

Any description of a language is bound to have alternative descriptions.  More precisely, a circumscribed description of a formal language, as any effectively finite description is bound to be, is certain to suggest the equally likely existence and the possible utility of other descriptions.  A single formal grammar describes but a single formal language, but any formal language is described by many different formal grammars, not all of which afford the same grasp of its structure, provide an equivalent comprehension of its character, or yield an interchangeable view of its aspects.  Consequently, even with respect to the same formal language, different formal grammars are typically better for different purposes.

With the distinctions that evolve among the different styles of grammar, and with the preferences that different observers display toward them, there naturally comes the question:  What is the root of this evolution?

One dimension of variation in the styles of formal grammars can be seen by treating the union of languages, and especially the disjoint union of languages, as a ''sum'', by treating the concatenation of languages as a ''product'', and then by distinguishing the styles of analysis that favor ''sums of products'' from those that favor ''products of sums'' as their canonical forms of description.  If one examines the relation between languages and grammars carefully enough to see the presence and the influence of these different styles, and when one comes to appreciate the ways that different styles of grammars can be used with different degrees of success for different purposes, then one begins to see the possibility that alternative styles of description can be based on altogether different linguistic and logical operations.

It is possible to trace this divergence of styles to an even more primitive division, one that distinguishes the ''additive'' or the ''parallel'' styles from the ''multiplicative'' or the ''serial'' styles.  The issue is somewhat confused by the fact that an ''additive'' analysis is typically expressed in the form of a ''series'', in other words, a disjoint union of sets or a linear sum of their independent effects.  But it is easy enough to sort this out if one observes the more telling connection between ''parallel'' and ''independent''.  Another way to keep the right associations straight is to employ the term ''sequential'' in preference to the more misleading term ''serial''.  Whatever one calls this broad division of styles, the scope and sweep of their dimensions of variation can be delineated in the following way:

# The ''additive'' or ''parallel'' styles favor ''sums of products'' <math>(\textstyle\sum\prod)</math> as canonical forms of expression, pulling sums, unions, co-products, and logical disjunctions to the outermost layers of analysis and synthesis, while pushing products, intersections, concatenations, and logical conjunctions to the innermost levels of articulation and generation.  In propositional logic, this style leads to the ''disjunctive normal form'' (DNF).
# The ''multiplicative'' or ''serial'' styles favor ''products of sums'' <math>(\textstyle\prod\sum)</math> as canonical forms of expression, pulling products, intersections, concatenations, and logical conjunctions to the outermost layers of analysis and synthesis, while pushing sums, unions, co-products, and logical disjunctions to the innermost levels of articulation and generation.  In propositional logic, this style leads to the ''conjunctive normal form'' (CNF).

The shift operator E can be understood as enacting a substitution operation on the proposition that is given as its argument.  In our immediate example, we have the following data and definition:

E : (U -> B) -> (EU -> B),

E f(x, y)  ->   Ef(x, y, dx, dy),

Ef(x, y, dx, dy)  =  f(x + dx, y + dy).

Therefore, if we evaluate Ef at particular values of dx and dy, for example, dx = i and dy = j, where i, j are in B, we obtain:

E_ij : (U -> B)  -> (U -> B),

E_ij :    f      ->  E_ij f,

E_ij f  =  Ef | <dx = i, dy = j>  =  f(x + i, y + j).

The notation is a little bit awkward, but the data of the Table should make the sense clear.  The important thing to observe is that E_ij has the effect of transforming each proposition f : U -> B into some other proposition f' : U -> B.  As it happens, the action is one-to-one and onto for each E_ij, so the gang of four operators {E_ij : i, j in B} is an example of what is called a "transformation group" on the set of sixteen propositions.  Bowing to a longstanding local and linear tradition, I will therefore redub the four elements of this group as T_00, T_01, T_10, T_11, to bear in mind their transformative character, or nature, as the case may be.  Abstractly viewed, this group of order four has the following operation table:
 
  
o----------o----------o----------o----------o----------o
|          %          |          |          |          |
|    ·    %  T_00  |  T_01  |  T_10  |  T_11  |
|          %          |          |          |          |
o==========o==========o==========o==========o==========o
|          %          |          |          |          |
|  T_00  %  T_00  |  T_01  |  T_10  |  T_11  |
|          %          |          |          |          |
o----------o----------o----------o----------o----------o
|          %          |          |          |          |
|  T_01  %  T_01  |  T_00  |  T_11  |  T_10  |
|          %          |          |          |          |
o----------o----------o----------o----------o----------o
|          %          |          |          |          |
|  T_10  %  T_10  |  T_11  |  T_00  |  T_01  |
|          %          |          |          |          |
o----------o----------o----------o----------o----------o
|          %          |          |          |          |
|  T_11  %  T_11  |  T_10  |  T_01  |  T_00  |
|          %          |          |          |          |
o----------o----------o----------o----------o----------o

There is a curious sort of diagnostic clue that often serves to reveal the dominance of one mode or the other within an individual thinker's cognitive style.  Examined on the question of what constitutes the ''natural numbers'', an ''additive'' thinker tends to start the sequence at 0, while a ''multiplicative'' thinker tends to regard it as beginning at 1.
 
  
It happens that there are just two possible groups of 4 elements.  One is the cyclic group Z_4 (German "Zyklus"), which this is not.  The other is Klein's four-group V_4 (German "Vier"), which it is.

More concretely viewed, the group as a whole pushes the set of sixteen propositions around in such a way that they fall into seven natural classes, called "orbits".  One says that the orbits are preserved by the action of the group.  There is an "Orbit Lemma" of immense utility to "those who count" which, depending on your upbringing, you may associate with the names of Burnside, Cauchy, Frobenius, or some subset or superset of these three, vouching that the number of orbits is equal to the mean number of fixed points, in other words, the total number of points (in our case, propositions) that are left unmoved by the separate operations, divided by the order of the group.  In this instance, T_00 operates as the group identity, fixing all 16 propositions, while the other three group elements fix 4 propositions each, and so we get:  Number of orbits  =  (4 + 4 + 4 + 16) / 4  =  7.  Amazing!

In any style of description, grammar, or theory of a language, it is usually possible to tease out the influence of these contrasting traits, namely, the ''additive'' attitude versus the ''multiplicative'' tendency that go to make up the particular style in question, and even to determine the dominant inclination or point of view that establishes its perspective on the target domain.

In each style of formal grammar, the ''multiplicative'' aspect is present in the sequential concatenation of signs, both in the augmented strings and in the terminal strings.  In settings where the non-terminal symbols classify types of strings, the concatenation of the non-terminal symbols signifies the cartesian product over the corresponding sets of strings.
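The orbit count can be verified by brute force.  The following Python sketch (my own illustration, with the sixteen propositions encoded as truth tables) confirms that each T_ij is an involution, as befits Klein's four-group, and that the Cauchy-Frobenius count comes out to 7:

```python
from itertools import product

B = (0, 1)
cells = [(x, y) for x in B for y in B]

# All 16 propositions f : B x B -> B, encoded by their truth tables.
props = [dict(zip(cells, bits)) for bits in product(B, repeat=4)]

def T(i, j, f):
    """Action of T_ij: (T_ij f)(x, y) = f(x ^ i, y ^ j)."""
    return {(x, y): f[(x ^ i, y ^ j)] for (x, y) in cells}

group = [(0, 0), (0, 1), (1, 0), (1, 1)]

# Each T_ij is its own inverse, so the four operators form V_4, not Z_4.
for (i, j) in group:
    assert all(T(i, j, T(i, j, f)) == f for f in props)

# Orbit Lemma: number of orbits = mean number of fixed points.
fixed = [sum(1 for f in props if T(i, j, f) == f) for (i, j) in group]
assert sorted(fixed) == [4, 4, 4, 16]
print("orbits:", sum(fixed) // len(group))
```

Running the sketch prints `orbits: 7`, matching the count of seven natural classes.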
 
  
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Note 8

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

In the context-free style of formal grammar, the ''additive'' aspect is easy enough to spot.  It is signaled by the parallel covering of many augmented strings or sentential forms by the same non-terminal symbol.  Expressed in active terms, this calls for the independent rewriting of that non-terminal symbol by a number of different successors, as in the following scheme:

{| align="center" cellpadding="8" width="90%"
 
 
 
We have been contemplating functions of the type f : U -> B, studying the action of the operators E and D on this family.  These functions, that we may identify for our present aims with propositions, inasmuch as they capture their abstract forms, are logical analogues of "scalar potential fields".  These are the sorts of fields that are so picturesquely presented in elementary calculus and physics textbooks by images of snow-covered hills and parties of skiers who trek down their slopes like least action heroes.  The analogous scene in propositional logic presents us with forms more reminiscent of plateaunic idylls, being all plains at one of two levels, the mesas of verity and falsity, as it were, with nary a niche to inhabit between them, restricting our options for a sporting gradient of downhill dynamics to just one of two, standing still on level ground or falling off a bluff.
 
 
 
We are still working well within the logical analogue of the classical finite difference calculus, taking in the novelties that the logical transmutation of familiar elements is able to bring to light.  Soon we will take up several different notions of approximation relationships that may be seen to organize the space of propositions, and these will allow us to define several different forms of differential analysis applying to propositions.  In time we will find reason to consider more general types of maps, having concrete types of the form X_1 x ... x X_k -> Y_1 x ... x Y_n and abstract types B^k -> B^n.  We will think of these mappings as transforming universes of discourse into themselves or into others, in short, as "transformations of discourse".
 
 
 
Before we continue with this itinerary, however, I would like to highlight another sort of "differential aspect" that concerns the "boundary operator" or the "marked connective" that serves as one of the two basic connectives in the cactus language for ZOL.
 
 
 
For example, consider the proposition f of concrete type f : X x Y x Z -> B and abstract type f : B^3 -> B that is written "(x, y, z)" in cactus syntax.  Taken as an assertion in what Peirce called the "existential interpretation", (x, y, z) says that just one of x, y, z is false.  It is useful to consider this assertion in relation to the conjunction xyz of the features that are engaged as its arguments.  A Venn diagram of (x, y, z) looks like this:
 
 
 
o-----------------------------------------------------------o
| U                                                        |
|                                                          |
|                      o-------------o                      |
|                    /              \                    |
|                    /                \                    |
|                  /                  \                  |
|                  /                    \                  |
|                /                      \                |
|                o            x            o                |
|                |                        |                |
|                |                        |                |
|                |                        |                |
|                |                        |                |
|                |                        |                |
|            o--o----------o  o----------o--o            |
|            /    \%%%%%%%%%%\ /%%%%%%%%%%/    \            |
|          /      \%%%%%%%%%%o%%%%%%%%%%/      \          |
|          /        \%%%%%%%%/ \%%%%%%%%/        \          |
|        /          \%%%%%%/  \%%%%%%/          \        |
|        /            \%%%%/    \%%%%/            \        |
|      o              o--o-------o--o              o      |
|      |                |%%%%%%%|                |      |
|      |                |%%%%%%%|                |      |
|      |                |%%%%%%%|                |      |
|      |                |%%%%%%%|                |      |
|      |                |%%%%%%%|                |      |
|      o        y        o%%%%%%%o        z        o      |
|        \                \%%%%%/                /        |
|        \                \%%%/                /        |
|          \                \%/                /          |
|          \                o                /          |
|            \              / \              /            |
|            o-------------o  o-------------o            |
|                                                          |
|                                                          |
o-----------------------------------------------------------o
 
 
 
In relation to the center cell indicated by the conjunction xyz, the region indicated by (x, y, z) is comprised of the "adjacent" or the "bordering" cells.  Thus they are the cells that are just across the boundary of the center cell, as if reached by way of Leibniz's "minimal changes" from the point of origin, here, xyz.

The same sort of boundary relationship holds for any cell of origin that one might elect to indicate, say, by means of the conjunction of positive or negative basis features u_1 · ... · u_k, with u_j = x_j or u_j = (x_j), for j = 1 to k.  The proposition (u_1, ..., u_k) indicates the disjunctive region consisting of the cells that are just next door to u_1 · ... · u_k.
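This boundary relationship is easy to check by machine.  The following Python sketch (an illustration, not part of the original text) confirms that the cells satisfying (x, y, z) are exactly the cells at Hamming distance 1 from the cell xyz, that is, one minimal change away from the point of origin:

```python
from itertools import product

def minimal_negation(*args):
    """The cactus form (u_1, ..., u_k): true when exactly one argument is 0."""
    return sum(1 - u for u in args) == 1

center = (1, 1, 1)  # the cell picked out by the conjunction x y z

# Cells of the universe where (x, y, z) holds.
satisfying = [c for c in product((0, 1), repeat=3) if minimal_negation(*c)]

def hamming(a, b):
    """Count the coordinates in which two cells differ."""
    return sum(x != y for x, y in zip(a, b))

# The satisfying cells are precisely the bordering cells of x y z.
assert sorted(satisfying) == sorted(
    c for c in product((0, 1), repeat=3) if hamming(c, center) == 1
)
```

The same check works for any center cell u_1 · ... · u_k, with the coordinates of the center flipped accordingly.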
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
Note 9
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
| Consider what effects that might conceivably have
| practical bearings you conceive the objects of your
| conception to have.  Then, your conception of those
| effects is the whole of your conception of the object.
|
| Charles Sanders Peirce, "The Maxim of Pragmatism", CP 5.438.
<math>\begin{matrix}
q & :> & W_1 \\
\\
\cdots & \cdots & \cdots \\
\\
q & :> & W_k \\
\end{matrix}</math>
|}

One other subject that it would be opportune to mention at this point, while we have an object example of a mathematical group fresh in mind, is the relationship between the pragmatic maxim and what are commonly known in mathematics as "representation principles".  As it turns out, with regard to its formal characteristics, the pragmatic maxim unites the aspects of a representation principle with the attributes of what would ordinarily be known as a "closure principle".  We will consider the form of closure that is invoked by the pragmatic maxim on another occasion, focusing here and now on the topic of group representations.
 
 
 
Let us return to the example of the so-called "four-group" V_4.  We encountered this group in one of its concrete representations, namely, as a "transformation group" that acts on a set of objects, in this particular case a set of sixteen functions or propositions.  Forgetting about the set of objects that the group transforms among themselves, we may take the abstract view of the group's operational structure, say, in the form of the group operation table copied here:
 
 
 
o---------o---------o---------o---------o---------o
|        %        |        |        |        |
|    ·    %    e    |    f    |    g    |    h    |
|        %        |        |        |        |
o=========o=========o=========o=========o=========o
|        %        |        |        |        |
|    e    %    e    |    f    |    g    |    h    |
|        %        |        |        |        |
o---------o---------o---------o---------o---------o
|        %        |        |        |        |
|    f    %    f    |    e    |    h    |    g    |
|        %        |        |        |        |
o---------o---------o---------o---------o---------o
|        %        |        |        |        |
|    g    %    g    |    h    |    e    |    f    |
|        %        |        |        |        |
o---------o---------o---------o---------o---------o
|        %        |        |        |        |
|    h    %    h    |    g    |    f    |    e    |
|        %        |        |        |        |
o---------o---------o---------o---------o---------o
 
 
 
This table is abstractly the same as, or isomorphic to, the versions with the E_ij operators and the T_ij transformations that we discussed earlier.  That is to say, the story is the same -- only the names have been changed.  An abstract group can have a multitude of significantly and superficially different representations.  Even after we have long forgotten the details of the particular representation that we may have come in with, there are species of concrete representations, called the "regular representations", that are always readily available, as they can be generated from the mere data of the abstract operation table itself.
 
 
 
For example, select a group element from the top margin of the Table,
 
and "consider its effects" on each of the group elements as they are
 
listed along the left margin.  We may record these effects as Peirce
 
usually did, as a logical "aggregate" of elementary dyadic relatives,
 
that is to say, a disjunction or a logical sum whose terms represent
 
the ordered pairs of <input : output> transactions that are produced
 
by each group element in turn.  This yields what is usually known as
 
one of the "regular representations" of the group, specifically, the
 
"first", the "post-", or the "right" regular representation.  It has
 
long been conventional to organize the terms in the form of a matrix:
 
  
Reading "+" as a logical disjunction:
+
It is useful to examine the relationship between the grammatical covering or production relation <math>(:>\!)</math> and the logical relation of implication <math>(\Rightarrow),</math> with one eye to what they have in common and one eye to how they differ.  The production <math>q :> W\!</math> says that the appearance of the symbol <math>q\!</math> in a sentential form implies the possibility of exchanging it for <math>W.\!</math>  Although this sounds like a ''possible implication'', to the extent that ''<math>q\!</math> implies a possible <math>W\!</math>'' or that ''<math>q\!</math> possibly implies <math>W,\!</math>'' the qualifiers ''possible'' and ''possibly'' are the critical elements in these statements, and they are crucial to the meaning of what is actually being implied.  In effect, these qualifications reverse the direction of implication, yielding <math>^{\backprime\backprime} \, q \Leftarrow W \, ^{\prime\prime}</math> as the best analogue for the sense of the production.
  
G  =  e  +  f  +  g  +  h,
+
One way to sum this up is to say that non-terminal symbols have the significance of hypotheses. The terminal strings form the empirical matter of a language, while the non-terminal symbols mark the patterns or the types of substrings that can be noticed in the profusion of data. If one observes a portion of a terminal string that falls into the pattern of the sentential form <math>W,\!</math> then it is an admissible hypothesis, according to the theory of the language that is constituted by the formal grammar, that this piece not only fits the type <math>q\!</math> but even comes to be generated under the auspices of the non-terminal symbol <math>^{\backprime\backprime} q ^{\prime\prime}.</math>
  
And so, by expanding effects, we get:
+
A moment's reflection on the issue of style, giving due consideration to the received array of stylistic choices, ought to inspire at least the question:  "Are these the only choices there are?"  In the present setting, there are abundant indications that other options, more differentiated varieties of description and more integrated ways of approaching individual languages, are likely to be conceivable, feasible, and ultimately even more viable.  If a suitably generic style, one that incorporates the full scope of logical combinations and operations, is broadly available, then it would no longer be necessary, or even apt, to argue in universal terms about which style is best, but more useful to investigate how we might adapt the local styles to the local requirements.  The medium of a generic style would yield a viable compromise between additive and multiplicative canons, and render the choice between parallel and serial a false alternative, at least, when expressed in the globally exclusive terms in which it is currently most commonly posed.
  
G  =  e:e  +  f:f  +  g:g  +  h:h
+
One set of indications comes from the study of machines, languages, and computation, especially the theories of their structures and relations. The forms of composition and decomposition that are generally known as ''parallel'' and ''serial'' are merely the extreme special cases, in variant directions of specialization, of a more generic form, usually called the ''cascade'' form of combination. This is a well-known fact in the theories that deal with automata and their associated formal languages, but its implications do not seem to be widely appreciated outside these fields. In particular, it dispels the need to choose one extreme or the other, since most of the natural cases are likely to exist somewhere in between.
  
  +  e:f  +  f:e  +  g:h  +  h:g
+
Another set of indications appears in algebra and category theory, where forms of composition and decomposition related to the cascade combination, namely, the ''semi-direct product'' and its special case, the ''wreath product'', are encountered at higher levels of generality than the cartesian products of sets or the direct products of spaces.
  
  +  e:g  +  f:h  +  g:e  +  h:f
+
In these domains of operation, one finds it necessary to consider also the ''co-product'' of sets and spaces, a construction that artificially creates a disjoint union of sets, that is, a union of spaces that are being treated as independent. It does this, in effect, by ''indexing'',
 +
''coloring'', or ''preparing'' the otherwise possibly overlapping domains that are being combined. What renders this a ''chimera'' or a ''hybrid'' form of combination is the fact that this indexing is tantamount to a cartesian product of a singleton set, namely, the conventional ''index'', ''color'', or ''affix'' in question, with the individual domain that is entering as a factor, a term, or a participant in the final result.
  
  +  e:h  +  f:g  +  g:f  +  h:e
+
One of the insights that arises out of Peirce's logical work is that the set operations of complementation, intersection, and union, along with the logical operations of negation, conjunction, and disjunction that operate in isomorphic tandem with them, are not as fundamental as they first appear. This is because all of them can be constructed from or derived from a smaller set of operations, in fact, taking the logical side of things, from either one of two ''sole sufficient'' operators, called ''amphecks'' by Peirce, ''strokes'' by those who re-discovered them later, and known in computer science as the NAND and the NNOR operators. For this reason, that is, by virtue of their precedence in the orders of construction and derivation, these operations have to be regarded as the simplest and the most primitive in principle, even if they are scarcely recognized as lying among the more familiar elements of logic.
  
More on the pragmatic maxim as a representation principle later.
+
I am throwing together a wide variety of different operations into each of the bins labeled ''additive'' and ''multiplicative'', but it is easy to observe a natural organization and even some relations approaching isomorphisms among and between the members of each class.
  
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
+
The relation between logical disjunction and set-theoretic union and the relation between logical conjunction and set-theoretic intersection ought to be clear enough for the purposes of the immediately present context.  In any case, all of these relations are scheduled to receive a thorough examination in a subsequent discussion (Subsection 1.3.10.13).  But the relation of a set-theoretic union to a category-theoretic co-product and the relation of a set-theoretic intersection to a syntactic concatenation deserve a closer look at this point.
  
Note 10
+
The effect of a co-product as a ''disjointed union'', in other words, its creation of an object tantamount to a disjoint union of sets even if some of those sets intersect non-trivially and even if some of them are identical ''in reality'', can be achieved in several ways.  The most usual conception is that of making a ''separate copy'', for each part of the intended co-product, of the set that is intended to go there.  Often one thinks of the set that is assigned to a particular part of the co-product as being distinguished by a particular ''color'', in other words, by the attachment of a distinct ''index'', ''label'', or ''tag'', being a marker that is inherited by and passed on to every element of the set in that part.  A concrete image of this construction can be achieved by imagining that each set and each element of each set is placed in an ordered pair with the sign of its color, index, label, or tag.  One describes this as the ''injection'' of each set into the corresponding ''part'' of the co-product.
  
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
+
For example, given the sets <math>P\!</math> and <math>Q,\!</math> overlapping or not, one can define the ''indexed'' or ''marked'' sets <math>P_{[1]}\!</math> and <math>Q_{[2]},\!</math> amounting to the copy of <math>P\!</math> into the first part of the co-product and the copy of <math>Q\!</math> into the second part of the co-product, in the following manner:
  
| Consider what effects that might conceivably have
+
{| align="center" cellpadding="8" width="90%"
| practical bearings you conceive the objects of your
 
| conception to have.  Then, your conception of those
 
| effects is the whole of your conception of the object.
 
 
|
 
|
| Charles Sanders Peirce, "The Maxim of Pragmatism", CP 5.438.
+
<math>\begin{array}{lllll}
 
+
P_{[1]} & = & (P, 1) & = & \{ (x, 1) : x \in P \}, \\
The genealogy of this conception of pragmatic representation is very intricate.
+
Q_{[2]} & = & (Q, 2) & = & \{ (x, 2) : x \in Q \}. \\
I will delineate some details that I presently fancy I remember clearly enough,
+
\end{array}</math>
subject to later correction.  Without checking historical accounts, I will not
+
|}
be able to pin down anything like a real chronology, but most of these notions
 
were standard furnishings of the 19th Century mathematical study, and only the
 
last few items date as late as the 1920's.
 
  
The idea about the regular representations of a group is universally known
+
Using the coproduct operator (<math>\textstyle\coprod</math>) for this construction, the ''sum'', the ''coproduct'', or the ''disjointed union'' of <math>P\!</math> and <math>Q\!</math> in that order can be represented as the ordinary union of <math>P_{[1]}\!</math> and <math>Q_{[2]}.\!</math>
as "Cayley's Theorem", usually in the form:  "Every group is isomorphic to
 
a subgroup of Aut(S), the group of automorphisms of an appropriate set S".
 
There is a considerable generalization of these regular representations to
 
a broad class of relational algebraic systems in Peirce's earliest papers.
 
The crux of the whole idea is this:
 
  
| Consider the effects of the symbol, whose meaning you wish to investigate,
+
{| align="center" cellpadding="8" width="90%"
| as they play out on "all" of the different stages of context on which you
 
| can imagine that symbol playing a role.
 
 
 
This idea of contextual definition is basically the same as Jeremy Bentham's
 
notion of "paraphrasis", a "method of accounting for fictions by explaining
 
various purported terms away" (Quine, in Van Heijenoort, page 216).  Today
 
we'd call these constructions "term models".  This, again, is the big idea
 
behind Schönfinkel's combinators {S, K, I}, and hence of lambda calculus,
 
and I reckon you know where that leads.
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
Note 11
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
| Consider what effects that might 'conceivably'
 
| have practical bearings you 'conceive' the
 
| objects of your 'conception' to have.  Then,
 
| your 'conception' of those effects is the
 
| whole of your 'conception' of the object.
 
 
|
 
|
| Charles Sanders Peirce,
+
<math>\begin{array}{lll}
| "Maxim of Pragmaticism", CP 5.438.
+
P \coprod Q & = & P_{[1]} \cup Q_{[2]}. \\
 +
\end{array}</math>
 +
|}
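The tagging construction just defined is easy to check concretely. The following Python sketch builds the marked copies and takes their ordinary union; the particular set values and the names <code>P1</code>, <code>Q2</code>, <code>coproduct</code> are assumptions chosen for the illustration, not notation from the text.

```python
# Sketch of the coproduct as a "disjointed union": tag each element of P
# with the index 1 and each element of Q with the index 2, then take the
# plain union of the marked copies.  Sets here are illustrative choices.
P = {'a', 'b', 'c'}
Q = {'b', 'c', 'd'}          # overlaps P non-trivially

P1 = {(x, 1) for x in P}     # P injected into the first place
Q2 = {(x, 2) for x in Q}     # Q injected into the second place

coproduct = P1 | Q2          # ordinary union of the marked copies
assert len(coproduct) == len(P) + len(Q)   # no collisions despite overlap
```

Even though <code>P</code> and <code>Q</code> overlap, the marked copies cannot collide, which is the whole point of the indexing.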
  
Continuing to draw on the reduced example of group representations,
+
The concatenation <math>\mathfrak{L}_1 \cdot \mathfrak{L}_2</math> of the formal languages <math>\mathfrak{L}_1\!</math> and <math>\mathfrak{L}_2\!</math> is just the cartesian product of sets <math>\mathfrak{L}_1 \times \mathfrak{L}_2</math> without the extra <math>\times\!</math>'s, but the relation of cartesian products to set-theoretic intersections and thus to logical conjunctions is far from being clear.  One way of seeing a type of relation is to focus on the information that is needed to specify each construction, and thus to reflect on the signs that are used to carry this information.  As a first approach to the topic of information, according to a strategy that seeks to be as elementary and as informal as possible, I introduce the following set of ideas, intended to be taken in a very provisional way.
I would like to draw out a few of the finer points and problems of
 
regarding the maxim of pragmatism as a principle of representation.
 
  
Let us revisit the example of an abstract group that we had before:
+
A ''stricture'' is a specification of a certain set in a certain place, relative to a number of other sets, yet to be specified.  It is assumed that one knows enough to tell if two strictures are equivalent as pieces of information, but any more determinate indications, like names for the places that are mentioned in the stricture, or bounds on the number of places that are involved, are regarded as being extraneous impositions, outside the proper concern of the definition, no matter how convenient they are found to be for a particular discussion.  As a schematic form of illustration, a stricture can be pictured in the following shape:
  
Table 1.  Klein Four-Group V_4
+
:{| cellpadding="8"
o---------o---------o---------o---------o---------o
+
| <math>^{\backprime\backprime}</math>
|         %        |        |        |        |
+
| <math>\ldots \times X \times Q \times X \times \ldots</math>
|    ·    %    e    |    f    |    g    |    h    |
+
| <math>^{\prime\prime}</math>
|        %        |        |        |        |
+
|}
o=========o=========o=========o=========o=========o
 
|         %        |        |        |        |
 
|   e    %    e    |    f    |    g    |    h    |
 
|         %        |        |        |        |
 
o---------o---------o---------o---------o---------o
 
|        %        |        |        |        |
 
|   f    %    f    |    e    |    h    |    g    |
 
|        %        |        |        |        |
 
o---------o---------o---------o---------o---------o
 
|        %        |        |        |        |
 
|    g    %    g    |    h    |    e    |    f    |
 
|        %        |        |        |        |
 
o---------o---------o---------o---------o---------o
 
|        %        |        |        |        |
 
|    h    %    h    |    g    |    f    |    e    |
 
|        %        |        |        |        |
 
o---------o---------o---------o---------o---------o
 
  
I presented the regular post-representation
+
A ''strait'' is the object that is specified by a stricture, in effect, a certain set in a certain place of an otherwise yet to be specified relation.  Somewhat sketchily, the strait that corresponds to the stricture just given can be pictured in the following shape:
of the four-group V_4 in the following form:
 
  
Reading "+" as a logical disjunction:
+
:{| cellpadding="8"
 
+
| &nbsp;
  G  =  e  +  f  +  g  +  h
+
| <math>\ldots \times X \times Q \times X \times \ldots</math>
 
+
| &nbsp;
And so, by expanding effects, we get:
+
|}
 
 
  G  =  e:e  +  f:f  +  g:g  +  h:h
 
 
 
      +  e:f  +  f:e  +  g:h  +  h:g
 
 
 
      +  e:g  +  f:h  +  g:e  +  h:f
 
 
 
      +  e:h  +  f:g  +  g:f  +  h:e
 
 
 
This presents the group in one big bunch,
 
and there are occasions when one regards
 
it this way, but that is not the typical
 
form of presentation that we'd encounter.
 
More likely, the story would go a little
 
bit like this:
 
 
 
I cannot remember any of my math teachers
 
ever invoking the pragmatic maxim by name,
 
but it would be a very regular occurrence
 
for such mentors and tutors to set up the
 
subject in this wise:  Suppose you forget
 
what a given abstract group element means,
 
that is, in effect, 'what it is'.  Then a
 
sure way to jog your sense of 'what it is'
 
is to build a regular representation from
 
the formal materials that are necessarily
 
left lying about on that abstraction site.
 
 
 
Working through the construction for each
 
one of the four group elements, we arrive
 
at the following exegeses of their senses,
 
giving their regular post-representations:
 
 
 
  e  =  e:e  +  f:f  +  g:g  +  h:h
 
 
 
  f  =  e:f  +  f:e  +  g:h  +  h:g
 
 
 
  g  =  e:g  +  f:h  +  g:e  +  h:f
 
 
 
  h  =  e:h  +  f:g  +  g:f  +  h:e
 
 
 
So if somebody asks you, say, "What is g?",
 
you can say, "I don't know for certain but
 
in practice its effects go a bit like this:
 
Converting e to g, f to h, g to e, h to f".
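The recipe of building a regular representation from the formal materials left lying about on the abstraction site can be sketched in Python. Here the operation table above is transcribed into a dictionary; the names <code>op</code> and <code>post_rep</code> are illustrative choices for this sketch, not anything fixed by the text.

```python
# A sketch, not the text's own notation: the Klein four-group V_4 as a
# Python dict transcribing the operation table above, op[(x, y)] = x.y.
elems = ['e', 'f', 'g', 'h']
table = [
    ['e', 'f', 'g', 'h'],   # row e
    ['f', 'e', 'h', 'g'],   # row f
    ['g', 'h', 'e', 'f'],   # row g
    ['h', 'g', 'f', 'e'],   # row h
]
op = {}
for i, x in enumerate(elems):
    for j, y in enumerate(elems):
        op[(x, y)] = table[i][j]

def post_rep(y):
    """Regular post-representation of y: the aggregate of pairs x : x.y."""
    return [(x, op[(x, y)]) for x in elems]

print(post_rep('g'))   # [('e', 'g'), ('f', 'h'), ('g', 'e'), ('h', 'f')]
```

The printed pairs read off exactly the aggregate g = e:g + f:h + g:e + h:f given above.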
 
 
 
I will have to check this out later on, but my impression is
 
that Peirce tended to lean toward the other brand of regular,
 
the "second", the "left", or the "ante-representation" of the
 
groups that he treated in his earliest manuscripts and papers.
 
I believe that this was because he thought of the actions on
 
the pattern of dyadic relative terms like the "aftermath of".
 
 
 
Working through this alternative for each
 
one of the four group elements, we arrive
 
at the following exegeses of their senses,
 
giving their regular ante-representations:
 
 
 
  e  =  e:e  +  f:f  +  g:g  +  h:h
 
 
 
  f  =  f:e  +  e:f  +  h:g  +  g:h
 
 
 
  g  =  g:e  +  h:f  +  e:g  +  f:h
 
 
 
  h  =  h:e  +  g:f  +  f:g  +  e:h
 
 
 
Your paraphrastic interpretation of what this all
 
means would come out precisely the same as before.
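That sameness can be checked mechanically, at least under one straightforward reading of the two constructions as sets of &lt;input : output&gt; pairs. The sketch below, whose names and conventions are assumptions made for the illustration, computes both for V_4 and finds them equal, as one would expect for a commutative group in which every element is its own inverse.

```python
# Sketch: for V_4 the post- and ante-representations determine the same
# sets of pairs, since the group is commutative.  Conventions here are
# illustrative: post reads x : x.y, ante reads x : y.x.
elems = ['e', 'f', 'g', 'h']
row = {'e': ['e', 'f', 'g', 'h'], 'f': ['f', 'e', 'h', 'g'],
       'g': ['g', 'h', 'e', 'f'], 'h': ['h', 'g', 'f', 'e']}
op = {(x, y): row[x][elems.index(y)] for x in elems for y in elems}

post = {y: {(x, op[(x, y)]) for x in elems} for y in elems}   # x : x.y
ante = {y: {(x, op[(y, x)]) for x in elems} for y in elems}   # x : y.x

assert post == ante   # V_4 is abelian, so the two representations agree
```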
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
Note 12
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
Erratum
 
 
 
Oops!  I think that I have just confounded two entirely different issues:
 
1.  The substantial difference between right and left regular representations.
 
2.  The inessential difference between two conventions of presenting matrices.
 
I will sort this out and correct it later, as need be.
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
Note 13
 
  
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
+
In this picture <math>Q\!</math> is a certain set and <math>X\!</math> is the universe of discourse that is relevant to a given discussion.  Since a stricture does not, by itself, contain a sufficient amount of information to specify the number of sets that it intends to set in place, or even to specify the absolute location of the set that it does set in place, it appears to place an unspecified number of unspecified sets in a vague and uncertain strait.  Taken out of its interpretive context, the residual information that a stricture can convey makes all of the following potentially equivalent as strictures:
  
| Consider what effects that might 'conceivably'
+
{| align="center" cellpadding="8" width="90%"
| have practical bearings you 'conceive' the
 
| objects of your 'conception' to have.  Then,
 
| your 'conception' of those effects is the
 
| whole of your 'conception' of the object.
 
 
|
 
|
| Charles Sanders Peirce,
+
<math>\begin{array}{ccccccc}
| "Maxim of Pragmaticism", CP 5.438.
+
^{\backprime\backprime} Q ^{\prime\prime}
 +
& , &
 +
^{\backprime\backprime} X \times Q \times X ^{\prime\prime}
 +
& , &
 +
^{\backprime\backprime} X \times X \times Q \times X \times X ^{\prime\prime}
 +
& , &
 +
\ldots
 +
\\
 +
\end{array}</math>
 +
|}
  
Let me return to Peirce's early papers on the algebra of relatives
+
With respect to what these strictures specify, this leaves all of the following equivalent as straits:
to pick up the conventions that he used there, and then rewrite my
 
account of regular representations in a way that conforms to those.
 
  
Peirce expresses the action of an "elementary dual relative" like so:
+
{| align="center" cellpadding="8" width="90%"
 
 
| [Let] A:B be taken to denote
 
| the elementary relative which
 
| multiplied into B gives A.
 
 
|
 
|
| Peirce, 'Collected Papers', CP 3.123.
+
<math>\begin{array}{ccccccc}
 
+
Q
And though he is well aware that it is not at all necessary to arrange
+
& = &
elementary relatives into arrays, matrices, or tables, when he does so
+
X \times Q \times X
he tends to prefer organizing dyadic relations in the following manner:
+
& = &
 
+
X \times X \times Q \times X \times X
|  A:A  A:B  A:C  |
+
& = &
|                  |
+
\ldots
|  B:A  B:B  B:C  |
+
\\
|                  |
+
\end{array}</math>
| C:A  C:B  C:C  |
+
|}
  
That conforms to the way that the last school of thought
+
Within the framework of a particular discussion, it is customary to set a bound on the number of places and to limit the variety of sets that are regarded as being under active consideration, and it is also convenient to index the places of the indicated relations, and of their encompassing cartesian products, in some fixed way.  But the whole idea of a stricture is to specify a strait that is capable of extending through and beyond any fixed frame of discussion.  In other words, a stricture is conceived to constrain a strait at a certain point, and then to leave it literally embedded, if tacitly expressed, in a yet to be fully specified relation, one that involves an unspecified number of unspecified domains.
I matriculated into stipulated that we tabulate material:
 
  
| e_11 e_12  e_13  |
+
A quantity of information is a measure of constraint. In this respect, a set of comparable strictures is ordered on account of the information that each one conveys, and a system of comparable straits is ordered in accord with the amount of information that it takes to pin each one of them down. Strictures that are more constraining and straits that are more constrained are placed at higher levels of information than those that are less so, and entities that involve more information are said to have a greater ''complexity'' in comparison with those entities that involve less information, which are said to have a greater ''simplicity''.
|                    |
 
|  e_21  e_22  e_23  |
 
|                    |
 
|  e_31  e_32  e_33  |
 
  
So, for example, let us suppose that we have the small universe {A, B, C},
+
In order to create a concrete example, let me now institute a frame of discussion where the number of places in a relation is bounded at two, and where the variety of sets under active consideration is limited to the typical subsets <math>P\!</math> and <math>Q\!</math> of a universe <math>X.\!</math>  Under these conditions, one can use the following sorts of expression as schematic strictures:
and the 2-adic relation m = "mover of" that is represented by this matrix:
 
  
m  =
+
{| align="center" cellpadding="8" width="90%"
 
 
|  m_AA (A:A)  m_AB (A:B)  m_AC (A:C)  |
 
|                                        |
 
|  m_BA (B:A)  m_BB (B:B)  m_BC (B:C)  |
 
|                                        |
 
|  m_CA (C:A)  m_CB (C:B)  m_CC (C:C)  |
 
 
 
Also, let m be such that
 
A is a mover of A and B,
 
B is a mover of B and C,
 
C is a mover of C and A.
 
 
 
In sum:
 
 
 
m  =
 
 
 
|  1 · (A:A)  1 · (A:B)  0 · (A:C)  |
 
|                                    |
 
|  0 · (B:A)  1 · (B:B)  1 · (B:C)  |
 
|                                    |
 
|  1 · (C:A)  0 · (C:B)  1 · (C:C)  |
 
 
 
For the sake of orientation and motivation,
 
compare with Peirce's notation in CP 3.329.
 
 
 
I think that will serve to fix notation
 
and set up the remainder of the account.
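As a quick cross-check of the notation, the relation m can be transcribed either as a set of elementary relatives (ordered pairs) or as its 0-1 coefficient matrix, and the one form can be computed from the other. The following Python sketch does just that; the variable names are choices made for the illustration.

```python
# Sketch of the relation m = "mover of" over the universe {A, B, C} in two
# equivalent forms: a set of ordered pairs and a 0-1 coefficient matrix.
universe = ['A', 'B', 'C']
m_pairs = {('A', 'A'), ('A', 'B'),    # A is a mover of A and B
           ('B', 'B'), ('B', 'C'),    # B is a mover of B and C
           ('C', 'C'), ('C', 'A')}    # C is a mover of C and A

# Coefficient matrix: m[i][j] = 1 iff (universe[i], universe[j]) is in m.
m = [[1 if (x, y) in m_pairs else 0 for y in universe] for x in universe]
print(m)   # [[1, 1, 0], [0, 1, 1], [1, 0, 1]]
```

The printed rows agree with the coefficients filled in above.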
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
Note 14
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
| Consider what effects that might 'conceivably'
 
| have practical bearings you 'conceive' the
 
| objects of your 'conception' to have.  Then,
 
| your 'conception' of those effects is the
 
| whole of your 'conception' of the object.
 
 
|
 
|
| Charles Sanders Peirce,
+
<math>\begin{matrix}
| "Maxim of Pragmaticism", CP 5.438.
+
^{\backprime\backprime} X ^{\prime\prime} &
 
+
^{\backprime\backprime} P ^{\prime\prime} &
I am beginning to see how I got confused.
+
^{\backprime\backprime} Q ^{\prime\prime} \\
It is common in algebra to switch around
+
\\
between different conventions of display,
+
^{\backprime\backprime} X \times X ^{\prime\prime} &
as the momentary fancy happens to strike,
+
^{\backprime\backprime} X \times P ^{\prime\prime} &
and I see that Peirce is no different in
+
^{\backprime\backprime} X \times Q ^{\prime\prime} \\
this sort of shiftiness than anyone else.
+
\\
A changeover appears to occur especially
+
^{\backprime\backprime} P \times X ^{\prime\prime} &
whenever he shifts from logical contexts
+
^{\backprime\backprime} P \times P ^{\prime\prime} &
to algebraic contexts of application.
+
^{\backprime\backprime} P \times Q ^{\prime\prime} \\
 +
\\
 +
^{\backprime\backprime} Q \times X ^{\prime\prime} &
 +
^{\backprime\backprime} Q \times P ^{\prime\prime} &
 +
^{\backprime\backprime} Q \times Q ^{\prime\prime} \\
 +
\end{matrix}</math>
 +
|}
  
In the paper "On the Relative Forms of Quaternions" (CP 3.323),
+
These strictures and their corresponding straits are stratified according to their amounts of information, or their levels of constraint, as follows:
we observe Peirce providing the following sorts of explanation:
 
  
| If X, Y, Z denote the three rectangular components of a vector, and W denote
+
{| align="center" cellpadding="8" width="90%"
| numerical unity (or a fourth rectangular component, involving space of four
 
| dimensions), and (Y:Z) denote the operation of converting the Y component
 
| of a vector into its Z component, then
 
 
|
 
|
|    1  =  (W:W) + (X:X) + (Y:Y) + (Z:Z)
+
<math>\begin{array}{lcccc}
|
+
\text{High:}
|    i  =  (X:W) - (W:X) - (Y:Z) + (Z:Y)
+
& ^{\backprime\backprime} P \times P ^{\prime\prime}
|
+
& ^{\backprime\backprime} P \times Q ^{\prime\prime}
|    j  =  (Y:W) - (W:Y) - (Z:X) + (X:Z)
+
& ^{\backprime\backprime} Q \times P ^{\prime\prime}
|
+
& ^{\backprime\backprime} Q \times Q ^{\prime\prime}
|    k  =  (Z:W) - (W:Z) - (X:Y) + (Y:X)
+
\\
|
+
\\
| In the language of logic (Y:Z) is a relative term whose relate is
+
\text{Med:}
| a Y component, and whose correlate is a Z component.  The law of
+
& ^{\backprime\backprime} P ^{\prime\prime}
| multiplication is plainly (Y:Z)(Z:X) = (Y:X), (Y:Z)(X:W) = 0,
+
& ^{\backprime\backprime} X \times P ^{\prime\prime}
| and the application of these rules to the above values of
+
& ^{\backprime\backprime} P \times X ^{\prime\prime}
| 1, i, j, k gives the quaternion relations
+
\\
|
+
\\
|    i^2  =  j^2  =  k^2  =  -1,
+
\text{Med:}
|
+
& ^{\backprime\backprime} Q ^{\prime\prime}
|    ijk  =  -1,
+
& ^{\backprime\backprime} X \times Q ^{\prime\prime}
|
+
& ^{\backprime\backprime} Q \times X ^{\prime\prime}
|    etc.
+
\\
|
+
\\
| The symbol a(Y:Z) denotes the changing of Y to Z and the
+
\text{Low:}
| multiplication of the result by 'a'.  If the relatives be
+
& ^{\backprime\backprime} X ^{\prime\prime}
| arranged in a block
+
& ^{\backprime\backprime} X \times X ^{\prime\prime}
|
+
\\
|    W:W    W:X     W:Y    W:Z
+
\end{array}</math>
|
+
|}
|    X:W    X:X    X:Y    X:Z
 
|
 
|    Y:W    Y:X    Y:Y    Y:Z
 
|
 
|    Z:W    Z:X    Z:Y    Z:Z
 
|
 
| then the quaternion w + xi + yj + zk
 
| is represented by the matrix of numbers
 
|
 
|    w      -x      -y      -z
 
|
 
|    x        w      -z      y
 
|
 
|    y        z      w      -x
 
|
 
|    z      -y      x      w
 
|
 
| The multiplication of such matrices follows the same laws as the
 
| multiplication of quaternions.  The determinant of the matrix =
 
| the fourth power of the tensor of the quaternion.
 
|
 
| The imaginary x + y(-1)^(1/2) may likewise be represented by the matrix
 
|
 
|      x      y
 
|
 
|    -y      x
 
|
 
| and the determinant of the matrix = the square of the modulus.
 
|
 
| Charles Sanders Peirce, 'Collected Papers', CP 3.323.
 
|'Johns Hopkins University Circulars', No. 13, p. 179, 1882.
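Peirce's claim that the rules of relative multiplication recover the quaternion relations can be tested by brute force, reading each elementary relative (A:B) as the 4x4 matrix unit with a 1 in row A and column B. The Python sketch below, in which the basis order and helper names are choices made here rather than Peirce's, verifies that the aggregate given for i squares to -1.

```python
# Sketch: read (A:B) as a 4x4 matrix unit over the basis (W, X, Y, Z) and
# check that i = (X:W) - (W:X) - (Y:Z) + (Z:Y) satisfies i^2 = -1.
basis = ['W', 'X', 'Y', 'Z']
idx = {b: n for n, b in enumerate(basis)}

def unit(a, b):
    """The elementary relative A:B as a matrix unit."""
    m = [[0] * 4 for _ in range(4)]
    m[idx[a]][idx[b]] = 1
    return m

def add(a, b, s=1):
    """Entrywise a + s*b."""
    return [[a[r][c] + s * b[r][c] for c in range(4)] for r in range(4)]

def mul(a, b):
    """Ordinary matrix multiplication."""
    return [[sum(a[r][k] * b[k][c] for k in range(4)) for c in range(4)]
            for r in range(4)]

i = add(add(add(unit('X', 'W'), unit('W', 'X'), -1),
            unit('Y', 'Z'), -1),
        unit('Z', 'Y'))
minus_one = [[-1 if r == c else 0 for c in range(4)] for r in range(4)]
assert mul(i, i) == minus_one   # i^2 = -1, as stated
```

The same brute-force check extends to j, k, and the product ijk in the obvious way.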
 
  
This way of talking is the mark of a person who opts
+
Within this framework, the more complex strait <math>P \times Q</math> can be expressed
to multiply his matrices "on the right", as they say.
+
in terms of the simpler straits, <math>P \times X</math> and <math>X \times Q.</math>  More specifically, it lends itself to being analyzed as their intersection, in the following way:
Yet Peirce still continues to call the first element
 
of the ordered pair (I:J) its "relate" while calling
 
the second element of the pair (I:J) its "correlate".
 
That doesn't comport very well, so far as I can tell,
 
with his customary reading of relative terms, suited
 
more to the multiplication of matrices "on the left".
 
  
So I still have a few wrinkles to iron out before
+
{| align="center" cellpadding="8" width="90%"
I can give this story a smooth enough consistency.
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
Note 15
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
| Consider what effects that might 'conceivably'
 
| have practical bearings you 'conceive' the
 
| objects of your 'conception' to have.  Then,
 
| your 'conception' of those effects is the
 
| whole of your 'conception' of the object.
 
 
|
 
|
| Charles Sanders Peirce,
+
<math>\begin{array}{lllll}
| "Maxim of Pragmaticism", CP 5.438.
+
P \times Q & = & P \times X & \cap & X \times Q. \\
 +
\end{array}</math>
 +
|}
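The intersection formula just displayed is easy to confirm on a small finite case. In the Python sketch below, the particular universe and subsets are assumptions chosen for the demonstration.

```python
# Sketch: over a small universe X, the cartesian product P x Q equals the
# intersection of the two "cylinders" P x X and X x Q.
X = {1, 2, 3, 4}
P = {1, 2}
Q = {3, 4}

PxX = {(p, x) for p in P for x in X}    # constrains the first place only
XxQ = {(x, q) for x in X for q in Q}    # constrains the second place only
PxQ = {(p, q) for p in P for q in Q}

assert PxQ == PxX & XxQ    # P x Q  ==  (P x X) & (X x Q)
```

Each cylinder constrains one place of the pair and leaves the other free, so their intersection imposes exactly the two constraints that define the product.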
  
I have been planning for quite some time now to make my return to Peirce's
+
From here it is easy to see the relation of concatenation, by virtue of these types of intersection, to the logical conjunction of propositions. The cartesian product <math>P \times Q</math> is described by a conjunction of propositions, namely, <math>P_{[1]} \land Q_{[2]},</math> subject to the following interpretation:
skyshaking "Description of a Notation for the Logic of Relatives" (1870),
 
and I can see that it's just about time to get down tuit, so let this
 
current bit of rambling inquiry function as the preamble to that.
 
All we need at the present, though, is a modus vivendi/operandi
 
for telling what is substantial from what is inessential in
 
the brook between symbolic conceits and dramatic actions
 
that we find afforded by means of the pragmatic maxim.
 
  
Back to our "subinstance", the example in support of our first example.
+
# <math>P_{[1]}\!</math> asserts that there is an element from the set <math>P\!</math> in the first place of the product.
I will now reconstruct it in a way that may prove to be less confusing.
+
# <math>Q_{[2]}\!</math> asserts that there is an element from the set <math>Q\!</math> in the second place of the product.
  
Let us make up the model universe $1$ = A + B + C and the 2-adic relation
+
The integration of these two pieces of information can be taken in that measure to specify a yet to be fully determined relation.
n = "noder of", as when "X is a data record that contains a pointer to Y".
 
That interpretation is not important, it's just for the sake of intuition.
 
In general terms, the 2-adic relation n can be represented by this matrix:
 
  
n  =
+
In a corresponding fashion at the level of the elements, the ordered pair <math>(p, q)\!</math> is described by a conjunction of propositions, namely, <math>p_{[1]} \land q_{[2]},</math> subject to the following interpretation:
  
|  n_AA (A:A)  n_AB (A:B)  n_AC (A:C)  |
+
# <math>p_{[1]}\!</math> says that <math>p\!</math> is in the first place of the product element under construction.
|                                        |
+
# <math>q_{[2]}\!</math> says that <math>q\!</math> is in the second place of the product element under construction.
|  n_BA (B:A)  n_BB (B:B)  n_BC (B:C)  |
 
|                                        |
 
|  n_CA (C:A)  n_CB (C:B)  n_CC (C:C)  |
 
  
Also, let n be such that
+
Notice that, in construing the cartesian product of the sets <math>P\!</math> and <math>Q\!</math> or the concatenation of the languages <math>\mathfrak{L}_1\!</math> and <math>\mathfrak{L}_2\!</math> in this way, one shifts the level of the active construction from the tupling of the elements in <math>P\!</math> and <math>Q\!</math> or the concatenation of the strings that are internal to the languages <math>\mathfrak{L}_1\!</math> and <math>\mathfrak{L}_2\!</math> to the concatenation of the external signs that it takes to indicate these sets or these languages, in other words, passing to a conjunction of indexed propositions, <math>P_{[1]}\!</math> and <math>Q_{[2]},\!</math> or to a conjunction of assertions, <math>(\mathfrak{L}_1)_{[1]}</math> and <math>(\mathfrak{L}_2)_{[2]},</math> that marks the sets or the languages in question for insertion in the indicated places of a product set or a product language, respectively.  In effect, the subscripting by the indices <math>^{\backprime\backprime} [1] ^{\prime\prime}</math> and <math>^{\backprime\backprime} [2] ^{\prime\prime}</math> can be recognized as a special case of concatenation, albeit through the posting of editorial remarks from an external ''mark-up'' language.
A is a noder of A and B,
 
B is a noder of B and C,
 
C is a noder of C and A.
 
  
Filling in the instantial values of the "coefficients" n_ij,
+
In order to systematize the relations that strictures and straits placed at higher levels of complexity, constraint, information, and organization have with those that are placed at the associated lower levels, I introduce the following pair of definitions:
as the indices i and j range over the universe of discourse:
 
  
n =
+
The <math>j^\text{th}\!</math> ''excerpt'' of a stricture of the form <math>^{\backprime\backprime} \, S_1 \times \ldots \times S_k \, ^{\prime\prime},</math> regarded within a frame of discussion where the number of places is limited to <math>k,\!</math> is the stricture of the form <math>^{\backprime\backprime} \, X \times \ldots \times S_j \times \ldots \times X \, ^{\prime\prime}.</math> In the proper context, this can be written more succinctly as the stricture <math>^{\backprime\backprime} \, (S_j)_{[j]} \, ^{\prime\prime},</math> an assertion that places the <math>j^\text{th}\!</math> set in the <math>j^\text{th}\!</math> place of the product.
  
| 1 · (A:A)   1 · (A:B)  0 · (A:C)  |
+
The <math>j^\text{th}\!</math> ''extract'' of a strait of the form <math>S_1 \times \ldots \times S_k,\!</math> constrained to a frame of discussion where the number of places is restricted to <math>k,\!</math> is the strait of the form <math>X \times \ldots \times S_j \times \ldots \times X.</math> In the appropriate context, this can be denoted more succinctly by the stricture <math>^{\backprime\backprime} \, (S_j)_{[j]} \, ^{\prime\prime},</math> an assertion that places the <math>j^\text{th}\!</math> set in the <math>j^\text{th}\!</math> place of the product.
|                                    |
 
|  0 · (B:A)  1 · (B:B)  1 · (B:C)  |
 
|                                    |
 
|  1 · (C:A)  0 · (C:B)  1 · (C:C)  |
 
  
In Peirce's time, and even in some circles of mathematics today,
+
In these terms, a stricture of the form <math>^{\backprime\backprime} \, S_1 \times \ldots \times S_k \, ^{\prime\prime}</math> can be expressed in terms of simpler strictures, to wit, as a conjunction of its <math>k\!</math> excerpts:
the information indicated by the elementary relatives (I:J), as
 
I, J range over the universe of discourse, would be referred to
 
as the "umbral elements" of the algebraic operation represented
 
by the matrix, though I seem to recall that Peirce preferred to
 
call these terms the "ingredients".  When this ordered basis is
 
understood well enough, one will tend to drop any mention of it
 
from the matrix itself, leaving us nothing but these bare bones:
 
  
n  =
+
{| align="center" cellpadding="8" width="90%"
 
 
|  1  1  0  |
 
|           |
 
|  0  1  1  |
 
|          |
 
|  1  0  1  |
 
 
 
However the specification may come to be written, this
 
is all just convenient schematics for stipulating that:
 
 
 
n  =  A:A  +  B:B  +  C:C  +  A:B  +  B:C  +  C:A
 
 
 
Recognizing !1! = A:A + B:B + C:C to be the identity transformation,
 
the 2-adic relation n = "noder of" may be represented by an element
 
!1! + A:B + B:C + C:A of the so-called "group ring", all of which
 
just makes this element a special sort of linear transformation.
 
 
 
Up to this point, we are still reading the elementary relatives of
 
the form I:J in the way that Peirce reads them in logical contexts:
 
I is the relate, J is the correlate, and in our current example we
 
read I:J, or more exactly, n_ij = 1, to say that I is a noder of J.
 
This is the mode of reading that we call "multiplying on the left".
 
 
 
In the algebraic, permutational, or transformational contexts of
 
application, however, Peirce converts to the alternative mode of
 
reading:  I is still called the relate and J the correlate, but

the elementary relative I:J now means that I gets changed into J.
 
In this scheme of reading, the transformation A:B + B:C + C:A is
 
a permutation of the aggregate $1$ = A + B + C, or what we would
 
now call the set {A, B, C}, in particular, it is the permutation
 
that is otherwise notated as:
 
 
 
( A B C )
 
<      >
 
( B C A )
 
 
 
This is consistent with the convention that Peirce uses in
 
the paper "On a Class of Multiple Algebras" (CP 3.324-327).
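Both readings of the relation n can be checked mechanically. Here is a small sketch, purely for illustration (the dictionary encoding is my own convenience, not anything in Peirce):

```python
# Sketch: the 2-adic relation n = "noder of" over {A, B, C}, written
# as a 0-1 matrix, splits into an identity part plus the cyclic
# permutation A -> B -> C -> A, as described above.

X = ['A', 'B', 'C']
n = {('A','A'), ('A','B'), ('B','B'), ('B','C'), ('C','C'), ('C','A')}

# Matrix form:  n_ij = 1  iff  (i, j) is in the relation.
matrix = [[1 if (i, j) in n else 0 for j in X] for i in X]
assert matrix == [[1, 1, 0],
                  [0, 1, 1],
                  [1, 0, 1]]

# Reading I:J as "I gets changed into J", the off-diagonal part of n
# is exactly the permutation (A B C):
perm = {i: j for (i, j) in n if i != j}
assert perm == {'A': 'B', 'B': 'C', 'C': 'A'}
```

The two assertions are just the matrix of bare bones and the permutational reading, respectively, recovered from the same set of elementary relatives.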
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
Note 16
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
| Consider what effects that might 'conceivably'
 
| have practical bearings you 'conceive' the
 
| objects of your 'conception' to have.  Then,
 
| your 'conception' of those effects is the
 
| whole of your 'conception' of the object.
 
 
|
 
|
| Charles Sanders Peirce,
+
<math>\begin{array}{lll}
| "Maxim of Pragmaticism", CP 5.438.
+
^{\backprime\backprime} \, S_1 \times \ldots \times S_k \, ^{\prime\prime}
 
+
& = &
We have been contemplating the virtues and the utilities of
+
^{\backprime\backprime} \, (S_1)_{[1]} \, ^{\prime\prime}
the pragmatic maxim as a hermeneutic heuristic, specifically,
+
\, \land \, \ldots \, \land \,
as a principle of interpretation that guides us in finding a
+
^{\backprime\backprime} \, (S_k)_{[k]} \, ^{\prime\prime}.
clarifying representation for a problematic corpus of symbols
+
\end{array}</math>
in terms of their actions on other symbols or their effects on
+
|}
the syntactic contexts in which we conceive to distribute them.
 
I started off considering the regular representations of groups
 
as constituting what appears to be one of the simplest possible
 
applications of this overall principle of representation.
 
  
There are a few problems of implementation that have to be worked out
+
In a similar vein, a strait of the form <math>S_1 \times \ldots \times S_k\!</math> can be expressed in terms of simpler straits, namely, as an intersection of its <math>k\!</math> extracts:
in practice, most of which are cleared up by keeping in mind which of
 
several possible conventions we have chosen to follow at a given time.
 
But there does appear to remain this rather more substantial question:
 
  
Are the effects we seek relates or correlates, or does it even matter?
+
{| align="center" cellpadding="8" width="90%"
 
 
I will have to leave that question as it is for now,
 
in hopes that a solution will evolve itself in time.
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
Note 17
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
+
<math>\begin{array}{lll}
| "Maxim of Pragmaticism", CP 5.438.
+
S_1 \times \ldots \times S_k & = & (S_1)_{[1]} \, \cap \, \ldots \, \cap \, (S_k)_{[k]}.
 +
\end{array}</math>
 +
|}
  
There are big reasons and little reasons for caring about this humble example.
+
There is a measure of ambiguity that remains in this formulation, but it is the best that I can do in the present informal context.
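The intersection identity for straits can be tested concretely for a small case. The sketch below takes <math>k = 2\!</math> inside a little frame <math>X;\!</math> the particular sets are illustrative assumptions only:

```python
# Sketch: for k = 2 within a frame X, the strait S_1 x S_2 equals the
# intersection of its extracts (S_1)_[1] = S_1 x X and (S_2)_[2] = X x S_2.
# The sets X, S1, S2 are arbitrary choices for illustration.

from itertools import product

X = {0, 1, 2}            # the frame of discussion
S1, S2 = {0, 1}, {1, 2}  # the sets placed in the two places

extract1 = set(product(S1, X))   # (S_1)_[1]: constrains place 1 only
extract2 = set(product(X, S2))   # (S_2)_[2]: constrains place 2 only

assert extract1 & extract2 == set(product(S1, S2))
```

Each extract constrains just one place and leaves the other free, so their intersection pins down both places at once, which is the product.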
The little reasons we find all under our feet.  One big reason I can now
 
quite blazonly enounce in the fashion of this not so subtle subtitle:
 
  
Obstacles to Applying the Pragmatic Maxim
+
===The Cactus Language : Mechanics===
  
No sooner do you get a good idea and try to apply it
+
{| align="center" cellpadding="0" cellspacing="0" width="90%"
than you find that a motley array of obstacles arise.
 
 
 
It seems as if I am constantly lamenting the fact these days that people,
 
and even admitted Peircean persons, do not in practice more consistently
 
apply the maxim of pragmatism to the purpose for which it is purportedly
 
intended by its author.  That would be the clarification of concepts, or
 
intellectual symbols, to the point where their inherent senses, or their
 
lacks thereof, would be rendered manifest to all and sundry interpreters.
 
 
 
There are big obstacles and little obstacles to applying the pragmatic maxim.
 
In good subgoaling fashion, I will merely mention a few of the bigger blocks,
 
as if in passing, and then get down to the devilish details that immediately
 
obstruct our way.
 
 
 
Obstacle 1.  People do not always read the instructions very carefully.
 
There is a tendency in readers of particular prior persuasions to blow
 
the problem all out of proportion, to think that the maxim is meant to
 
reveal the absolutely positive and the totally unique meaning of every
 
preconception to which they might deign or elect to apply it.  Reading
 
the maxim with an even minimal attention, you can see that it promises
 
no such finality of unindexed sense, but ties what you conceive to you.
 
I have lately come to wonder at the tenacity of this misinterpretation.
 
Perhaps people reckon that nothing less would be worth their attention.
 
I am not sure.  I can only say the achievement of more modest goals is
 
the sort of thing on which our daily life depends, and there can be no
 
final end to inquiry nor any ultimate community without a continuation
 
of life, and that means life on a day to day basis.  All of which only
 
brings me back to the point of persisting with local meantime examples,
 
because if we can't apply the maxim there, we can't apply it anywhere.
 
 
 
And now I need to go out of doors and weed my garden for a time ...
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
Note 18
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
+
<p>We are only now beginning to see how this works.  Clearly one of the mechanisms for picking a reality is the sociohistorical sense of what is important &mdash; which research program, with all its particularity of knowledge, seems most fundamental, most productive, most penetrating.  The very judgments which make us push narrowly forward simultaneously make us forget how little we know.  And when we look back at history, where the lesson is plain to find, we often fail to imagine ourselves in a parallel situation.  We ascribe the differences in world view to error, rather than to unexamined but consistent and internally justified choice.</p>
| "Maxim of Pragmaticism", CP 5.438.
+
|-
 +
| align="right" | &mdash; Herbert J. Bernstein, "Idols of Modern Science", [HJB, 38]
 +
|}
  
Obstacles to Applying the Pragmatic Maxim
+
In this Subsection, I discuss the ''mechanics'' of parsing the cactus language into the corresponding class of computational data structures.  This provides each sentence of the language with a translation into a computational form that articulates its syntactic structure and prepares it for automated modes of processing and evaluation.  For this purpose, it is necessary to describe the target data structures at a fairly high level of abstraction only, ignoring the details of address pointers and record structures and leaving the more operational aspects of implementation to the imagination of prospective programmers.  In this way, I can put off to another stage of elaboration and refinement the description of the program that constructs these pointers and operates on these graph-theoretic data structures.
  
Obstacle 2.  Applying the pragmatic maxim, even with a moderate aim, can be hard.
+
The structure of a ''painted cactus'', insofar as it presents itself to the visual imagination, can be described as follows. The overall structure, as given by its underlying graph, falls within the species of graph that is commonly known as a ''rooted cactus'', and the only novel feature that it adds to this is that each of its nodes can be ''painted'' with a finite sequence of ''paints'', chosen from a ''palette'' that is given by the parametric set <math>\{ \, ^{\backprime\backprime} \operatorname{~} ^{\prime\prime} \, \} \cup \mathfrak{P} = \{ m_1 \} \cup \{ p_1, \ldots, p_k \}.</math>
I think that my present example, deliberately impoverished as it is, affords us
 
with an embarrassing richness of evidence of just how complex the simple can be.
 
  
All the better reason for me to see if I can finish it up before moving on.
+
It is conceivable, from a purely graph-theoretical point of view, to have a class of cacti that are painted but not rooted, and so it is frequently necessary, for the sake of precision, to more exactly pinpoint the target species of graphical structure as a ''painted and rooted cactus'' (PARC).
  
Expressed most simply, the idea is to replace the question of "what it is",
+
A painted cactus, as a rooted graph, has a distinguished node that is called its ''root''.  By starting from the root and working recursively, the rest of its structure can be described in the following fashion.
which modest people know is far too difficult for them to answer right off,
 
with the question of "what it does", which most of us know a modicum about.
 
  
In the case of regular representations of groups we found
+
Each ''node'' of a PARC consists of a graphical ''point'' or ''vertex'' plus a finite sequence of ''attachments'', described in relative terms as the attachments ''at'' or ''to'' that node.  An empty sequence of attachments defines the ''empty node''. Otherwise, each attachment is one of three kinds:  a blank, a paint, or a type of PARC that is called a ''lobe''.
a non-plussing surplus of answers to sort our way through.
 
So let us track back one more time to see if we can learn
 
any lessons that might carry over to more realistic cases.
 
  
Here is the operation table of V_4 once again:
+
Each ''lobe'' of a PARC consists of a directed graphical ''cycle'' plus a finite sequence of ''accoutrements'', described in relative terms as the accoutrements ''of'' or ''on'' that lobe.  Recalling the circumstance that every lobe that comes under consideration comes already attached to a particular node, exactly one vertex of the corresponding cycle is the vertex that comes from that very node.  The remaining vertices of the cycle have their definitions filled out according to the accoutrements of the lobe in question.  An empty sequence of accoutrements is taken to be tantamount to a sequence that contains a single empty node as its unique accoutrement, and either one of these ways of approaching it can be regarded as defining a graphical structure that is called a ''needle'' or a ''terminal edge''.  Otherwise, each accoutrement of a lobe is itself an arbitrary PARC.
  
Table 1.  Klein Four-Group V_4
+
Although this definition of a lobe in terms of its intrinsic structural components is logically sufficient, it is also useful to characterize the structure of a lobe in comparative terms, that is, to view the structure that typifies a lobe in relation to the structures of other PARC's and to mark the inclusion of this special type within the general run of PARC's.  This approach to the question of types results in a form of description that appears to be a bit more analytic, at least, in mnemonic or prima facie terms, if not ultimately more revealing.  Working in this vein, a ''lobe'' can be characterized as a special type of PARC that is called an ''unpainted root plant'' (UR-plant).
o---------o---------o---------o---------o---------o
 
|        %        |        |        |        |
 
|    ·    %    e    |    f    |    g    |    h    |
 
|        %        |        |        |        |
 
o=========o=========o=========o=========o=========o
 
|        %        |        |        |        |
 
|    e    %    e    |    f    |    g    |    h    |
 
|        %        |        |        |        |
 
o---------o---------o---------o---------o---------o
 
|        %        |        |        |        |
 
|    f    %    f    |    e    |    h    |    g    |
 
|        %        |        |        |        |
 
o---------o---------o---------o---------o---------o
 
|        %        |        |        |        |
 
|    g    %    g    |    h    |    e    |    f    |
 
|        %        |        |        |        |
 
o---------o---------o---------o---------o---------o
 
|        %        |        |        |        |
 
|    h    %    h    |    g    |    f    |    e    |
 
|        %        |        |        |        |
 
o---------o---------o---------o---------o---------o
 
  
A group operation table is really just a device for
+
An ''UR-plant'' is a PARC of a simpler sort, at least, with respect to the recursive ordering of structures that is being followed here.  As a type, it is defined by the presence of two properties, that of being ''planted'' and that of having an ''unpainted root''. These are defined as follows:
recording a certain 3-adic relation, to be specific,
 
the set of triples of the form <x, y, z> satisfying
 
the equation x·y = z where · is the group operation.
 
  
In the case of V_4 = (G, ·), where G is the "underlying set"
+
# A PARC is ''planted'' if its list of attachments has just one PARC.
{e, f, g, h}, we have the 3-adic relation L(V_4) c G x G x G
+
# A PARC is ''UR'' if its list of attachments has no blanks or paints.
whose triples are listed below:
 
  
|  <e, e, e>
+
In short, an UR-planted PARC has a single PARC as its only attachment, and since this attachment is prevented from being a blank or a paint, the single attachment at its root has to be another sort of structure, that which we call a ''lobe''.
|  <e, f, f>
 
|  <e, g, g>
 
|  <e, h, h>
 
|
 
|  <f, e, f>
 
|  <f, f, e>
 
|  <f, g, h>
 
|  <f, h, g>
 
|
 
|  <g, e, g>
 
|  <g, f, h>
 
|  <g, g, e>
 
|  <g, h, f>
 
|
 
|  <h, e, h>
 
|  <h, f, g>
 
|  <h, g, f>
 
|  <h, h, e>
 
  
It is part of the definition of a group that the 3-adic
+
To express the description of a PARC in terms of its nodes, each node can be specified in the fashion of a functional expression, letting a citation of the generic function name "<math>\operatorname{Node}</math>" be followed by a list of arguments that enumerates the attachments of the node in question, and letting a citation of the generic function name "<math>\operatorname{Lobe}</math>" be followed by a list of arguments that details the accoutrements of the lobe in question.  Thus, one can write expressions of the following forms:
relation L c G^3 is actually a function L : G x G -> G.
 
It is from this functional perspective that we can see
 
an easy way to derive the two regular representations.
 
Since we have a function of the type L : G x G -> G,
 
we can define a couple of substitution operators:
 
  
1. Sub(x, <_, y>) puts any specified x into
+
{| align="center" cellpadding="4" width="90%"
    the empty slot of the rheme <_, y>, with
+
| <math>1.\!</math>
    the effect of producing the saturated
+
| <math>\operatorname{Node}^0</math>
    rheme <x, y> that evaluates to x·y.
+
| <math>=\!</math>
 +
| <math>\operatorname{Node}()</math>
 +
|-
 +
| &nbsp;
 +
| &nbsp;
 +
| <math>=\!</math>
 +
| a node with no attachments.
 +
|-
 +
| &nbsp;
 +
| <math>\operatorname{Node}_{j=1}^k C_j</math>
 +
| <math>=\!</math>
 +
| <math>\operatorname{Node} (C_1, \ldots, C_k)</math>
 +
|-
 +
| &nbsp;
 +
| &nbsp;
 +
| <math>=\!</math>
 +
| a node with the attachments <math>C_1, \ldots, C_k.</math>
 +
|-
 +
| <math>2.\!</math>
 +
| <math>\operatorname{Lobe}^0</math>
 +
| <math>=\!</math>
 +
| <math>\operatorname{Lobe}()</math>
 +
|-
 +
| &nbsp;
 +
| &nbsp;
 +
| <math>=\!</math>
 +
| a lobe with no accoutrements.
 +
|-
 +
| &nbsp;
 +
| <math>\operatorname{Lobe}_{j=1}^k C_j</math>
 +
| <math>=\!</math>
 +
| <math>\operatorname{Lobe} (C_1, \ldots, C_k)</math>
 +
|-
 +
| &nbsp;
 +
| &nbsp;
 +
| <math>=\!</math>
 +
| a lobe with the accoutrements <math>C_1, \ldots, C_k.</math>
 +
|}
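The Node and Lobe expressions above suggest a direct rendering as data structures. The record layout below is this writer's guess at one reasonable implementation, nothing more, with the understanding that attachments may be blanks, paints, or lobes, and accoutrements are themselves PARC's:

```python
# Sketch of PARC data structures, following the Node and Lobe
# operators in the text.  The class and field names are assumptions
# made for illustration; the text leaves the record layout open.

from dataclasses import dataclass, field
from typing import List, Union

# An attachment is a blank " ", a paint such as "p_1", or a Lobe.
Attachment = Union[str, 'Lobe']

@dataclass
class Node:
    attachments: List[Attachment] = field(default_factory=list)

@dataclass
class Lobe:
    accoutrements: List[Node] = field(default_factory=list)

empty_node = Node()        # Node^0 = Node(): a node with no attachments
needle = Lobe([Node()])    # a lobe whose sole accoutrement is an empty node

assert empty_node.attachments == []
assert len(needle.accoutrements) == 1
```

Note that the empty lobe `Lobe([])` and the needle `Lobe([Node()])` are the two ways of approaching the terminal edge that the text treats as tantamount.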
  
2.  Sub(x, <y, _>) puts any specified x into
+
Working from a structural description of the cactus language, or any suitable formal grammar for <math>\mathfrak{C} (\mathfrak{P}),\!</math> it is possible to give a recursive definition of the function called <math>\operatorname{Parse}</math> that maps each sentence in <math>\operatorname{PARCE} (\mathfrak{P})\!</math> to the corresponding graph in <math>\operatorname{PARC} (\mathfrak{P}).\!</math>  One way to do this proceeds as follows:
    the empty slot of the rheme <y, _>, with
 
    the effect of producing the saturated
 
    rheme <y, x> that evaluates to y·x.
 
  
In (1), we consider the effects of each x in its
+
<ol style="list-style-type:decimal">
practical bearing on contexts of the form <_, y>,
 
as y ranges over G, and the effects are such that
 
x takes <_, y> into x·y, for y in G, all of which
 
is summarily notated as x = {(y : x·y) : y in G}.
 
The pairs (y : x·y) can be found by picking an x
 
from the left margin of the group operation table
 
and considering its effects on each y in turn as
 
these run across the top margin.  This aspect of
 
pragmatic definition we recognize as the regular
 
ante-representation:
 
  
    e  = e:e  +  f:f  +  g:g  +  h:h
+
<li>The parse of the concatenation <math>\operatorname{Conc}_{j=1}^k</math> of the <math>k\!</math> sentences <math>(s_j)_{j=1}^k</math> is defined recursively as follows:</li>
  
    f  = e:f  +  f:e  +  g:h  +  h:g
+
<ol style="list-style-type:lower-alpha">
  
    g  = e:g  +  f:h  +  g:e  +  h:f
+
<li><math>\operatorname{Parse} (\operatorname{Conc}^0) ~=~ \operatorname{Node}^0.</math></li>
  
    h  =  e:h  +  f:g  +  g:f  +  h:e
+
<li>
 +
<p>For <math>k > 0,\!</math></p>
  
In (2), we consider the effects of each x in its
+
<p><math>\operatorname{Parse} (\operatorname{Conc}_{j=1}^k s_j) ~=~ \operatorname{Node}_{j=1}^k \operatorname{Parse} (s_j).</math></p></li>
practical bearing on contexts of the form <y, _>,
 
as y ranges over G, and the effects are such that
 
x takes <y, _> into y·x, for y in G, all of which
 
is summarily notated as x = {(y : y·x) : y in G}.
 
The pairs (y : y·x) can be found by picking an x
 
from the top margin of the group operation table
 
and considering its effects on each y in turn as
 
these run down the left margin. This aspect of
 
pragmatic definition we recognize as the regular
 
post-representation:
 
  
    e  =  e:e  +  f:f  +  g:g  +  h:h
+
</ol>
  
    f  = e:f  +  f:e  +  g:h  +  h:g
+
<li>The parse of the surcatenation <math>\operatorname{Surc}_{j=1}^k</math> of the <math>k\!</math> sentences <math>(s_j)_{j=1}^k</math> is defined recursively as follows:</li>
  
    g  = e:g  +  f:h  +  g:e  +  h:f
+
<ol style="list-style-type:lower-alpha">
  
    h  = e:h  +  f:g  +  g:f  +  h:e
+
<li><math>\operatorname{Parse} (\operatorname{Surc}^0) ~=~ \operatorname{Lobe}^0.</math></li>
  
If the ante-rep looks the same as the post-rep,
+
<li>
now that I'm writing them in the same dialect,
+
<p>For <math>k > 0,\!</math></p>
that is because V_4 is abelian (commutative),
 
and so the two representations have the very
 
same effects on each point of their bearing.
 
  
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
+
<p><math>\operatorname{Parse} (\operatorname{Surc}_{j=1}^k s_j) ~=~ \operatorname{Lobe}_{j=1}^k \operatorname{Parse} (s_j).</math></p></li>
  
Note 19
+
</ol></ol>
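The recursion just stated is short enough to sketch directly. In the following, nested tuples stand in for Node and Lobe graphs, and the function names <code>parse_conc</code> and <code>parse_surc</code> are this writer's inventions for the two clauses of <math>\operatorname{Parse}:</math>

```python
# Hedged sketch of the Parse recursion: a Node is modeled as
# ('node', [...]) and a Lobe as ('lobe', [...]).  Each function takes
# the already-parsed subsentences, mirroring the recursive clauses.

def parse_conc(parses):
    """Parse(Conc_j s_j) = Node_j Parse(s_j);  Conc^0 |-> Node^0."""
    return ('node', list(parses))

def parse_surc(parses):
    """Parse(Surc_j s_j) = Lobe_j Parse(s_j);  Surc^0 |-> Lobe^0."""
    return ('lobe', list(parses))

# The empty concatenation parses to the empty node:
assert parse_conc([]) == ('node', [])

# A surcatenation of two empty sentences becomes a lobe on two
# empty nodes:
assert parse_surc([parse_conc([]), parse_conc([])]) == \
    ('lobe', [('node', []), ('node', [])])
```

In other words, concatenation at the level of sentences turns into tupling of attachments at a node, and surcatenation turns into the stringing of accoutrements on a lobe.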
  
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
+
For ease of reference, Table&nbsp;13 summarizes the mechanics of these parsing rules.
  
+
<br>
 
  
So long as we're in the neighborhood, we might as well take in
+
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
some more of the sights, for instance, the smallest example of
+
|+ style="height:30px" | <math>\text{Table 13.} ~~ \text{Algorithmic Translation Rules}\!</math>
a non-abelian (non-commutative) group.  This is a group of six
+
|- style="height:40px; background:ghostwhite"
elements, say, G = {e, f, g, h, i, j}, with no relation to any
 
other employment of these six symbols being implied, of course,
 
and it can be most easily represented as the permutation group
 
on a set of three letters, say, X = {A, B, C}, usually notated
 
as G = Sym(X) or more abstractly and briefly, as Sym(3) or S_3.
 
Here are the permutation (= substitution) operations in Sym(X):
 
 
 
Table 2.  Permutations or Substitutions in Sym_{A, B, C}
 
o---------o---------o---------o---------o---------o---------o
 
|        |        |        |        |        |        |
 
|    e    |    f    |    g    |    h    |    i    |    j    |
 
|        |        |         |        |        |        |
 
o=========o=========o=========o=========o=========o=========o
 
|        |        |        |        |        |        |
 
|  A B C  |  A B C  |  A B C  |  A B C  |  A B C  |  A B C  |
 
|        |        |        |        |        |        |
 
|  | | |  |  | | |  |  | | |  |  | | |  |  | | |  |  | | |  |
 
|  v v v  |  v v v  |  v v v  |  v v v  |  v v v  |  v v v  |
 
|        |        |        |        |        |        |
 
|  A B C  |  C A B  |  B C A  |  A C B  |  C B A  |  B A C  |
 
|        |        |        |        |        |        |
 
o---------o---------o---------o---------o---------o---------o
 
 
 
Here is the operation table for S_3, given in abstract fashion:
 
 
 
Table 3.  Symmetric Group S_3
 
 
 
|                       _
 
|                     e / \ e
 
|                      /  \
 
|                    /  e  \
 
|                  f / \  / \ f
 
|                  /  \ /  \
 
|                  /  f  \  f  \
 
|              g / \  / \  / \ g
 
|                /  \ /  \ /  \
 
|              /  g  \  g  \  g  \
 
|            h / \  / \  / \  / \ h
 
|            /  \ /  \ /  \ /  \
 
|            /  h  \  e  \  e  \  h  \
 
|        i / \  / \  / \  / \  / \ i
 
|          /  \ /  \ /  \ /  \ /  \
 
|        /  i  \  i  \  f  \  j  \  i  \
 
|      j / \  / \  / \  / \  / \  / \ j
 
|      /  \ /  \ /  \ /  \ /  \ /  \
 
|      (  j  \  j  \  j  \  i  \  h  \  j  )
 
|      \  / \  / \  / \  / \  / \  /
 
|        \ /  \ /  \ /  \ /  \ /  \ /
 
|        \  h  \  h  \  e  \  j  \  i  /
 
|          \  / \  / \  / \  / \  /
 
|          \ /  \ /  \ /  \ /  \ /
 
|            \  i  \  g  \  f  \  h  /
 
|            \  / \  / \  / \  /
 
|              \ /  \ /  \ /  \ /
 
|              \  f  \  e  \  g  /
 
|                \  / \  / \  /
 
|                \ /  \ /  \ /
 
|                  \  g  \  f  /
 
|                  \  / \  /
 
|                    \ /  \ /
 
|                     \  e  /
 
|                      \  /
 
|                      \ /
 
|                        ¯
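Since the whole point of this example is its non-commutativity, that property is worth a concrete check. Here is a sketch using the substitution maps from Table 2; the composition convention (apply the right factor first) is my own choice, and Peirce's may well differ:

```python
# Sketch: the elements of S_3 = Sym{A, B, C} as substitution maps,
# taken from Table 2.  compose(p, q) applies q first, then p --
# one common convention, chosen here for illustration.

e = {'A':'A', 'B':'B', 'C':'C'}
f = {'A':'C', 'B':'A', 'C':'B'}
g = {'A':'B', 'B':'C', 'C':'A'}
h = {'A':'A', 'B':'C', 'C':'B'}

def compose(p, q):
    """The substitution that results from doing q, then p."""
    return {x: p[q[x]] for x in 'ABC'}

# f and g are mutually inverse rotations:
assert compose(f, g) == e

# g and h do not commute, so S_3 is non-abelian:
assert compose(g, h) != compose(h, g)
```

With the transpositions i and j from Table 2 filled in the same way, the full operation table of S_3 could be regenerated and checked against the triangle above.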
 
 
 
By the way, we will meet with the symmetric group S_3 again
 
when we return to take up the study of Peirce's early paper
 
"On a Class of Multiple Algebras" (CP 3.324-327), and also
 
his late unpublished work "The Simplest Mathematics" (1902)
 
(CP 4.227-323), with particular reference to the section
 
that treats of "Trichotomic Mathematics" (CP 4.307-323).
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
Work Area
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
Note 20
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
+
{| align="center" border="0" cellpadding="8" cellspacing="0" style="background:ghostwhite; text-align:center; width:100%"
| "Maxim of Pragmaticism", CP 5.438.
+
| width="33%" | <math>\text{Sentence in PARCE}\!</math>
 
+
| width="33%" | <math>\xrightarrow{\mathrm{Parse}}\!</math>
By way of collecting a short-term pay-off for all the work --
+
| width="33%" | <math>\text{Graph in PARC}\!</math>
not to mention the peirce-spiration -- that we sweated out
+
|}
over the regular representations of V_4 and S_3
+
|-
 
 
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
Note 21
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
+
{| align="center" border="0" cellpadding="8" cellspacing="0" style="text-align:center; width:100%"
| "Maxim of Pragmaticism", CP 5.438.
+
| width="33%" | <math>\mathrm{Conc}^0\!</math>
 
+
| width="33%" | <math>\xrightarrow{\mathrm{Parse}}\!</math>
problem about writing
+
| width="33%" | <math>\mathrm{Node}^0\!</math>
 
+
|-
  e  = e:e  +  f:f  +  g:g  +  h:h
+
| width="33%" | <math>\mathrm{Conc}_{j=1}^k s_j\!</math>
 
+
| width="33%" | <math>\xrightarrow{\mathrm{Parse}}\!</math>
no recursion intended
+
| width="33%" | <math>\mathrm{Node}_{j=1}^k \mathrm{Parse} (s_j)\!</math>
need for a work-around
+
|}
ways of explaining it away
+
|-
 
 
action on signs not objects
 
 
 
math def of rep
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
Zeroth Order Logic
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
Here is a scaled-down version of one of my very first applications,
 
having to do with the demographic variables in a survey data base.
 
 
 
This Example illustrates the use of 2-variate logical forms
 
for expressing and reasoning about the logical constraints
 
that are involved in the following types of situations:
 
 
 
1.  Distinction:    A =/= B
 
    Also known as:   logical inequality, exclusive disjunction
 
    Represented as: ( A , B )
 
    Graphed as:
 
    |
 
    |  A  B
 
    |   o---o
 
    |    \ /
 
    |    @
 
 
 
2.  Equality:        A = B
 
    Also known as:  logical equivalence, if and only if, A <=> B
 
    Represented as:  (( A , B ))
 
    Graphed as:
 
    |
 
    |  A  B
 
    |  o---o
 
    |    \ /
 
    |     o
 
    |    |
 
    |    @
 
 
 
3.  Implication:    A => B
 
    Also known as:  entailment, if-then
 
    Represented as:  ( A ( B ))
 
    Graphed as:
 
    |
 
    |  A  B
 
    |  o---o
 
    |  |
 
    |  @
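These three truth-functional readings are easy to check mechanically.  Here is a minimal Python sketch, where the function names are my own labels for the forms and not part of the Ref Log notation:

```python
# Truth-functional readings of the three 2-variate cactus forms.

def distinction(a, b):
    """( A , B ) : logical inequality, A =/= B."""
    return a != b

def equality(a, b):
    """(( A , B )) : logical equivalence, A <=> B."""
    return a == b

def implication(a, b):
    """( A ( B )) : entailment, A => B."""
    return (not a) or b

# Tabulate each form over the four interpretations of <A, B>.
for a in (True, False):
    for b in (True, False):
        print(a, b, distinction(a, b), equality(a, b), implication(a, b))
```

Running the loop reproduces the familiar truth tables for exclusive disjunction, equivalence, and the conditional.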
 
 
 
Example of a proposition expressing a "zeroth order theory" (ZOT):
 
 
 
Consider the following text, written in what I am calling "Ref Log",
also known as the "Cactus Language" syntax for propositional logic:
 
 
 
|   ( male  , female )
 
|  (( boy  , male child ))
 
|  (( girl , female child ))
 
|  ( child ( human ))
 
 
 
Graphed as:
 
 
 
|                 boy  male          girl  female
|                   o---o child        o---o child
| male  female       \ /                \ /            child  human
|    o---o            o                  o                o---o
|     \ /             |                  |                |
|      @              @                  @                @
 
 
 
Nota Bene.  Due to graphic constraints -- no, the other
 
kind of graphic constraints -- of the immediate medium,
 
I am forced to string out the logical conjuncts of the
 
actual cactus graph for this situation, one that might
 
sufficiently be reasoned out from the exhibit supra by
 
fusing together the four roots of the severed cactus.
 
 
 
Either of these expressions, text or graph, is equivalent to
 
what would otherwise be written in a more ordinary syntax as:
 
 
 
|  male   =/=  female
|  boy    <=>  male child
|  girl   <=>  female child
|  child   =>  human
 
 
 
This is actually a single proposition, a conjunction of four lines:
 
one distinction, two equations, and one implication.  Together these
 
amount to a set of definitions conjointly constraining the logical
 
compatibility of the six feature names that appear.  They may be
 
thought of as sculpting out a space of models that is some subset
 
of the 2^6 = 64 possible interpretations, and thereby shaping some
 
universe of discourse.
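Since the universe here is finite, the model-sculpting claim can be checked by brute force.  The following Python sketch is my own encoding of the four constraints; it enumerates all 2^6 interpretations and keeps the satisfying ones:

```python
from itertools import product

features = ("boy", "child", "female", "girl", "human", "male")

def theory(v):
    """Conjunction of the four defining constraints of the Example."""
    return ((v["male"] != v["female"]) and                    # ( male , female )
            (v["boy"] == (v["male"] and v["child"])) and      # (( boy , male child ))
            (v["girl"] == (v["female"] and v["child"])) and   # (( girl , female child ))
            ((not v["child"]) or v["human"]))                 # ( child ( human ))

models = [v for bits in product((False, True), repeat=6)
          for v in [dict(zip(features, bits))] if theory(v)]

print(len(models))   # the theory carves 6 models out of the 64 interpretations
```

The count comes out as 6: male/female split two ways, child/human three ways under the implication, and boy and girl are then determined.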
 
 
 
Once this backdrop is defined, it is possible to "query" this universe,
 
simply by conjoining additional propositions in further constraint of
 
the underlying set of models.  This has many uses, as we shall see.
 
 
 
We are considering an Example of a propositional expression
 
that is formed on the following "alphabet" or "lexicon" of
 
six "logical features" or "boolean variables":
 
 
 
$A$  = {"boy", "child", "female", "girl", "human", "male"}.
 
 
 
The expression is this:
 
 
 
|  ( male  , female )
 
|  (( boy  , male child ))
 
|  (( girl , female child ))
 
|  ( child ( human ))
 
 
 
Putting it very roughly -- and putting off a better description
 
of it till later -- we may think of this expression as notation
 
for a boolean function f : %B%^6 -> %B%.  This is what we might
 
call the "abstract type" of the function, but we will also find
 
it convenient on many occasions to represent the points of this
 
particular copy of the space %B%^6 in terms of the positive and
 
negative versions of the features from $A$ that serve to encase
 
them as logical "cells", as they are called in the venn diagram
 
picture of the corresponding universe of discourse X = [$A$].
 
 
 
Just for concreteness, this form of representation begins and ends:
 
 
 
<0,0,0,0,0,0>  =  (boy)(child)(female)(girl)(human)(male),
<0,0,0,0,0,1>  =  (boy)(child)(female)(girl)(human) male ,
<0,0,0,0,1,0>  =  (boy)(child)(female)(girl) human (male),
<0,0,0,0,1,1>  =  (boy)(child)(female)(girl) human  male ,
...
<1,1,1,1,0,0>  =   boy  child  female  girl (human)(male),
<1,1,1,1,0,1>  =   boy  child  female  girl (human) male ,
<1,1,1,1,1,0>  =   boy  child  female  girl  human (male),
<1,1,1,1,1,1>  =   boy  child  female  girl  human  male .
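A small helper makes the passage from coordinate tuples to cell labels explicit.  This is a sketch in my own rendering convention: a negated feature x is wrapped as (x), a posited feature is left bare:

```python
FEATURES = ("boy", "child", "female", "girl", "human", "male")

def cell(bits):
    """Render a point of %B%^6 as its cell label in the venn diagram picture."""
    return "".join(" %s " % n if b else "(%s)" % n
                   for n, b in zip(FEATURES, bits))

print(cell((0, 0, 0, 0, 0, 1)))   # (boy)(child)(female)(girl)(human) male
```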
 
 
 
I continue with the previous Example, which I bring forward and sum up here:
 
 
 
|                 boy  male          girl  female
|                   o---o child        o---o child
| male  female       \ /                \ /            child  human
|    o---o            o                  o                o---o
|     \ /             |                  |                |
|      @              @                  @                @
|
| (male , female)((boy , male child))((girl , female child))(child (human))

{| align="center" border="0" cellpadding="8" cellspacing="0" style="text-align:center; width:100%"
| width="33%" | <math>\mathrm{Surc}^0\!</math>
| width="33%" | <math>\xrightarrow{\mathrm{Parse}}\!</math>
| width="33%" | <math>\mathrm{Lobe}^0\!</math>
|-
| width="33%" | <math>\mathrm{Surc}_{j=1}^k s_j\!</math>
| width="33%" | <math>\xrightarrow{\mathrm{Parse}}\!</math>
| width="33%" | <math>\mathrm{Lobe}_{j=1}^k \mathrm{Parse} (s_j)\!</math>
|}
  
For my master's piece in Quantitative Psychology (Michigan State, 1989),
I wrote a program, "Theme One" (TO) by name, that among its other duties
operates to process the expressions of the cactus language in many of the
most pressing ways that we need in order to be able to use it effectively
as a propositional calculus.  The operational component of TO where one
does the work of this logical modeling is called "Study", and the core
of the logical calculator deep in the heart of this Study section is
a suite of computational functions that evolve a particular species
of "normal form", analogous to a "disjunctive normal form" (DNF),
from whatever expression they are given as their input.
  
This "canonical", "normal", or "stable" form of logical expression --
I'll refine the distinctions among these subforms all in good time --
permits succinct depiction as an "arboreal boolean expansion" (ABE).

A ''substructure'' of a PARC is defined recursively as follows.  Starting at the root node of the cactus <math>C,\!</math> any attachment is a substructure of <math>C.\!</math>  If a substructure is a blank or a paint, then it constitutes a minimal substructure, meaning that no further substructures of <math>C\!</math> arise from it.  If a substructure is a lobe, then each one of its accoutrements is also a substructure of <math>C,\!</math> and has to be examined for further substructures.
 
  
The concept of substructure can be used to define varieties of deletion and erasure operations that respect the structure of the abstract graph.  For the purposes of this depiction, a blank symbol <math>^{\backprime\backprime} ~ ^{\prime\prime}</math> is treated as a ''primer'', in other words, as a ''clear paint'' or a ''neutral tint''.  In effect, one is letting <math>m_1 = p_0.\!</math>  In this frame of discussion, it is useful to make the following distinction:

# To ''delete'' a substructure is to replace it with an empty node, in effect, to reduce the whole structure to a trivial point.
# To ''erase'' a substructure is to replace it with a blank symbol, in effect, to paint it out of the picture or to overwrite it.

Once again, the graphic limitations of this space prevail against
any disposition that I might have to lay out a really substantial
case before you, of the brand that might have a chance to impress
you with the aptitude of this ilk of ABE in rooting out the truth
of many a complexly, obscurely, subtly adamant whetstone of our wit.

So let me just illustrate the way of it with one conjunct of our Example.
What follows will be a sequence of expressions, each one after the first
being logically equal to the one that precedes it:
 
  
A ''bare PARC'', loosely referred to as a ''bare cactus'', is a PARC on the empty palette <math>\mathfrak{P} = \varnothing.</math>  In other veins, a bare cactus can be described in several different ways, depending on how the form arises in practice.

<ol style="list-style-type:decimal">

Step 1

|    g   fc
|    o---o
|     \ /
|      o
|      |
|      @
 
  
<li>Leaning on the definition of a bare PARCE, a bare PARC can be described as the kind of a parse graph that results from parsing a bare cactus expression, in other words, as the kind of a graph that issues from the requirements of processing a sentence of the bare cactus language <math>\mathfrak{C}^0 = \operatorname{PARCE}^0.</math></li>

<li>To express it more in its own terms, a bare PARC can be defined by tracing the recursive definition of a generic PARC, but then by detaching an independent form of description from the source of that analogy.  The method is sufficiently sketched as follows:</li>

Step 2

|                  o
|        fc        |  fc
 
|    o---o        o---o
 
|      \ /           \ /
 
|      o            o
 
|      |            |
 
|    g o-------------o--o g
 
|        \          /
 
|        \        /
 
|          \      /
 
|          \    /
 
|            \  /
 
|            \ /
 
|              @
 
  
<ol style="list-style-type:lower-latin">

<li>A ''bare PARC'' is a PARC whose attachments are limited to blanks and ''bare lobes''.</li>

Step 3

|      f c
|      o
 
|      |            f c
 
|      o            o
 
|      |            |
 
|    g o-------------o--o g
 
|        \          /
 
|        \        /
 
|          \      /
 
|          \    /
 
|            \  /
 
|            \ /
 
|              @
 
  
<li>A ''bare lobe'' is a lobe whose accoutrements are limited to bare PARC's.</li>

</ol>

Step 4

|          o
|          |
 
|    c o  o c          o
 
|      |  |            |
 
|      o  o      c o  o c
 
|      |  |        |  |
 
|    f o---o--o f  f o---o--o f
 
|        \ /           \ /
 
|      g o-------------o--o g
 
|          \          /
 
|          \        /
 
|            \      /
 
|            \    /
 
|              \  /
 
|              \ /
 
|                @
 
  
<li>In practice, a bare cactus is usually encountered in the process of analyzing or handling an arbitrary PARC, the circumstances of which frequently call for deleting or erasing all of its paints.  In particular, this generally makes it easier to observe the various properties of its underlying graphical structure.</li>

</ol>

Step 5

|          o      c o
|      c  |        |
 
|    f o---o--o f  f o---o--o f
 
|        \ /           \ /
 
|      g o-------------o--o g
 
|          \          /
 
|          \        /
 
|            \      /
 
|            \    /
 
|              \  /
 
|              \ /
 
|                @
 
  
===The Cactus Language : Semantics===

{| align="center" cellpadding="0" cellspacing="0" width="90%"

Step 6

|                                       o
|                                      |
 
|          o                      o  o
 
|          |                      |  |
 
|    c o---o--o c      o        c o---o--o c
 
|        \ /            |            \ /
 
|      f o-------------o--o f      f o-------------o--o f
 
|          \          /              \          /
 
|          \        /                \        /
 
|            \      /                  \      /
 
|            \    /                    \    /
 
|              \  /                      \  /
 
|              \ /                        \ /
 
|              g o---------------------------o--o g
 
|                \                        /
 
|                  \                      /
 
|                  \                    /
 
|                    \                  /
 
|                    \                /
 
|                      \              /
 
|                      \            /
 
|                        \          /
 
|                        \        /
 
|                          \      /
 
|                          \    /
 
|                            \  /
 
|                            \ /
 
|                              @
 
 
 
Step 7
 
 
 
|          o                      o
 
|          |                      |
 
|    c o---o--o c      o        c o---o--o c
 
|        \ /            |            \ /
 
|      f o-------------o--o f      f o-------------o--o f
 
|          \          /              \          /
 
|          \        /                \        /
 
|            \      /                  \      /
 
|            \    /                    \    /
 
|              \  /                      \  /
 
|              \ /                        \ /
 
|              g o---------------------------o--o g
 
|                \                        /
 
|                  \                      /
 
|                  \                    /
 
|                    \                  /
 
|                    \                /
 
|                      \              /
 
|                      \            /
 
|                        \          /
 
|                        \        /
 
|                          \      /
 
|                          \    /
 
|                            \  /
 
|                            \ /
 
|                              @
 
 
 
This last expression is the ABE of the input expression.
 
It can be transcribed into ordinary logical language as:
 
 
 
| either girl and
 
|        either female and
 
|              either child and true
 
|              or not child and false
 
|        or not female and false
 
| or not girl and
 
|        either female and
 
|              either child and false
 
|              or not child and true
 
|        or not female and true
 
 
 
The expression "((girl , female child))" is sufficiently evaluated
 
by considering its logical values on the coordinate tuples of %B%^3,
 
or its indications on the cells of the associated venn diagram that
 
depicts the universe of discourse, namely, on these eight arguments:
 
     
 
<1, 1, 1>  =  girl  female  child ,
 
<1, 1, 0>  =  girl  female (child),
 
<1, 0, 1>  =  girl (female) child ,
 
<1, 0, 0>  =  girl (female)(child),
 
<0, 1, 1>  = (girl) female  child ,
 
<0, 1, 0>  =  (girl) female (child),
 
<0, 0, 1>  =  (girl)(female) child ,
 
<0, 0, 0>  =  (girl)(female)(child).
 
 
 
The ABE output expression tells us the logical values of
 
the input expression on each of these arguments, doing so
 
by attaching the values to the leaves of a tree, and acting
 
as an "efficient" or "lazy" evaluator in the sense that the
 
process that generates the tree follows each path only up to
 
the point in the tree where it can determine the values on the
 
entire subtree beyond that point.  Thus, the ABE tree tells us:
 
 
 
 girl  female  child   ->  1
 girl  female (child)  ->  0
 girl (female)         ->  0
(girl) female  child   ->  0
(girl) female (child)  ->  1
(girl)(female)         ->  1
 
 
 
Picking out the interpretations that yield the truth of the expression,
 
and expanding the corresponding partial argument tuples, we arrive at
 
the following interpretations that satisfy the input expression:
 
 
 
 girl  female  child   ->  1
(girl) female (child)  ->  1
(girl)(female) child   ->  1
(girl)(female)(child)  ->  1
 
 
 
In sum, if it's a female and a child, then it's a girl,
 
and if it's either not a female or not a child or both,
 
then it's not a girl.
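The same result can be confirmed by a direct truth-table sweep of the conjunct ((girl , female child)).  The sketch below, my own Python encoding, recovers exactly the four satisfying cells listed above:

```python
from itertools import product

# girl <=> (female and child), the reading of (( girl , female child )).
satisfying = [(g, f, c)
              for g, f, c in product((1, 0), repeat=3)
              if (g == 1) == (f == 1 and c == 1)]

for g, f, c in satisfying:
    print(g, f, c)   # four rows: 111, 010, 001, 000
```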
 
 
 
Brief Automata
 
 
 
By way of providing a simple illustration of Cook's Theorem,
 
that "Propositional Satisfiability is NP-Complete", here is
 
an exposition of one way to translate Turing Machine set-ups
 
into propositional expressions, employing the Ref Log Syntax
 
for Prop Calc that I described in a couple of earlier notes:
 
 
 
Notation:
 
 
 
Stilt(k)  =  Space and Time Limited Turing Machine,
 
            with k units of space and k units of time.
 
 
 
Stunt(k)  =  Space and Time Limited Turing Machine,
 
            for computing the parity of a bit string,
 
            with Number of Tape cells of input equal to k.
 
 
 
I will follow the pattern of the discussion in the book of
 
Herbert Wilf, 'Algorithms & Complexity' (1986), pages 188-201,
 
but translate into Ref Log, which is more efficient with respect
 
to the number of propositional clauses that are required.
 
 
 
Parity Machine
 
 
 
|                    1/1/+1
 
|                  ------->
 
|              /\ /        \ /\
 
|      0/0/+1  ^  0          1  ^  0/0/+1
 
|              \/|\        /|\/
 
|                | <------- |
 
|        #/#/-1  |  1/1/+1  |  #/#/-1
 
|                |          |
 
|                v          v
 
|                #          *
 
 
 
o-------o--------o-------------o---------o------------o
| State | Symbol | Next Symbol | Ratchet | Next State |
|   Q   |   S    |      S'     |   dR    |     Q'     |
o-------o--------o-------------o---------o------------o
|   0   |   0    |      0      |   +1    |     0      |
|   0   |   1    |      1      |   +1    |     1      |
|   0   |   #    |      #      |   -1    |     #      |
|   1   |   0    |      0      |   +1    |     1      |
|   1   |   1    |      1      |   +1    |     0      |
|   1   |   #    |      #      |   -1    |     *      |
o-------o--------o-------------o---------o------------o
 
 
 
The TM has a "finite automaton" (FA) as its component.
 
Let us refer to this particular FA by the name of "M".
 
 
 
The "tape-head" (that is, the "read-unit") will be called "H".
 
The "registers" are also called "tape-cells" or "tape-squares".
 
 
 
In order to consider how the finitely "stilted" rendition of this TM
 
can be translated into the form of a purely propositional description,
 
one now fixes k and limits the discussion to talking about a Stilt(k),
 
which is really not a true TM anymore but a finite automaton in disguise.
 
 
 
In this example, for the sake of a minimal illustration, we choose k = 2,
 
and discuss Stunt(2).  Since the zeroth tape cell and the last tape cell
 
are occupied with bof and eof marks "#", this amounts to only one digit
 
of significant computation.
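Before encoding it propositionally, it may help to run the parity machine directly.  Here is a Python sketch that transcribes the transition table above and drives it on a bounded tape; the function names and the step bound are my own choices, not part of the original notation:

```python
# (state, symbol) -> (new symbol, head move, new state), from the table above.
TABLE = {
    ('0', '0'): ('0', +1, '0'),
    ('0', '1'): ('1', +1, '1'),
    ('0', '#'): ('#', -1, '#'),
    ('1', '0'): ('0', +1, '1'),
    ('1', '1'): ('1', +1, '0'),
    ('1', '#'): ('#', -1, '*'),
}

def stunt(bits, max_steps=100):
    """Run the machine on tape '#' + bits + '#', state '0', head at cell 1.
    Returns the rest state and the symbol under the tape-head H."""
    tape = ['#'] + list(bits) + ['#']
    state, head = '0', 1
    for _ in range(max_steps):
        if state in ('#', '*'):            # '#' and '*' are the rest states
            return state, tape[head]
        sym, move, state2 = TABLE[(state, tape[head])]
        tape[head] = sym
        state, head = state2, head + move
    raise RuntimeError("space/time bound exceeded")

print(stunt('0'))   # ('#', '0') : even parity
print(stunt('1'))   # ('*', '1') : odd parity
```

Rest state "#" signals even parity and "*" odd parity; for the one-digit tapes of Stunt(2) the symbol under the head is the parity itself.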
 
 
 
To translate Stunt(2) into propositional form we use
 
the following collection of propositional variables:
 
 
 
For the "Present State Function" QF : P -> Q,
 
 
 
{p0_q#, p0_q*, p0_q0, p0_q1,
 
p1_q#, p1_q*, p1_q0, p1_q1,
 
p2_q#, p2_q*, p2_q0, p2_q1,
 
p3_q#, p3_q*, p3_q0, p3_q1}
 
 
 
The propositional expression of the form "pi_qj" says:
 
 
 
| At the point-in-time p_i,
 
| the finite machine M is in the state q_j.
 
 
 
For the "Present Register Function" RF : P -> R,
 
 
 
{p0_r0, p0_r1, p0_r2, p0_r3,
 
p1_r0, p1_r1, p1_r2, p1_r3,
 
p2_r0, p2_r1, p2_r2, p2_r3,
 
p3_r0, p3_r1, p3_r2, p3_r3}
 
 
 
The propositional expression of the form "pi_rj" says:
 
 
 
| At the point-in-time p_i,
 
| the tape-head H is on the tape-cell r_j.
 
 
 
For the "Present Symbol Function" SF : P -> (R -> S),
 
 
 
{p0_r0_s#, p0_r0_s*, p0_r0_s0, p0_r0_s1,
 
p0_r1_s#, p0_r1_s*, p0_r1_s0, p0_r1_s1,
 
p0_r2_s#, p0_r2_s*, p0_r2_s0, p0_r2_s1,
 
p0_r3_s#, p0_r3_s*, p0_r3_s0, p0_r3_s1,
 
p1_r0_s#, p1_r0_s*, p1_r0_s0, p1_r0_s1,
 
p1_r1_s#, p1_r1_s*, p1_r1_s0, p1_r1_s1,
 
p1_r2_s#, p1_r2_s*, p1_r2_s0, p1_r2_s1,
 
p1_r3_s#, p1_r3_s*, p1_r3_s0, p1_r3_s1,
 
p2_r0_s#, p2_r0_s*, p2_r0_s0, p2_r0_s1,
 
p2_r1_s#, p2_r1_s*, p2_r1_s0, p2_r1_s1,
 
p2_r2_s#, p2_r2_s*, p2_r2_s0, p2_r2_s1,
 
p2_r3_s#, p2_r3_s*, p2_r3_s0, p2_r3_s1,
 
p3_r0_s#, p3_r0_s*, p3_r0_s0, p3_r0_s1,
 
p3_r1_s#, p3_r1_s*, p3_r1_s0, p3_r1_s1,
 
p3_r2_s#, p3_r2_s*, p3_r2_s0, p3_r2_s1,
 
p3_r3_s#, p3_r3_s*, p3_r3_s0, p3_r3_s1}
 
 
 
The propositional expression of the form "pi_rj_sk" says:
 
 
 
| At the point-in-time p_i,
 
| the tape-cell r_j bears the mark s_k.
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~INPUTS~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
Here are the Initial Conditions
 
for the two possible inputs to the
 
Ref Log redaction of this Parity TM:
 
 
 
o~~~~~~~~~o~~~~~~~~~o~INPUT~0~o~~~~~~~~~o~~~~~~~~~o
 
 
 
Initial Conditions:
 
 
 
p0_q0
 
 
 
p0_r1
 
 
 
p0_r0_s#
 
p0_r1_s0
 
p0_r2_s#
 
 
 
The Initial Conditions are given by a logical conjunction
 
that is composed of 5 basic expressions, altogether stating:
 
 
 
| At the point-in-time p_0, M is in the state q_0, and
 
| At the point-in-time p_0, H is on the cell  r_1, and
 
| At the point-in-time p_0, cell r_0 bears the mark "#", and
 
| At the point-in-time p_0, cell r_1 bears the mark "0", and
 
| At the point-in-time p_0, cell r_2 bears the mark "#".
 
 
 
o~~~~~~~~~o~~~~~~~~~o~INPUT~1~o~~~~~~~~~o~~~~~~~~~o
 
 
 
Initial Conditions:
 
 
 
p0_q0
 
 
 
p0_r1
 
 
 
p0_r0_s#
 
p0_r1_s1
 
p0_r2_s#
 
 
 
The Initial Conditions are given by a logical conjunction
 
that is composed of 5 basic expressions, altogether stating:
 
 
 
| At the point-in-time p_0, M is in the state q_0, and
 
| At the point-in-time p_0, H is on the cell  r_1, and
 
| At the point-in-time p_0, cell r_0 bears the mark "#", and
 
| At the point-in-time p_0, cell r_1 bears the mark "1", and
 
| At the point-in-time p_0, cell r_2 bears the mark "#".
 
 
 
o~~~~~~~~~o~~~~~~~~~o~PROGRAM~o~~~~~~~~~o~~~~~~~~~o
 
 
 
And here, yet again, just to store it nearby,
 
is the logical rendition of the TM's program:
 
 
 
Mediate Conditions:
 
 
 
( p0_q#  ( p1_q# ))
 
( p0_q*  ( p1_q* ))
 
 
 
( p1_q#  ( p2_q# ))
 
( p1_q*  ( p2_q* ))
 
 
 
Terminal Conditions:
 
 
 
(( p2_q# )( p2_q* ))
 
 
 
State Partition:
 
 
 
(( p0_q0 ),( p0_q1 ),( p0_q# ),( p0_q* ))
 
(( p1_q0 ),( p1_q1 ),( p1_q# ),( p1_q* ))
 
(( p2_q0 ),( p2_q1 ),( p2_q# ),( p2_q* ))
 
 
 
Register Partition:
 
 
 
(( p0_r0 ),( p0_r1 ),( p0_r2 ))
 
(( p1_r0 ),( p1_r1 ),( p1_r2 ))
 
(( p2_r0 ),( p2_r1 ),( p2_r2 ))
 
 
 
Symbol Partition:
 
 
 
(( p0_r0_s0 ),( p0_r0_s1 ),( p0_r0_s# ))
 
(( p0_r1_s0 ),( p0_r1_s1 ),( p0_r1_s# ))
 
(( p0_r2_s0 ),( p0_r2_s1 ),( p0_r2_s# ))
 
 
 
(( p1_r0_s0 ),( p1_r0_s1 ),( p1_r0_s# ))
 
(( p1_r1_s0 ),( p1_r1_s1 ),( p1_r1_s# ))
 
(( p1_r2_s0 ),( p1_r2_s1 ),( p1_r2_s# ))
 
 
 
(( p2_r0_s0 ),( p2_r0_s1 ),( p2_r0_s# ))
 
(( p2_r1_s0 ),( p2_r1_s1 ),( p2_r1_s# ))
 
(( p2_r2_s0 ),( p2_r2_s1 ),( p2_r2_s# ))
 
 
 
Interaction Conditions:
 
 
 
(( p0_r0 ) p0_r0_s0 ( p1_r0_s0 ))
 
(( p0_r0 ) p0_r0_s1 ( p1_r0_s1 ))
 
(( p0_r0 ) p0_r0_s# ( p1_r0_s# ))
 
 
 
(( p0_r1 ) p0_r1_s0 ( p1_r1_s0 ))
 
(( p0_r1 ) p0_r1_s1 ( p1_r1_s1 ))
 
(( p0_r1 ) p0_r1_s# ( p1_r1_s# ))
 
 
 
(( p0_r2 ) p0_r2_s0 ( p1_r2_s0 ))
 
(( p0_r2 ) p0_r2_s1 ( p1_r2_s1 ))
 
(( p0_r2 ) p0_r2_s# ( p1_r2_s# ))
 
 
 
(( p1_r0 ) p1_r0_s0 ( p2_r0_s0 ))
 
(( p1_r0 ) p1_r0_s1 ( p2_r0_s1 ))
 
(( p1_r0 ) p1_r0_s# ( p2_r0_s# ))
 
 
 
(( p1_r1 ) p1_r1_s0 ( p2_r1_s0 ))
 
(( p1_r1 ) p1_r1_s1 ( p2_r1_s1 ))
 
(( p1_r1 ) p1_r1_s# ( p2_r1_s# ))
 
 
 
(( p1_r2 ) p1_r2_s0 ( p2_r2_s0 ))
 
(( p1_r2 ) p1_r2_s1 ( p2_r2_s1 ))
 
(( p1_r2 ) p1_r2_s# ( p2_r2_s# ))
 
 
 
Transition Relations:
 
 
 
( p0_q0  p0_r1  p0_r1_s0  ( p1_q0  p1_r2  p1_r1_s0 ))
 
( p0_q0  p0_r1  p0_r1_s1  ( p1_q1  p1_r2  p1_r1_s1 ))
 
( p0_q0  p0_r1  p0_r1_s#  ( p1_q#  p1_r0  p1_r1_s# ))
 
( p0_q0  p0_r2  p0_r2_s#  ( p1_q#  p1_r1  p1_r2_s# ))
 
 
 
( p0_q1  p0_r1  p0_r1_s0  ( p1_q1  p1_r2  p1_r1_s0 ))
 
( p0_q1  p0_r1  p0_r1_s1  ( p1_q0  p1_r2  p1_r1_s1 ))
 
( p0_q1  p0_r1  p0_r1_s#  ( p1_q*  p1_r0  p1_r1_s# ))
 
( p0_q1  p0_r2  p0_r2_s#  ( p1_q*  p1_r1  p1_r2_s# ))
 
 
 
( p1_q0  p1_r1  p1_r1_s0  ( p2_q0  p2_r2  p2_r1_s0 ))
 
( p1_q0  p1_r1  p1_r1_s1  ( p2_q1  p2_r2  p2_r1_s1 ))
 
( p1_q0  p1_r1  p1_r1_s#  ( p2_q#  p2_r0  p2_r1_s# ))
 
( p1_q0  p1_r2  p1_r2_s#  ( p2_q#  p2_r1  p2_r2_s# ))
 
 
 
( p1_q1  p1_r1  p1_r1_s0  ( p2_q1  p2_r2  p2_r1_s0 ))
 
( p1_q1  p1_r1  p1_r1_s1  ( p2_q0  p2_r2  p2_r1_s1 ))
 
( p1_q1  p1_r1  p1_r1_s#  ( p2_q*  p2_r0  p2_r1_s# ))
 
( p1_q1  p1_r2  p1_r2_s#  ( p2_q*  p2_r1  p2_r2_s# ))
 
 
 
o~~~~~~~~~o~~~~~~~~~o~INTERPRETATION~o~~~~~~~~~o~~~~~~~~~o
 
 
 
Interpretation of the Propositional Program:
 
 
 
Mediate Conditions:
 
 
 
( p0_q#  ( p1_q# ))
 
( p0_q*  ( p1_q* ))
 
 
 
( p1_q#  ( p2_q# ))
 
( p1_q*  ( p2_q* ))
 
 
 
In Ref Log, an expression of the form "( X ( Y ))"
 
expresses an implication or an if-then proposition:
 
"Not X without Y",  "If X then Y",  "X => Y",  etc.
 
 
 
A text string expression of the form "( X ( Y ))"
 
parses to a graphical data-structure of the form:
 
 
 
    X  Y
 
    o---o
 
    |
 
    @
 
 
 
All together, these Mediate Conditions state:
 
 
 
| If at p_0  M is in state q_#, then at p_1  M is in state q_#, and
 
| If at p_0  M is in state q_*, then at p_1  M is in state q_*, and
 
| If at p_1  M is in state q_#, then at p_2  M is in state q_#, and
 
| If at p_1  M is in state q_*, then at p_2  M is in state q_*.
 
 
 
Terminal Conditions:
 
 
 
(( p2_q# )( p2_q* ))
 
 
 
In Ref Log, an expression of the form "(( X )( Y ))"
 
expresses a disjunction "X or Y" and it parses into:
 
 
 
    X  Y
 
    o  o
 
    \ /
 
      o
 
      |
 
      @
 
 
 
In effect, the Terminal Conditions state:
 
 
 
| At p_2,  M is in state q_#, or
 
| At p_2,  M is in state q_*.
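The two clause shapes met so far, implication and disjunction, have one-line Python counterparts, which makes readings like the ones just given easy to spot-check.  A sketch, with my own function names and a hypothetical truth-value assignment:

```python
def implies(x, y):
    """( X ( Y )) : not X without Y."""
    return (not x) or y

def either(x, y):
    """(( X )( Y )) : X or Y."""
    return x or y

# Sample reading of one Mediate Condition and the Terminal Condition,
# under one hypothetical assignment of truth values.
p0_qh, p1_qh = False, False    # "M is in state q_# at p_0 / at p_1"
p2_qh, p2_qs = True, False     # "M is in state q_# / q_* at p_2"
print(implies(p0_qh, p1_qh), either(p2_qh, p2_qs))
```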
 
 
 
State Partition:
 
 
 
(( p0_q0 ),( p0_q1 ),( p0_q# ),( p0_q* ))
 
(( p1_q0 ),( p1_q1 ),( p1_q# ),( p1_q* ))
 
(( p2_q0 ),( p2_q1 ),( p2_q# ),( p2_q* ))
 
 
 
In Ref Log, an expression of the form "(( e_1 ),( e_2 ),( ... ),( e_k ))"
 
expresses the fact that "exactly one of the e_j is true, for j = 1 to k".
 
Expressions of this form are called "universal partition" expressions, and
 
they parse into a type of graph called a "painted and rooted cactus" (PARC):
 
 
 
    e_1  e_2  ...  e_k
 
    o    o          o
 
    |    |          |
 
    o-----o--- ... ---o
 
      \              /
 
      \            /
 
        \          /
 
        \        /
 
          \      /
 
          \    /
 
            \  /
 
            \ /
 
              @
 
 
 
The State Partition expresses the conditions that:
 
 
 
| At each of the points-in-time p_i, for i = 0 to 2,
 
| M can be in exactly one state q_j, for j in the set {0, 1, #, *}.
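The "exactly one" reading of a universal partition expression also reduces to a one-liner, sketched here in Python with my own function name:

```python
def exactly_one(*terms):
    """(( e_1 ),( e_2 ),( ... ),( e_k )) : exactly one of the e_j is true."""
    return sum(bool(t) for t in terms) == 1

# The State Partition at p_0, under a sample assignment:
p0_q0, p0_q1, p0_qh, p0_qs = True, False, False, False
print(exactly_one(p0_q0, p0_q1, p0_qh, p0_qs))   # True
```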
 
 
 
Register Partition:
 
 
 
(( p0_r0 ),( p0_r1 ),( p0_r2 ))
 
(( p1_r0 ),( p1_r1 ),( p1_r2 ))
 
(( p2_r0 ),( p2_r1 ),( p2_r2 ))
 
 
 
The Register Partition expresses the conditions that:
 
 
 
| At each of the points-in-time p_i, for i = 0 to 2,
 
| H can be on exactly one cell  r_j, for j = 0 to 2.
 
 
 
Symbol Partition:
 
 
 
(( p0_r0_s0 ),( p0_r0_s1 ),( p0_r0_s# ))
 
(( p0_r1_s0 ),( p0_r1_s1 ),( p0_r1_s# ))
 
(( p0_r2_s0 ),( p0_r2_s1 ),( p0_r2_s# ))
 
 
 
(( p1_r0_s0 ),( p1_r0_s1 ),( p1_r0_s# ))
 
(( p1_r1_s0 ),( p1_r1_s1 ),( p1_r1_s# ))
 
(( p1_r2_s0 ),( p1_r2_s1 ),( p1_r2_s# ))
 
 
 
(( p2_r0_s0 ),( p2_r0_s1 ),( p2_r0_s# ))
 
(( p2_r1_s0 ),( p2_r1_s1 ),( p2_r1_s# ))
 
(( p2_r2_s0 ),( p2_r2_s1 ),( p2_r2_s# ))
 
 
 
The Symbol Partition expresses the conditions that:
 
 
 
| At each of the points-in-time p_i, for i in {0, 1, 2},
 
| in each of the tape-registers r_j, for j in {0, 1, 2},
 
| there can be exactly one sign s_k, for k in {0, 1, #}.
 
 
 
Interaction Conditions:
 
 
 
(( p0_r0 ) p0_r0_s0 ( p1_r0_s0 ))
 
(( p0_r0 ) p0_r0_s1 ( p1_r0_s1 ))
 
(( p0_r0 ) p0_r0_s# ( p1_r0_s# ))
 
 
 
(( p0_r1 ) p0_r1_s0 ( p1_r1_s0 ))
 
(( p0_r1 ) p0_r1_s1 ( p1_r1_s1 ))
 
(( p0_r1 ) p0_r1_s# ( p1_r1_s# ))
 
 
 
(( p0_r2 ) p0_r2_s0 ( p1_r2_s0 ))
 
(( p0_r2 ) p0_r2_s1 ( p1_r2_s1 ))
 
(( p0_r2 ) p0_r2_s# ( p1_r2_s# ))
 
 
 
(( p1_r0 ) p1_r0_s0 ( p2_r0_s0 ))
 
(( p1_r0 ) p1_r0_s1 ( p2_r0_s1 ))
 
(( p1_r0 ) p1_r0_s# ( p2_r0_s# ))
 
 
 
(( p1_r1 ) p1_r1_s0 ( p2_r1_s0 ))
 
(( p1_r1 ) p1_r1_s1 ( p2_r1_s1 ))
 
(( p1_r1 ) p1_r1_s# ( p2_r1_s# ))
 
 
 
(( p1_r2 ) p1_r2_s0 ( p2_r2_s0 ))
 
(( p1_r2 ) p1_r2_s1 ( p2_r2_s1 ))
 
(( p1_r2 ) p1_r2_s# ( p2_r2_s# ))
 
 
 
In briefest terms, the Interaction Conditions merely express
 
the circumstance that the sign in a tape-cell cannot change
 
between two points-in-time unless the tape-head is over the
 
cell in question at the initial one of those points-in-time.
 
All that we have to do is to see how they manage to say this.
 
 
 
In Ref Log, an expression of the following form:
 
 
 
"(( p<i>_r<j> ) p<i>_r<j>_s<k> ( p<i+1>_r<j>_s<k> ))",
 
 
 
and which parses as the graph:
 
 
 
      p<i>_r<j> o  o  p<i+1>_r<j>_s<k>
 
                  \ /
 
    p<i>_r<j>_s<k> o
 
                  |
 
                  @
 
 
 
can be read in the form of the following implication:
 
 
 
| If
 
| at the point-in-time p<i>, the tape-cell r<j> bears the mark s<k>,
 
| but it is not the case that
 
| at the point-in-time p<i>, the tape-head is on the tape-cell r<j>,
 
| then
 
| at the point-in-time p<i+1>, the tape-cell r<j> bears the mark s<k>.
 
 
 
Folks among us of a certain age and a peculiar manner of acculturation will
 
recognize these as the "Frame Conditions" for the change of state of the TM.
 
 
 
Transition Relations:
 
 
 
( p0_q0  p0_r1  p0_r1_s0  ( p1_q0  p1_r2  p1_r1_s0 ))
 
( p0_q0  p0_r1  p0_r1_s1  ( p1_q1  p1_r2  p1_r1_s1 ))
 
( p0_q0  p0_r1  p0_r1_s#  ( p1_q#  p1_r0  p1_r1_s# ))
 
( p0_q0  p0_r2  p0_r2_s#  ( p1_q#  p1_r1  p1_r2_s# ))
 
 
 
( p0_q1  p0_r1  p0_r1_s0  ( p1_q1  p1_r2  p1_r1_s0 ))
 
( p0_q1  p0_r1  p0_r1_s1  ( p1_q0  p1_r2  p1_r1_s1 ))
 
( p0_q1  p0_r1  p0_r1_s#  ( p1_q*  p1_r0  p1_r1_s# ))
 
( p0_q1  p0_r2  p0_r2_s#  ( p1_q*  p1_r1  p1_r2_s# ))
 
 
 
( p1_q0  p1_r1  p1_r1_s0  ( p2_q0  p2_r2  p2_r1_s0 ))
 
( p1_q0  p1_r1  p1_r1_s1  ( p2_q1  p2_r2  p2_r1_s1 ))
 
( p1_q0  p1_r1  p1_r1_s#  ( p2_q#  p2_r0  p2_r1_s# ))
 
( p1_q0  p1_r2  p1_r2_s#  ( p2_q#  p2_r1  p2_r2_s# ))
 
 
 
( p1_q1  p1_r1  p1_r1_s0  ( p2_q1  p2_r2  p2_r1_s0 ))
 
( p1_q1  p1_r1  p1_r1_s1  ( p2_q0  p2_r2  p2_r1_s1 ))
 
( p1_q1  p1_r1  p1_r1_s#  ( p2_q*  p2_r0  p2_r1_s# ))
 
( p1_q1  p1_r2  p1_r2_s#  ( p2_q*  p2_r1  p2_r2_s# ))
 
 
 
The Transition Conditions merely serve to express,
 
by means of 16 complex implication expressions,
 
the data of the TM table that was given above.
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~OUTPUTS~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
And here are the outputs of the computation,
 
as emulated by its propositional rendition,
 
and as actually generated within that form
 
of transmogrification by the program that
 
I wrote for finding all of the satisfying
 
interpretations (truth-value assignments)
 
of propositional expressions in Ref Log:
 
 
 
o~~~~~~~~~o~~~~~~~~~o~OUTPUT~0~o~~~~~~~~~o~~~~~~~~~o
 
 
 
Output Conditions:
 
 
 
p0_q0
 
  p0_r1
 
  p0_r0_s#
 
    p0_r1_s0
 
    p0_r2_s#
 
      p1_q0
 
      p1_r2
 
        p1_r2_s#
 
        p1_r0_s#
 
          p1_r1_s0
 
          p2_q#
 
            p2_r1
 
            p2_r0_s#
 
              p2_r1_s0
 
              p2_r2_s#
 
 
 
The Output Conditions amount to the sole satisfying interpretation,
that is, a "sequence of truth-value assignments" (SOTVA) that makes
the entire proposition come out true, and they state the following:
 
 
 
| At the point-in-time p_0, M is in the state q_0,      and
 
| At the point-in-time p_0, H is on the cell  r_1,      and
 
| At the point-in-time p_0, cell r_0 bears the mark "#", and
 
| At the point-in-time p_0, cell r_1 bears the mark "0", and
 
| At the point-in-time p_0, cell r_2 bears the mark "#", and
 
 
|
| At the point-in-time p_1, M is in the state q_0,       and
| At the point-in-time p_1, H is on the cell  r_2,       and
| At the point-in-time p_1, cell r_0 bears the mark "#", and
| At the point-in-time p_1, cell r_1 bears the mark "0", and
| At the point-in-time p_1, cell r_2 bears the mark "#", and
|
| At the point-in-time p_2, M is in the state q_#,       and
| At the point-in-time p_2, H is on the cell  r_1,       and
| At the point-in-time p_2, cell r_0 bears the mark "#", and
| At the point-in-time p_2, cell r_1 bears the mark "0", and
| At the point-in-time p_2, cell r_2 bears the mark "#".

| <p>Alas, and yet what ''are'' you, my written and painted thoughts!  It is not long ago that you were still so many-coloured, young and malicious, so full of thorns and hidden spices you made me sneeze and laugh &mdash; and now?  You have already taken off your novelty and some of you, I fear, are on the point of becoming truths:  they already look so immortal, so pathetically righteous, so boring!</p>
|-
| align="right" | &mdash; Nietzsche, ''Beyond Good and Evil'', [Nie-2, ¶ 296]
|}
 
  
In brief, the output for our sake being the symbol that rests
+
In this Subsection, I describe a particular semantics for the painted cactus language, telling what meanings I aim to attach to its bare syntactic forms.  This supplies an ''interpretation'' for this parametric family of formal languages, but it is good to remember that it forms just one of many such interpretations that are conceivable and even viable.  In deed, the distinction between the object domain and the sign domain can be observed in the fact that many languages can be deployed to depict the same set of objects and that any language worth its salt is bound to give rise to many different forms of interpretive saliency.
under the tape-head H when the machine M gets to a rest state,
 
we are now amazed by the remarkable result that Parity(0) = 0.
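For a cross-check on the emulation, here is a minimal sketch (in Python, independent of the Theme One program and of the propositional encoding) of the parity machine itself, with the rules transcribed by hand from the Transition Relations given above; state and symbol names follow the text.

```python
# Direct simulation of the two-state parity machine whose propositional
# emulation is discussed above.  States q0/q1 track even/odd parity;
# q# and q* are the rest states reached on reading the end marker "#".

def parity_machine(tape, head=1, state="q0"):
    """Run the machine to a rest state and return the symbol then
    resting under the tape-head, which encodes Parity(input)."""
    rules = {
        ("q0", "0"): ("q0", "0", +1),   # even so far, step right
        ("q0", "1"): ("q1", "1", +1),   # flip to odd, step right
        ("q0", "#"): ("q#", "#", -1),   # rest state for parity 0
        ("q1", "0"): ("q1", "0", +1),
        ("q1", "1"): ("q0", "1", +1),
        ("q1", "#"): ("q*", "#", -1),   # rest state for parity 1
    }
    tape = list(tape)
    while state not in ("q#", "q*"):
        state, mark, move = rules[(state, tape[head])]
        tape[head] = mark
        head += move
    return tape[head]

print(parity_machine("#0#"))  # -> 0, matching OUTPUT 0 above
print(parity_machine("#1#"))  # -> 1, matching OUTPUT 1 above
```

The trace for each input visits the same states, cells, and marks as the satisfying interpretations listed above.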
 
  
o~~~~~~~~~o~~~~~~~~~o~OUTPUT~1~o~~~~~~~~~o~~~~~~~~~o
+
In formal settings, it is common to speak of interpretation as if it created a direct connection between the signs of a formal language and the objects of the intended domain, in other words, as if it determined the denotative component of a sign relation.  But a closer attention to what goes on reveals that the process of interpretation is more indirect, that what it does is to provide each sign of a prospectively meaningful source language with a translation into an already established target language, where ''already established'' means that its relationship to pragmatic objects is taken for granted at the moment in question.
 
 
Output Conditions:
 
 
 
p0_q0
 
  p0_r1
 
  p0_r0_s#
 
    p0_r1_s1
 
    p0_r2_s#
 
      p1_q1
 
      p1_r2
 
        p1_r2_s#
 
        p1_r0_s#
 
          p1_r1_s1
 
          p2_q*
 
            p2_r1
 
            p2_r0_s#
 
              p2_r1_s1
 
              p2_r2_s#
 
 
 
The Output Conditions amount to the sole satisfying interpretation,
 
that is, a "sequence of truth-value assignments" (SOTVA) that makes
 
the entire proposition come out true, and they state the following:
 
 
 
| At the point-in-time p_0, M is in the state q_0,       and
 
| At the point-in-time p_0, H is on the cell  r_1,      and
 
| At the point-in-time p_0, cell r_0 bears the mark "#", and
 
| At the point-in-time p_0, cell r_1 bears the mark "1", and
 
| At the point-in-time p_0, cell r_2 bears the mark "#", and
 
|
 
| At the point-in-time p_1, M is in the state q_1,       and
 
| At the point-in-time p_1, H is on the cell  r_2,       and
 
| At the point-in-time p_1, cell r_0 bears the mark "#", and
 
| At the point-in-time p_1, cell r_1 bears the mark "1", and
 
| At the point-in-time p_1, cell r_2 bears the mark "#", and
 
|
 
| At the point-in-time p_2, M is in the state q_*,      and
 
| At the point-in-time p_2, H is on the cell  r_1,      and
 
| At the point-in-time p_2, cell r_0 bears the mark "#", and
 
| At the point-in-time p_2, cell r_1 bears the mark "1", and
 
| At the point-in-time p_2, cell r_2 bears the mark "#".
 
  
In brief, the output for our sake being the symbol that rests
+
With this in mind, it is clear that interpretation is an affair of signs that at best respects the objects of all of the signs that enter into it, and so it is the connotative aspect of semiotics that is at stake here.  There is nothing wrong with my saying that I interpret a sentence of a formal language as a sign that refers to a function or to a proposition, so long as you understand that this reference is likely to be achieved by way of more familiar and perhaps less formal signs that you already take to denote those objects.
under the tape-head H when the machine M gets to a rest state,
 
we are now amazed by the remarkable result that Parity(1) = 1.
 
  
I realized after sending that last bunch of bits that there is room
+
On entering a context where a logical interpretation is intended for the sentences of a formal language there are a few conventions that make it easier to make the translation from abstract syntactic forms to their intended semantic senses.  Although these conventions are expressed in unnecessarily colorful terms, from a purely abstract point of view, they do provide a useful array of connotations that help to negotiate what is otherwise a difficult transition.  This terminology is introduced as the need for it arises in the process of interpreting the cactus language.
for confusion about what is the input/output of the Study module of
 
the Theme One program as opposed to what is the input/output of the
 
"finitely approximated turing automaton" (FATA).  So here is a better
 
delineation of what's what.  The input to Study is a text file that
 
is known as LogFile(Whatever) and the output of Study is a sequence
 
of text files that summarize the various canonical and normal forms
 
that it generates.  For short, let us call these NormFile(Whatelse).
 
With that in mind, here are the actual IO's of Study, excluding the
 
glosses in square brackets:
 
  
o~~~~~~~~~o~~~~~~~~~o~~INPUT~~o~~~~~~~~~o~~~~~~~~~o
+
The task of this Subsection is to specify a ''semantic function'' for the sentences of the cactus language <math>\mathfrak{L} = \mathfrak{C}(\mathfrak{P}),</math> in other words, to define a mapping that "interprets" each sentence of <math>\mathfrak{C}(\mathfrak{P})</math> as a sentence that says something, as a sentence that bears a meaning, in short, as a sentence that denotes a proposition, and thus as a sign of an indicator function.  When the syntactic sentences of a formal language are given a referent significance in logical terms, for example, as denoting propositions or indicator functions, then each form of syntactic combination takes on a corresponding form of logical significance.
  
[Input To Study = FATA Initial Conditions + FATA Program Conditions]
+
By way of providing a logical interpretation for the cactus language, I introduce a family of operators on indicator functions that are called ''propositional connectives'', and I distinguish these from the associated family of syntactic combinations that are called ''sentential connectives'', where the relationship between these two realms of connection is exactly that between objects and their signs.  A propositional connective, as an entity of a well-defined functional and operational type, can be treated in every way as a logical or a mathematical object, and thus as the type of object that can be denoted by the corresponding form of syntactic entity, namely, the sentential connective that is appropriate to the case in question.
  
[FATA Initial Conditions For Input 0]
+
There are two basic types of connectives, called the ''blank connectives'' and the ''bound connectives'', respectively, with one connective of each type for each natural number <math>k = 0, 1, 2, 3, \ldots.</math>
  
p0_q0
+
<ol style="list-style-type:decimal">
  
p0_r1
+
<li>
 +
<p>The ''blank connective'' of <math>k\!</math> places is signified by the concatenation of the <math>k\!</math> sentences that fill those places.</p>
  
p0_r0_s#
+
<p>For the special case of <math>k = 0,\!</math> the blank connective is taken to be an empty string or a blank symbol &mdash; it does not matter which, since both are assigned the same denotation among propositions.</p>
p0_r1_s0
 
p0_r2_s#
 
  
[FATA Program Conditions For Parity Machine]
+
<p>For the generic case of <math>k > 0,\!</math> the blank connective takes the form <math>s_1 \cdot \ldots \cdot s_k.</math>  In the type of data that is called a ''text'', the use of the center dot <math>(\cdot)</math> is generally supplanted by whatever number of spaces and line breaks serve to improve the readability of the resulting text.</p></li>
  
[Mediate Conditions]
+
<li>
 +
<p>The ''bound connective'' of <math>k\!</math> places is signified by the surcatenation of the <math>k\!</math> sentences that fill those places.</p>
  
( p0_q#  ( p1_q# ))
+
<p>For the special case of <math>k = 0,\!</math> the bound connective is taken to be an empty closure &mdash; an expression enjoying one of the forms <math>\underline{(} \underline{)}, \, \underline{(} ~ \underline{)}, \, \underline{(} ~~ \underline{)}, \, \ldots</math> with any number of blank symbols between the parentheses &mdash; all of which are assigned the same logical denotation among propositions.</p>
( p0_q*  ( p1_q* ))
 
  
( p1_q#  ( p2_q# ))
+
<p>For the generic case of <math>k > 0,\!</math> the bound connective takes the form <math>\underline{(} s_1, \ldots, s_k \underline{)}.</math></p></li>
( p1_q*  ( p2_q* ))
 
  
[Terminal Conditions]
+
</ol>
  
(( p2_q# )( p2_q* ))
+
At this point, there are actually two different dialects, scripts, or modes of presentation for the cactus language that need to be interpreted, in other words, that need to have a semantic function defined on their domains.
  
[State Partition]
+
<ol style="list-style-type:lower-alpha">
  
(( p0_q0 ),( p0_q1 ),( p0_q# ),( p0_q* ))
+
<li>There is the literal formal language of strings in <math>\operatorname{PARCE} (\mathfrak{P}),</math> the ''painted and rooted cactus expressions'' that constitute the language <math>\mathfrak{L} = \mathfrak{C} (\mathfrak{P}) \subseteq \mathfrak{A}^* = (\mathfrak{M} \cup \mathfrak{P})^*.</math></li>
(( p1_q0 ),( p1_q1 ),( p1_q# ),( p1_q* ))
 
(( p2_q0 ),( p2_q1 ),( p2_q# ),( p2_q* ))
 
  
[Register Partition]
+
<li>There is the figurative formal language of graphs in <math>\operatorname{PARC} (\mathfrak{P}),</math> the ''painted and rooted cacti'' themselves, a parametric family of graphs or a species of computational data structures that is graphically analogous to the language of literal strings.</li>
  
(( p0_r0 ),( p0_r1 ),( p0_r2 ))
+
</ol>
(( p1_r0 ),( p1_r1 ),( p1_r2 ))
 
(( p2_r0 ),( p2_r1 ),( p2_r2 ))
 
  
[Symbol Partition]
+
Of course, these two modalities of formal language, like written and spoken natural languages, are meant to have compatible interpretations, and so it is usually sufficient to give just the meanings of either one.  All that remains is to provide a ''codomain'' or a ''target space'' for the intended semantic function, in other words, to supply a suitable range of logical meanings for the memberships of these languages to map into.  Out of the many interpretations that are formally possible to arrange, one way of doing this proceeds by making the following definitions:
  
(( p0_r0_s0 ),( p0_r0_s1 ),( p0_r0_s# ))
+
<ol style="list-style-type:decimal">
(( p0_r1_s0 ),( p0_r1_s1 ),( p0_r1_s# ))
 
(( p0_r2_s0 ),( p0_r2_s1 ),( p0_r2_s# ))
 
  
(( p1_r0_s0 ),( p1_r0_s1 ),( p1_r0_s# ))
+
<li>
(( p1_r1_s0 ),( p1_r1_s1 ),( p1_r1_s# ))
+
<p>The ''conjunction'' <math>\operatorname{Conj}_j^J q_j</math> of a set of propositions, <math>\{ q_j : j \in J \},</math> is a proposition that is true if and only if every one of the <math>q_j\!</math> is true.</p>
(( p1_r2_s0 ),( p1_r2_s1 ),( p1_r2_s# ))
 
  
(( p2_r0_s0 ),( p2_r0_s1 ),( p2_r0_s# ))
+
<p><math>\operatorname{Conj}_j^J q_j</math> is true &nbsp;<math>\Leftrightarrow</math>&nbsp; <math>q_j\!</math> is true for every <math>j \in J.</math></p></li>
(( p2_r1_s0 ),( p2_r1_s1 ),( p2_r1_s# ))
 
(( p2_r2_s0 ),( p2_r2_s1 ),( p2_r2_s# ))
 
  
[Interaction Conditions]
+
<li>
 +
<p>The ''surjunction'' <math>\operatorname{Surj}_j^J q_j</math> of a set of propositions, <math>\{ q_j : j \in J \},</math> is a proposition that is true if and only if exactly one of the <math>q_j\!</math> is untrue.</p>
  
(( p0_r0 ) p0_r0_s0 ( p1_r0_s0 ))
+
<p><math>\operatorname{Surj}_j^J q_j</math> is true &nbsp;<math>\Leftrightarrow</math>&nbsp;  <math>q_j\!</math> is untrue for unique <math>j \in J.</math></p></li>
(( p0_r0 ) p0_r0_s1 ( p1_r0_s1 ))
 
(( p0_r0 ) p0_r0_s# ( p1_r0_s# ))
 
  
(( p0_r1 ) p0_r1_s0 ( p1_r1_s0 ))
+
</ol>
(( p0_r1 ) p0_r1_s1 ( p1_r1_s1 ))
 
(( p0_r1 ) p0_r1_s# ( p1_r1_s# ))
 
  
(( p0_r2 ) p0_r2_s0 ( p1_r2_s0 ))
+
If the number of propositions that are being joined together is finite, then the conjunction and the surjunction can be represented by means of sentential connectives, incorporating the sentences that represent these propositions into finite strings of symbols.
(( p0_r2 ) p0_r2_s1 ( p1_r2_s1 ))
 
(( p0_r2 ) p0_r2_s# ( p1_r2_s# ))
 
  
(( p1_r0 ) p1_r0_s0 ( p2_r0_s0 ))
+
If <math>J\!</math> is finite, for instance, if <math>J\!</math> consists of the integers in the interval <math>j = 1 ~\text{to}~ k,</math> and if each proposition <math>q_j\!</math> is represented by a sentence <math>s_j,\!</math> then the following strategies of expression are open:
(( p1_r0 ) p1_r0_s1 ( p2_r0_s1 ))
 
(( p1_r0 ) p1_r0_s# ( p2_r0_s# ))
 
  
(( p1_r1 ) p1_r1_s0 ( p2_r1_s0 ))
+
<ol style="list-style-type:decimal">
(( p1_r1 ) p1_r1_s1 ( p2_r1_s1 ))
 
(( p1_r1 ) p1_r1_s# ( p2_r1_s# ))
 
  
(( p1_r2 ) p1_r2_s0 ( p2_r2_s0 ))
+
<li>
(( p1_r2 ) p1_r2_s1 ( p2_r2_s1 ))
+
<p>The conjunction <math>\operatorname{Conj}_j^J q_j</math> can be represented by a sentence that is constructed by concatenating the <math>s_j\!</math> in the following fashion:</p>
(( p1_r2 ) p1_r2_s# ( p2_r2_s# ))
 
  
[Transition Relations]
+
<p><math>\operatorname{Conj}_j^J q_j ~\leftrightsquigarrow~ s_1 s_2 \ldots s_k.</math></p></li>
  
( p0_q0  p0_r1  p0_r1_s0  ( p1_q0  p1_r2  p1_r1_s0 ))
+
<li>
( p0_q0  p0_r1  p0_r1_s1  ( p1_q1  p1_r2  p1_r1_s1 ))
+
<p>The surjunction <math>\operatorname{Surj}_j^J q_j</math> can be represented by a sentence that is constructed by surcatenating the <math>s_j\!</math> in the following fashion:</p>
( p0_q0  p0_r1  p0_r1_s#  ( p1_q#  p1_r0  p1_r1_s# ))
 
( p0_q0  p0_r2  p0_r2_s#  ( p1_q#  p1_r1  p1_r2_s# ))
 
  
( p0_q1  p0_r1  p0_r1_s0  ( p1_q1  p1_r2  p1_r1_s0 ))
+
<p><math>\operatorname{Surj}_j^J q_j ~\leftrightsquigarrow~ \underline{(} s_1, s_2, \ldots, s_k \underline{)}.</math></p></li>
( p0_q1  p0_r1  p0_r1_s1  ( p1_q0  p1_r2  p1_r1_s1 ))
 
( p0_q1  p0_r1  p0_r1_s#  ( p1_q*  p1_r0  p1_r1_s# ))
 
( p0_q1  p0_r2  p0_r2_s#  ( p1_q*  p1_r1  p1_r2_s# ))
 
  
( p1_q0  p1_r1  p1_r1_s0  ( p2_q0  p2_r2  p2_r1_s0 ))
+
</ol>
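The two connectives just defined can be rendered directly as boolean operations.  The following is an illustrative sketch in Python (not part of the text's own apparatus); note that the boundary cases agree with the values assigned to the blank connective and the empty closure.

```python
# Boolean renderings of the two propositional connectives defined above:
# conjunction = every argument true; surjunction = exactly one untrue.

def conj(props):
    """Conj_j q_j : true iff every proposition in the collection is true."""
    return all(props)

def surj(props):
    """Surj_j q_j : true iff exactly one proposition is untrue."""
    return sum(1 for p in props if not p) == 1

print(conj([True, True, True]))   # -> True
print(surj([True, False, True]))  # -> True
print(conj([]), surj([]))         # -> True False : the k = 0 cases
```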
( p1_q0  p1_r1  p1_r1_s1  ( p2_q1  p2_r2  p2_r1_s1 ))
 
( p1_q0  p1_r1  p1_r1_s#  ( p2_q#  p2_r0  p2_r1_s# ))
 
( p1_q0  p1_r2  p1_r2_s#  ( p2_q#  p2_r1  p2_r2_s# ))
 
  
( p1_q1  p1_r1  p1_r1_s0  ( p2_q1  p2_r2  p2_r1_s0 ))
+
If one opts for a mode of interpretation that moves more directly from the parse graph of a sentence to the potential logical meaning of both the PARC and the PARCE, then the following specifications are in order:
( p1_q1  p1_r1  p1_r1_s1  ( p2_q0  p2_r2  p2_r1_s1 ))
 
( p1_q1  p1_r1  p1_r1_s#  ( p2_q*  p2_r0  p2_r1_s# ))
 
( p1_q1  p1_r2  p1_r2_s#  ( p2_q*  p2_r1  p2_r2_s# ))
 
  
o~~~~~~~~~o~~~~~~~~~o~~OUTPUT~~o~~~~~~~~~o~~~~~~~~~o
+
A cactus rooted at a particular node is taken to represent what that node denotes, its logical denotation or its logical interpretation.
  
[Output Of Study = FATA Output For Input 0]
+
# The logical denotation of a node is the logical conjunction of that node's arguments, which are defined as the logical denotations of that node's attachments.  The logical denotation of either a blank symbol or an empty node is the boolean value <math>\underline{1} = \operatorname{true}.</math>  The logical denotation of the paint <math>\mathfrak{p}_j\!</math> is the proposition <math>p_j,\!</math> a proposition that is regarded as ''primitive'', at least, with respect to the level of analysis that is represented in the current instance of <math>\mathfrak{C} (\mathfrak{P}).</math>
 +
# The logical denotation of a lobe is the logical surjunction of that lobe's arguments, which are defined as the logical denotations of that lobe's accoutrements.  As a corollary, the logical denotation of the parse graph of <math>\underline{(} \underline{)},</math> otherwise called a ''needle'', is the boolean value <math>\underline{0} = \operatorname{false}.</math>
  
p0_q0
+
If one takes the point of view that PARCs and PARCEs amount to a pair of intertranslatable languages for the same domain of objects, then denotation brackets of the form <math>\downharpoonleft \ldots \downharpoonright</math> can be used to indicate the logical denotation <math>\downharpoonleft C_j \downharpoonright</math> of a cactus <math>C_j\!</math> or the logical denotation <math>\downharpoonleft s_j \downharpoonright</math> of a sentence <math>s_j.\!</math>
  p0_r1
 
  p0_r0_s#
 
    p0_r1_s0
 
    p0_r2_s#
 
      p1_q0
 
      p1_r2
 
        p1_r2_s#
 
        p1_r0_s#
 
          p1_r1_s0
 
          p2_q#
 
            p2_r1
 
            p2_r0_s#
 
              p2_r1_s0
 
              p2_r2_s#
 
  
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
+
Tables&nbsp;14 and 15 summarize the relations that serve to connect the formal language of sentences with the logical language of propositions.  Between these two realms of expression there is a family of graphical data structures that arise in parsing the sentences and that serve to facilitate the performance of computations on the indicator functions.  The graphical language supplies an intermediate form of representation between the formal sentences and the indicator functions, and the form of mediation that it provides is very useful in rendering the possible connections between the other two languages conceivable in fact, not to mention in carrying out the necessary translations on a practical basis.  These Tables include this intermediate domain in their Central Columns.  Between their First and Middle Columns they illustrate the mechanics of parsing the abstract sentences of the cactus language into the graphical data structures of the corresponding species.  Between their Middle and Final Columns they summarize the semantics of interpreting the graphical forms of representation for the purposes of reasoning with propositions.
  
Turing automata, finitely approximated or not, make my head spin and
+
<br>
my tape go loopy, and I still believe 'twere a far better thing I do
 
if I work up to that level of complexity in a more gracile graduated
 
manner.  So let us return to our Example in this gradual progress to
 
that vastly more well-guarded grail of our long-term pilgrim's quest:
 
  
|                 boy  male          girl  female
+
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
|                   o---o child          o---o child
+
|+ style="height:30px" | <math>\text{Table 14.} ~~ \text{Semantic Translation : Functional Form}\!</math>
| male  female      \ /                  \ /               child  human
+
|- style="height:40px; background:ghostwhite"
|     o---o            o                    o                    o--o
 
|      \ /            |                    |                    |
 
|      @              @                    @                    @
 
 
|
 
|
| (male , female)((boy , male child))((girl , female child))(child (human))
+
{| align="center" border="0" cellpadding="8" cellspacing="0" style="background:ghostwhite; width:100%"
 
+
| width="20%" | <math>\mathrm{Sentence}\!</math>
One section of the Theme One program has a suite of utilities that fall
+
| width="20%" | <math>\xrightarrow{\mathrm{Parse}}\!</math>
under the title "Theme One Study" ("To Study", or just "TOS" for short).
+
| width="20%" | <math>\mathrm{Graph}\!</math>
To Study is to read and to parse a so-called and a generally so-suffixed
+
| width="20%" | <math>\xrightarrow{\mathrm{Denotation}}\!</math>
"log" file, and then to conjoin what is called a "query", which is really
+
| width="20%" | <math>\mathrm{Proposition}\!</math>
just an additional propositional expression that imposes a further logical
+
|}
constraint on the input expression.
+
|-
 
 
The Figure roughly sketches the conjuncts of the graph-theoretic
 
data structure that the parser would commit to memory on reading
 
the appropriate log file that contains the text along the bottom.
 
 
 
I will now explain the various sorts of things that the TOS utility
 
can do with the log file that describes the universe of discourse in
 
our present Example.
 
 
 
Theme One Study is built around a suite of four successive generators
 
of "normal forms" for propositional expressions, just to use that term
 
in a very approximate way.  The functions that compute these normal forms
 
are called "Model", "Tenor", "Canon", and "Sense", and so we may refer to
 
their text-style outputs as the "mod", "ten", "can", and "sen" files.
 
 
 
Though it could be any propositional expression on the same vocabulary
 
$A$ = {"boy", "child", "female", "girl", "human", "male"}, more usually
 
the query is a simple conjunction of one or more positive features that
 
we want to focus on or perhaps to filter out of the logical model space.
 
On our first run through this Example, we take the log file proposition
 
as it is, with no extra riders.
 
 
 
| Procedural Note.  TO Study Model displays a running tab of how much
 
| free memory space it has left.  On some of the harder problems that
 
| you may think of to give it, Model may run out of free memory and
 
| terminate, abnormally exiting Theme One.  Sometimes it helps to:
 
 
|
 
|
| 1.  Rephrase the problem in logically equivalent
+
{| align="center" border="0" cellpadding="8" cellspacing="0" width="100%"
|     but rhetorically increasingly felicitous ways.
+
| width="20%" | <math>s_j\!</math>
 +
| width="20%" | <math>\xrightarrow{\qquad}\!</math>
 +
| width="20%" | <math>C_j\!</math>
 +
| width="20%" | <math>\xrightarrow{\qquad}\!</math>
 +
| width="20%" | <math>q_j\!</math>
 +
|}
 +
|-
 
|
 
|
| 2.  Think of additional facts that are taken for granted but not
+
{| align="center" border="0" cellpadding="8" cellspacing="0" width="100%"
|     made explicit and that cannot be logically inferred by Model.
+
| width="20%" | <math>\mathrm{Conc}^0\!</math>
 
+
| width="20%" | <math>\xrightarrow{\qquad}\!</math>
After Model has finished, it is ready to write out its mod file,
+
| width="20%" | <math>\mathrm{Node}^0\!</math>
which you may choose to show on the screen or save to a named file.
+
| width="20%" | <math>\xrightarrow{\qquad}\!</math>
Mod files are usually too long to see (or to care to see) all at once
+
| width="20%" | <math>\underline{1}\!</math>
on the screen, so it is very often best to save them for later replay.
+
|-
In our Example the Model function yields a mod file that looks like so:
+
| width="20%" | <math>\mathrm{Conc}^k_j s_j\!</math>
 
+
| width="20%" | <math>\xrightarrow{\qquad}\!</math>
Model Output and
+
| width="20%" | <math>\mathrm{Node}^k_j C_j\!</math>
Mod File Example
+
| width="20%" | <math>\xrightarrow{\qquad}\!</math>
o-------------------o
+
| width="20%" | <math>\mathrm{Conj}^k_j q_j\!</math>
| male              |
+
|}
| female -        | 1
+
|-
| (female )       |
 
|  girl -          |  2
 
| (girl )         |
 
|   child          |
 
|     boy          |
 
|      human *      |  3 *
 
|      (human ) -  |  4
 
|    (boy ) -      |  5
 
|   (child )      |
 
|     boy -         |  6
 
|     (boy ) *      | 7 *
 
| (male )           |
 
|  female          |
 
|  boy -          |  8
 
| (boy )          |
 
|   child          |
 
|     girl          |
 
|      human *      |  9 *
 
|      (human ) -  | 10
 
|    (girl ) -    | 11
 
|    (child )      |
 
|     girl -        | 12
 
|    (girl ) *    | 13 *
 
|  (female ) -      | 14
 
o-------------------o
 
 
 
Counting the stars "*" that indicate true interpretations
 
and the bars "-" that indicate false interpretations of
 
the input formula, we can see that the Model function,
 
out of the 64 possible interpretations, has actually
 
gone through the work of making just 14 evaluations,
 
all in order to find the 4 models that are allowed
 
by the input definitions.
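As a cross-check on that count (a brute-force sketch in Python, independent of Theme One), the four definitions in the log file can be rendered by hand as boolean constraints and tested against all 64 interpretations.  The two model paths that leave "human" unspecified each stand for two total assignments, so the four model paths cover six assignments in all:

```python
# Brute-force check of the Example's log file over its six features.
from itertools import product

FEATURES = ["boy", "child", "female", "girl", "human", "male"]

def holds(v):
    """The log file's four definitions, rendered as boolean constraints."""
    return ((v["male"] != v["female"])                   # (male , female)
        and (v["boy"] == (v["male"] and v["child"]))     # ((boy , male child))
        and (v["girl"] == (v["female"] and v["child"]))  # ((girl , female child))
        and ((not v["child"]) or v["human"]))            # (child (human))

models = [dict(zip(FEATURES, bits))
          for bits in product([False, True], repeat=len(FEATURES))
          if holds(dict(zip(FEATURES, bits)))]
print(len(models))  # -> 6 total assignments behind the 4 model paths
```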
 
 
 
To be clear about what this output means, the starred paths
 
indicate all of the complete specifications of objects in the
 
universe of discourse, that is, all of the consistent feature
 
conjunctions of maximum length, as permitted by the definitions
 
that are given in the log file.
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
Let's take a little break from the Example in progress
 
and look at where we are and what we have been doing from
 
computational, logical, and semiotic perspectives.  Because,
 
after all, as is usually the case, we should not let our focus
 
and our fascination with this particular Example prevent us from
 
recognizing it, and all that we do with it, as just an Example of
 
much broader paradigms and predicaments and principles, not to say
 
but a glimmer of ultimately more concernful and fascinating objects.
 
 
 
I chart the progression that we have just passed through in this way:
 
 
 
|                   Parse
 
|      Sign A  o-------------->o  Sign 1
 
|            ^               |
 
|            /                 |
 
|          /                  |
 
|          /                  |
 
| Object  o                    |  Transform
 
|          ^                  |
 
|          \                  |
 
|            \                |
 
|             \                v
 
|     Sign B  o<--------------o  Sign 2
 
|                    Verse
 
 
|
 
|
| Figure.  Computation As Sign Transformation
+
{| align="center" border="0" cellpadding="8" cellspacing="0" width="100%"
 
+
| width="20%" | <math>\mathrm{Surc}^0\!</math>
In the present case, the Object is an objective situation
+
| width="20%" | <math>\xrightarrow{\qquad}\!</math>
or a state of affairs, in effect, a particular pattern of
+
| width="20%" | <math>\mathrm{Lobe}^0\!</math>
feature concurrences occurring to us in that world through
+
| width="20%" | <math>\xrightarrow{\qquad}\!</math>
which we find ourselves most frequently faring, willy-nilly,
+
| width="20%" | <math>\underline{0}\!</math>
and the Signs are different tokens and different types of
+
|-
data structures that we somehow or other find it useful
+
| width="20%" | <math>\mathrm{Surc}^k_j s_j~\!</math>
to devise or to discover for the sake of representing
+
| width="20%" | <math>\xrightarrow{\qquad}\!</math>
current objects to ourselves on a recurring basis.
+
| width="20%" | <math>\mathrm{Lobe}^k_j C_j\!</math>
 
+
| width="20%" | <math>\xrightarrow{\qquad}\!</math>
But not all signs, not even signs of a single object, are alike
+
| width="20%" | <math>\mathrm{Surj}^k_j q_j\!</math>
in every other respect that one might name, not even with respect
+
|}
to their powers of relating, significantly, to that common object.
+
|}
 
 
And that is what our whole business of computation busies itself about,
 
when it minds its business best, that is, transmuting signs into signs
 
in ways that augment their powers of relating significantly to objects.
 
 
 
We have seen how the Model function and the mod output format
 
indicate all of the complete specifications of objects in the
 
universe of discourse, that is, all of the consistent feature
 
conjunctions of maximal specificity that are permitted by the
 
constraints or the definitions that are given in the log file.
 
 
 
To help identify these specifications of particular cells in
 
the universe of discourse, the next function and output format,
 
called "Tenor", edits the mod file to give only the true paths,
 
in effect, the "positive models", that are by default what we
 
usually mean when we say "models", and not the "anti-models"
 
or the "negative models" that fail to satisfy the formula
 
in question.
 
 
 
In the present Example the Tenor function
 
generates a Ten file that looks like this:
 
 
 
Tenor Output and
 
Ten File Example
 
o-------------------o
 
| male              |
 
| (female )       |
 
| (girl )         |
 
|   child          |
 
|     boy          |
 
|      human *      | <1>
 
|    (child )       |
 
|    (boy ) *      | <2>
 
| (male )          |
 
| female          |
 
|   (boy )          |
 
|   child          |
 
|    girl          |
 
|      human *      | <3>
 
|    (child )       |
 
|     (girl ) *    | <4>
 
o-------------------o
 
 
 
As I said, the Tenor function just abstracts a transcript of the models,
 
that is, the satisfying interpretations, that were already interspersed
 
throughout the complete Model output.  These specifications, or feature
 
conjunctions, with the positive and the negative features listed in the
 
order of their actual budding on the "arboreal boolean expansion" twigs,
 
may be gathered and arranged in this antherypulogical flowering bouquet:
 
 
 
1.  male      (female )  (girl )   child     boy       human  *
 
2.  male      (female )  (girl )   (child )  (boy )           *
 
3.  (male )   female     (boy )    child     girl      human  *
 
4.  (male )   female     (boy )    (child )  (girl )          *
 
  
Notice that Model, as reflected in this abstract, did not consider
+
<br>
the six positive features in the same order along each path.  This
 
is because the algorithm was designed to proceed opportunistically
 
in its attempt to reduce the original proposition through a series
 
of case-analytic considerations and the resulting simplifications.
 
  
Notice, too, that Model is something of a lazy evaluator, quitting work
+
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
when and if a value is determined by less than the full set of variables.
+
|+ style="height:30px" | <math>\text{Table 15.} ~~ \text{Semantic Translation : Equational Form}\!</math>
This is the reason why paths <2> and <4> are not ostensibly of the maximum
+
|- style="height:40px; background:ghostwhite"
length.  According to this lazy mode of understanding, any path that is not
 
specified on a set of features really stands for the whole bundle of paths
 
that are derived by freely varying those features.  Thus, specifications
 
<2> and <4> summarize four models altogether, with the logical choice
 
between "human" and "not human" being left open at the point where
 
they leave off their branches in the relevant deciduous tree.
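The way an underspecified path stands for a whole bundle of total assignments can be sketched as follows (illustrative Python; the path dictionary transcribes specification <2> from the Tenor output):

```python
# Expand a lazily terminated model path into the total assignments it
# summarizes: any feature the path leaves unmentioned varies freely.
from itertools import product

FEATURES = ["boy", "child", "female", "girl", "human", "male"]

def expand(path):
    """Yield every total assignment consistent with a partial path."""
    free = [f for f in FEATURES if f not in path]
    for bits in product([False, True], repeat=len(free)):
        total = dict(path)
        total.update(zip(free, bits))
        yield total

# Specification <2>:  male (female ) (girl ) (child ) (boy ) -- human open.
path_2 = {"male": True, "female": False, "girl": False,
          "child": False, "boy": False}
print(len(list(expand(path_2))))  # -> 2 models summarized by this path
```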
 
 
 
The last two functions in the Study section, "Canon" and "Sense",
 
extract further derivatives of the normal forms that are produced
 
by Model and Tenor.  Both of these functions take the set of model
 
paths and simply throw away the negative labels.  You may think of
 
these as the "rose colored glasses" or "job interview" normal forms,
 
in that they try to say everything that's true, so long as it can be
 
expressed in positive terms.  Generally, this would mean losing a lot
 
of information, and the result could no longer be expected to have the
 
property of remaining logically equivalent to the original proposition.
 
 
 
Fortunately, however, it seems that this type of positive projection of
 
the whole truth is just what is possible, most needed, and most clear in
 
many of the "natural" examples, that is, in examples that arise from the
 
domains of natural language and natural conceptual kinds.  In these cases,
 
where most of the logical features are redundantly coded, for example, in
 
the way that "adult" = "not child" and "child" = "not adult", the positive
 
feature-bearing redacts are often sufficiently expressive all by themselves.
 
 
 
Canon merely censors its printing of the negative labels as it traverses the
 
model tree.  This leaves the positive labels in their original columns of the
 
outline form, giving it a slightly skewed appearance.  This can be misleading
 
unless you already know what you are looking for.  However, this Canon format
 
is computationally quick, and frequently suffices, especially if you already
 
have a likely clue about what to expect in the way of a question's outcome.
 
 
 
In the present Example the Canon function
 
generates a Can file that looks like this:
 
 
 
Canon Output and
 
Can File Example
 
o-------------------o
 
| male              |
 
|    child          |
 
|    boy          |
 
|      human        |
 
|  female          |
 
|    child          |
 
|    girl          |
 
|      human        |
 
o-------------------o
 
 
 
The Sense function does the extra work that is required
 
to place the positive labels of the model tree at their
 
proper level in the outline.
 
 
 
In the present Example the Sense function
 
generates a Sen file that looks like this:
 
 
 
Sense Output and
 
Sen File Example
 
o-------------------o
 
| male              |
 
|  child            |
 
|   boy            |
 
|    human          |
 
| female            |
 
|  child            |
 
|   girl           |
 
|    human          |
 
o-------------------o
 
 
 
The Canon and Sense outlines for this Example illustrate a certain
 
type of general circumstance that needs to be noted at this point.
 
Recall the model paths or the feature specifications that were
 
numbered <2> and <4> in the listing of the output for Tenor.
 
These paths, in effect, reflected Model's discovery that
 
the venn diagram cells for male or female non-children
 
and male or female non-humans were not excluded by
 
the definitions that were given in the Log file.
 
In the abstracts given by Canon and Sense, the
 
specifications <2> and <4> have been subsumed,
 
or absorbed unmarked, under the general topics
 
of their respective genders, male or female.
 
This happens because no purely positive
 
features were supplied to distinguish
 
the non-child and non-human cases.
 
 
 
That completes the discussion of
 
this six-dimensional Example.
 
 
 
Nota Bene, for possible future use.  In the larger current of work
 
with respect to which this meander of a conduit was initially both
 
diversionary and tributary, before those high and dry regensquirm
 
years when it turned into an intellectual interglacial oxbow lake,
 
I once had in mind a scape in which expressions in a definitional
 
lattice were ordered according to their simplicity on some scale
 
or another, and in this setting the word "sense" was actually an
 
acronym for "semantically equivalent next-simplest expression".
 
 
 
| If this is starting to sound a little bit familiar,
 
| it may be because the relationship between the two
 
| kinds of pictures of propositions, namely:
 
 
|
 
|
| 1.  Propositions about things in general, here,
+
{| align="center" border="0" cellpadding="8" cellspacing="0" style="background:ghostwhite; width:100%"
|     about the times when certain facts are true,
+
| width="20%" | <math>\downharpoonleft \mathrm{Sentence} \downharpoonright\!</math>
|     having the form of functions f : X -> B,
+
| width="20%" | <math>\stackrel{\mathrm{Parse}}{=}\!</math>
 +
| width="20%" | <math>\downharpoonleft \mathrm{Graph} \downharpoonright\!</math>
 +
| width="20%" | <math>\stackrel{\mathrm{Denotation}}{=}\!</math>
 +
| width="20%" | <math>\mathrm{Proposition}\!</math>
 +
|}
 +
|-
 
|
 
|
| 2.  Propositions about binary codes, here, about
+
{| align="center" border="0" cellpadding="8" cellspacing="0" width="100%"
|     the bit-vector labels on venn diagram cells,
+
| width="20%" | <math>\downharpoonleft s_j \downharpoonright\!</math>
|     having the form of functions f' : B^k -> B,
+
| width="20%" | <math>=\!</math>
 +
| width="20%" | <math>\downharpoonleft C_j \downharpoonright\!</math>
 +
| width="20%" | <math>=\!</math>
 +
| width="20%" | <math>q_j\!</math>
 +
|}
 +
|-
 
|
 
|
| is an epically old story, one that I, myself,
+
{| align="center" border="0" cellpadding="8" cellspacing="0" width="100%"
| have related once or twice upon a time before,
+
| width="20%" | <math>\downharpoonleft \mathrm{Conc}^0 \downharpoonright\!</math>
| to wit, at least, at the following two cites:
+
| width="20%" | <math>=\!</math>
 +
| width="20%" | <math>\downharpoonleft \mathrm{Node}^0 \downharpoonright\!</math>
 +
| width="20%" | <math>=\!</math>
 +
| width="20%" | <math>\underline{1}\!</math>
 +
|-
 +
| width="20%" | <math>\downharpoonleft \mathrm{Conc}^k_j s_j \downharpoonright\!</math>
 +
| width="20%" | <math>=\!</math>
 +
| width="20%" | <math>\downharpoonleft \mathrm{Node}^k_j C_j \downharpoonright\!</math>
 +
| width="20%" | <math>=\!</math>
 +
| width="20%" | <math>\mathrm{Conj}^k_j q_j\!</math>
 +
|}
 +
|-
 
|
 
|
| http://suo.ieee.org/email/msg01251.html
+
{| align="center" border="0" cellpadding="8" cellspacing="0" width="100%"
| http://suo.ieee.org/email/msg01293.html
+
| width="20%" | <math>\downharpoonleft \mathrm{Surc}^0 \downharpoonright\!</math>
|
+
| width="20%" | <math>=\!</math>
| There, and now here, once more, and again, it may be observed
+
| width="20%" | <math>\downharpoonleft \mathrm{Lobe}^0 \downharpoonright\!</math>
| that the relation is one whereby the proposition f : X -> B,
+
| width="20%" | <math>=\!</math>
| the one about things and times and mores in general, factors
+
| width="20%" | <math>\underline{0}\!</math>
| into a coding function c : X -> B^k, followed by a derived
+
|-
| proposition f' : B^k -> B that judges the resulting codes.
+
| width="20%" | <math>\downharpoonleft \mathrm{Surc}^k_j s_j \downharpoonright\!</math>
|
+
| width="20%" | <math>=\!</math>
|                         f
+
| width="20%" | <math>\downharpoonleft \mathrm{Lobe}^k_j C_j \downharpoonright\!</math>
|                  X o------>o B
+
| width="20%" | <math>=\!</math>
|                      \     ^
+
| width="20%" | <math>\mathrm{Surj}^k_j q_j\!</math>
|   c = <x_1, ..., x_k> \   / f'
+
|}
|                       v /
+
|}
|                         o
 
|                       B^k
 
|
 
| You may remember that this was supposed to illustrate
 
| the "factoring" of a proposition f : X -> B = {0, 1}
 
| into the composition f'(c(x)), where c : X -> B^k is
 
| the "coding" of each x in X as a k-bit string in B^k,
 
| and where f' is the mapping of codes into a co-domain
 
| that we interpret as t-f-values, B = {0, 1} = {F, T}.
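As a concrete sketch of this factoring, here is a hypothetical illustration in Python, echoing the male/child features of the earlier Example; none of the function names or feature keys come from the source text.

```python
def c(thing):
    """Coding c : X -> B^2, reading off two logical features."""
    return (thing["male"], thing["child"])

def f_prime(bits):
    """Derived proposition f' : B^2 -> B, judging the coded bits."""
    male, child = bits
    return male and child          # say, the concept "boy"

def f(thing):
    """Applied proposition f : X -> B, factored as f'(c(x))."""
    return f_prime(c(thing))
```

Thus f never inspects a thing directly; every judgment passes through the coding c, which is the whole point of the factoring.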
 
  
In short, there is the standard equivocation ("systematic ambiguity"?) as to
+
<br>
whether we are talking about the "applied" and concretely typed proposition
 
f : X -> B or the "pure" and abstractly typed proposition f' : B^k -> B.
 
Or we can think of the latter object as the approximate code icon of
 
the former object.
 
  
Anyway, these types of formal objects are the sorts of things that
+
Aside from their common topic, the two Tables present slightly different ways of conceptualizing the operations that go to establish their maps.  Table&nbsp;14 records the functional associations that connect each domain with the next, taking the triplings of a sentence <math>s_j,\!</math> a cactus <math>C_j,\!</math> and a proposition <math>q_j\!</math> as basic data, and fixing the rest by recursion on these. Table&nbsp;15 records these associations in the form of equations, treating sentences and graphs as alternative kinds of signs, and generalizing the denotation bracket operator to indicate the proposition that either denotes.  It should be clear at this point that either scheme of translation puts the sentences, the graphs, and the propositions that it associates with each other roughly in the roles of the signs, the interpretants, and the objects, respectively, whose triples define an appropriate sign relation.  Indeed, the "roughly" can be made "exactly" as soon as the domains of a suitable sign relation are specified precisely.
I take to be the denotational objects of propositional expressions.
 
These objects, along with their invarious and insundry mathematical
 
properties, are the orders of things that I am talking about when
 
I refer to the "invariant structures in these objects themselves".
 
  
"Invariant" means "invariant under a suitable set of transformations",
+
A good way to illustrate the action of the conjunction and surjunction operators is to demonstrate how they can be used to construct the boolean functions on any finite number of variablesLet us begin by doing this for the first three cases, <math>k = 0, 1, 2.\!</math>
in this case the translations between various languages that preserve
 
the objects and the structures in questionIn extremest generality,
 
this is what universal constructions in category theory are all about.
 
  
In summation, the functions f : X -> B and f' : B^k -> B have invariant, formal,
+
A boolean function <math>F^{(0)}\!</math> on <math>0\!</math> variables is just an element of the boolean domain <math>\underline\mathbb{B} = \{ \underline{0}, \underline{1} \}.</math> Table&nbsp;16 shows several different ways of referring to these elements, just for the sake of consistency using the same format that will be used in subsequent Tables, no matter how degenerate it tends to appear in the initial case.
mathematical, objective properties that any adequate language might eventually
 
evolve to express, only some languages express them more obscurely than others.
 
  
To be perfectly honest, I continue to be surprised that anybody in this group
+
<br>
has trouble with this.  There are perfectly apt and familiar examples in the
 
contrast between roman numerals and arabic numerals, or the contrast between
 
redundant syntaxes, like those that use the pentalphabet {~, &, v, =>, <=>},
 
and trimmer syntaxes, like those used in existential and conceptual graphs.
 
Every time somebody says "Let's take {~, &, v, =>, <=>} as an operational
 
basis for logic" it's just like that old joke that mathematicians tell on
 
engineers where the ingenue in question says "1 is a prime, 2 is a prime,
 
3 is a prime, 4 is a prime, ..." -- and I know you think that I'm being
 
hyperbolic, but I'm really only up to parabolas here ...
 
  
I have already refined my criticism so that it does not apply to
+
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
the spirit of FOL or KIF or whatever, but only to the letters of
+
|+ style="height:30px" | <math>\text{Table 16.} ~~ \text{Boolean Functions on Zero Variables}\!</math>
specific syntactic proposals.  There is a fact of the matter as
+
|- style="height:40px; background:ghostwhite"
to whether a concrete language provides a clean or a cluttered
+
| width="14%" | <math>F\!</math>
basis for representing the identified set of formal objects.
+
| width="14%" | <math>F\!</math>
And it shows up in pragmatic realities like the efficiency
+
| width="48%" | <math>F()\!</math>
of real time concept formation, concept use, learnability,
+
| width="24%" | <math>F\!</math>
reasoning power, and just plain good use of real time.
+
|-
These are the dire consequences that I learned in my
+
| <math>\underline{0}\!</math>
very first tries at mathematically oriented theorem
+
| <math>F_0^{(0)}\!</math>
automation, and the only factor that has obscured
+
| <math>\underline{0}\!</math>
them in mainstream work since then is the speed
+
| <math>\texttt{(~)}\!</math>
with which folks can now do all of the same
+
|-
old dumb things that they used to do on
+
| <math>\underline{1}\!</math>
their way to kludging out the answers.
+
| <math>F_1^{(0)}\!</math>
 +
| <math>\underline{1}\!</math>
 +
| <math>\texttt{((~))}\!</math>
 +
|}
  
It seems to be darn near impossible to explain to the
+
<br>
centurion all of the neat stuff that he's missing by
 
sticking to his old roman numerals.  He just keeps
 
on reckoning that what he can't count must be of
 
no account at all.  There is way too much stuff
 
that these original syntaxes keep us from even
 
beginning to discuss, like differential logic,
 
just for starters.
 
  
Our next Example illustrates the use of the Cactus Language
+
Column&nbsp;1 lists each boolean element or boolean function under its ordinary constant name or under a succinct nickname, respectively.
for representing "absolute" and "relative" partitions, also
 
known as "complete" and "contingent" classifications of the
 
universe of discourse, all of which amounts to divvying it
 
up into mutually exclusive regions, exhaustive or not, as
 
one frequently needs in situations involving a genus and
 
its sundry species, and frequently pictures in the form
 
of a venn diagram that looks just like a "pie chart".
 
  
Example. Partition, Genus & Species
+
Column&nbsp;2 lists each boolean function in a style of function name <math>F_j^{(k)}\!</math> that is constructed as follows: The superscript <math>(k)\!</math> gives the dimension of the functional domain, that is, the number of its functional variables, and the subscript <math>j\!</math> is a binary string that recapitulates the functional values, using the obvious translation of boolean values into binary values.
  
The idea that one needs for expressing partitions
+
Column&nbsp;3 lists the functional values for each boolean function, or possibly a boolean element appearing in the guise of a function, for each combination of its domain values.
in cactus expressions can be summed up like this:
 
  
| If the propositional expression
+
Column&nbsp;4 shows the usual expressions of these elements in the cactus language, conforming to the practice of omitting the underlines in display formats. Here I also illustrate the convention of using the expression <math>^{\backprime\backprime} ((~)) ^{\prime\prime}</math> as a visible stand-in for the expression of the logical value <math>\operatorname{true},</math> a value that is minimally represented by a blank expression that tends to elude our giving it much notice in the context of more demonstrative texts.
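To make these conventions concrete, here is a minimal, purely illustrative evaluator for this fragment of the cactus language, assuming only the two operations in play: concatenation as conjunction and the parenthesized lobe ( e_1 , ..., e_k ) as "exactly one argument is false". All function names are mine, not the author's.

```python
import re

def tokenize(s):
    """Split a cactus expression into variable names and punctuation."""
    return re.findall(r'[A-Za-z_][A-Za-z_0-9]*|[(),]', s)

def parse(tokens, i=0):
    """Parse a term sequence up to ',' or ')'; return (terms, next_i)."""
    terms = []
    while i < len(tokens) and tokens[i] not in (',', ')'):
        if tokens[i] == '(':
            args, i = [], i + 1
            while tokens[i] != ')':
                arg, i = parse(tokens, i)
                args.append(arg)
                if tokens[i] == ',':
                    i += 1
            terms.append(('lobe', args))
            i += 1
        else:
            terms.append(('var', tokens[i]))
            i += 1
    return terms, i

def value(terms, env):
    """Concatenation is conjunction; a lobe ( e_1 , ..., e_k )
    is true iff exactly one of its arguments is false."""
    result = True
    for kind, body in terms:
        if kind == 'var':
            result = result and env[body]
        else:
            falses = sum(1 for arg in body if not value(arg, env))
            result = result and (falses == 1)
    return result

def cactus(expr, **env):
    return value(parse(tokenize(expr))[0], env)
```

On this reading the blank expression comes out true (an empty conjunction), the empty lobe "()" comes out false, and "(())" comes out true, matching the table entries for the two constants.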
|
 
| "( p , q , r , ... )"
 
|
 
| means that just one of
 
|
 
| p, q, r, ... is false,
 
|
 
| then the propositional expression
 
|
 
| "((p),(q),(r), ... )"
 
|
 
| must mean that just one of
 
|
 
| (p), (q), (r), ... is false,
 
|
 
| in other words, that just one of
 
|
 
| p, q, r, ... is true.
 
  
Thus we have an efficient means to express and to enforce
+
Table 17 presents the boolean functions on one variable, <math>F^{(1)} : \underline\mathbb{B} \to \underline\mathbb{B},</math> of which there are precisely four.
a partition of the space of models, in effect, to maintain
 
the condition that a number of features or propositions are
 
to be held in mutually exclusive and exhaustive disjunction.
 
This supplies a much needed bridge between the binary domain
 
of two values and any other domain with a finite number of
 
feature values.
 
  
Another variation on this theme allows one to maintain the
+
<br>
subsumption of many separate species under an explicit genus.
 
To see this, let us examine the following form of expression:
 
  
( q , ( q_1 ) , ( q_2 ) , ( q_3 ) ).
+
{| align="center" border="1" cellpadding="6" cellspacing="0" style="text-align:center; width:80%"
 
+
|+ style="height:30px" | <math>\text{Table 17.} ~~ \text{Boolean Functions on One Variable}\!</math>
Now consider what it would mean for this to be true.  We see two cases:
+
|- style="height:40px; background:ghostwhite"
 
+
| width="14%" | <math>F\!</math>
1.  If the proposition q is true, then exactly one of the
+
| width="14%" | <math>F\!</math>
    propositions (q_1), (q_2), (q_3) must be false, and so
+
| colspan="2" | <math>F(x)\!</math>
    just one of the propositions q_1, q_2, q_3 must be true.
+
| width="24%" | <math>F\!</math>
 
+
|- style="height:40px; background:ghostwhite"
2.  If the proposition q is false, then every one of the
+
| width="14%" | &nbsp;
    propositions (q_1), (q_2), (q_3) must be true, and so
+
| width="14%" | &nbsp;
    each one of the propositions q_1, q_2, q_3 must be false.
+
| width="24%" | <math>F(\underline{1})</math>
    In short, if q is false then all of the other q's are false as well.
+
| width="24%" | <math>F(\underline{0})</math>
 
+
| width="24%" | &nbsp;
Figures 1 and 2 illustrate this type of situation.
+
|-
 +
| <math>F_0^{(1)}\!</math>
 +
| <math>F_{00}^{(1)}\!</math>
 +
| <math>\underline{0}\!</math>
 +
| <math>\underline{0}\!</math>
 +
| <math>\texttt{(~)}\!</math>
 +
|-
 +
| <math>F_1^{(1)}\!</math>
 +
| <math>F_{01}^{(1)}\!</math>
 +
| <math>\underline{0}\!</math>
 +
| <math>\underline{1}\!</math>
 +
| <math>\texttt{(} x \texttt{)}\!</math>
 +
|-
 +
| <math>F_2^{(1)}\!</math>
 +
| <math>F_{10}^{(1)}~\!</math>
 +
| <math>\underline{1}\!</math>
 +
| <math>\underline{0}\!</math>
 +
| <math>x\!</math>
 +
|-
 +
| <math>F_3^{(1)}\!</math>
 +
| <math>F_{11}^{(1)}\!</math>
 +
| <math>\underline{1}\!</math>
 +
| <math>\underline{1}\!</math>
 +
| <math>\texttt{((~))}\!</math>
 +
|}
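The two cases just analyzed for the genus-and-species form ( q , (q_1) , (q_2) , (q_3) ) can be confirmed by brute force over all sixteen assignments; the helper name below is hypothetical, not from the source.

```python
from itertools import product

def genus_species(q, q1, q2, q3):
    """( q , (q_1) , (q_2) , (q_3) ): true iff exactly one of
    the four arguments q, (q_1), (q_2), (q_3) is false."""
    args = [q, not q1, not q2, not q3]
    return sum(1 for a in args if not a) == 1

# Case 1: q true forces exactly one species; Case 2: q false forces none.
for q, q1, q2, q3 in product([False, True], repeat=4):
    if genus_species(q, q1, q2, q3):
        if q:
            assert [q1, q2, q3].count(True) == 1
        else:
            assert not any([q1, q2, q3])
```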
  
Figure 1 is the venn diagram of a 4-dimensional universe of discourse
+
<br>
X = [q, q_1, q_2, q_3], conventionally named after the gang of four
 
logical features that generate it.  Strictly speaking, X is made up
 
of two layers, the position space X of abstract type %B%^4, and the
 
proposition space X^ = (X -> %B%) of abstract type %B%^4 -> %B%,
 
but it is commonly lawful enough to sign the signature of both
 
spaces with the same X, and thus to give the power of attorney
 
for the propositions to the so-indicted position space thereof.
 
  
Figure 1 also makes use of the convention whereby the regions
+
Here, Column&nbsp;1 codes the contents of Column&nbsp;2 in a more concise form, compressing the lists of boolean values, recorded as bits in the subscript string, into their decimal equivalents.  Naturally, the boolean constants reprise themselves in this new setting as constant functions on one variable.  Thus, one has the synonymous expressions for constant functions that are expressed in the next two chains of equations:
or the subsets of the universe of discourse that correspond
 
to the basic features q, q_1, q_2, q_3 are labelled with
 
the parallel set of upper case letters Q, Q_1, Q_2, Q_3.
 
  
|                       o
+
{| align="center" cellpadding="8" width="90%"
|                      / \
 
|                      /  \
 
|                    /    \
 
|                    /      \
 
|                  o        o
 
|                  /%\      /%\
 
|                /%%%\    /%%%\
 
|                /%%%%%\  /%%%%%\
 
|              /%%%%%%%\ /%%%%%%%\
 
|              o%%%%%%%%%o%%%%%%%%%o
 
|            / \%%%%%%%/ \%%%%%%%/ \
 
|            /  \%%%%%/  \%%%%%/  \
 
|          /    \%%%/    \%%%/    \
 
|          /      \%/      \%/      \
 
|        o        o        o        o
 
|        / \      /%\      / \      / \
 
|      /  \    /%%%\    /  \    /  \
 
|      /    \  /%%%%%\  /    \  /    \
 
|    /      \ /%%%%%%%\ /      \ /      \
 
|    o        o%%%%%%%%%o        o        o
 
|    ·\      / \%%%%%%%/ \      / \      /·
 
|    · \    /  \%%%%%/  \    /  \    / ·
 
|    ·  \  /    \%%%/    \  /    \  /  ·
 
|    ·  \ /      \%/      \ /      \ /  ·
 
|    ·    o        o        o        o    ·
 
|    ·    ·\      / \      / \      /·    ·
 
|    ·    · \    /  \    /  \    / ·    ·
 
|    ·    ·  \  /    \  /    \  /  ·    ·
 
|    · Q  ·  \ /      \ /      \ /  ·Q_3 ·
 
|    ··········o        o        o··········
 
|        ·    \      /%\      /    ·
 
|        ·      \    /%%%\    /      ·
 
|        ·      \  /%%%%%\  /      ·
 
|        · Q_1    \ /%%%%%%%\ /    Q_2 ·
 
|        ··········o%%%%%%%%%o··········
 
|                    \%%%%%%%/
 
|                    \%%%%%/
 
|                      \%%%/
 
|                      \%/
 
|                        o
 
 
|
 
|
| Figure 1.  Genus Q and Species Q_1, Q_2, Q_3
+
<math>\begin{matrix}
 +
F_0^{(1)}
 +
& = &
 +
F_{00}^{(1)}
 +
& = &
 +
\underline{0} ~:~ \underline\mathbb{B} \to \underline\mathbb{B}
 +
\\
 +
\\
 +
F_3^{(1)}
 +
& = &
 +
F_{11}^{(1)}
 +
& = &
 +
\underline{1} ~:~ \underline\mathbb{B} \to \underline\mathbb{B}
 +
\end{matrix}</math>
 +
|}
  
Figure 2 is another form of venn diagram that one often uses,
+
As for the rest, the other two functions are easily recognized as corresponding to the one-place logical connectives, or the monadic operators on <math>\underline\mathbb{B}.</math> Thus, the function <math>F_1^{(1)} = F_{01}^{(1)}</math> is recognizable as the negation operation, and the function <math>F_2^{(1)} = F_{10}^{(1)}</math> is obviously the identity operation.
where one collapses the unindited cells and leaves only the
 
models of the proposition in questionSome people would
 
call the transformation that changes from the first form
 
to the next form an operation of "taking the quotient",
 
but I tend to think of it as the "soap bubble picture"
 
or more exactly the "wire & thread & soap film" model
 
of the universe of discourse, where one pops out of
 
consideration the sections of the soap film that
 
stretch across the anti-model regions of space.
 
  
o-------------------------------------------------o
+
Table&nbsp;18 presents the boolean functions on two variables, <math>F^{(2)} : \underline\mathbb{B}^2 \to \underline\mathbb{B},</math> of which there are precisely sixteen.
|                                                |
 
|  X                                              |
 
|                                                |
 
|                        o                        |
 
|                      / \                      |
 
|                      /  \                      |
 
|                    /    \                    |
 
|                    /      \                    |
 
|                  /        \                  |
 
|                  o    Q_1    o                  |
 
|                / \        / \                |
 
|                /  \      /  \                |
 
|              /    \    /    \              |
 
|              /      \  /      \              |
 
|            /        \ /        \             |
 
|            /          Q          \           |
 
|          /            |            \           |
 
|          /            |            \         |
 
|        /       Q_2    |    Q_3      \        |
 
|        /              |              \        |
 
|      /                |                \      |
 
|      o-----------------o-----------------o      |
 
|                                                |
 
|                                                |
 
|                                                |
 
o-------------------------------------------------o
 
  
Figure 2.  Genus Q and Species Q_1, Q_2, Q_3
+
<br>
  
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
+
{| align="center" border="1" cellpadding="4" cellspacing="0" style="text-align:center; width:80%"
 +
|+ style="height:30px" | <math>\text{Table 18.} ~~ \text{Boolean Functions on Two Variables}\!</math>
 +
|- style="height:40px; background:ghostwhite"
 +
| width="14%" | <math>F\!</math>
 +
| width="14%" | <math>F\!</math>
 +
| colspan="4" | <math>F(x, y)\!</math>
 +
| width="24%" | <math>F\!</math>
 +
|- style="height:40px; background:ghostwhite"
 +
| width="14%" | &nbsp;
 +
| width="14%" | &nbsp;
 +
| width="12%" | <math>F(\underline{1}, \underline{1})</math>
 +
| width="12%" | <math>F(\underline{1}, \underline{0})</math>
 +
| width="12%" | <math>F(\underline{0}, \underline{1})</math>
 +
| width="12%" | <math>F(\underline{0}, \underline{0})</math>
 +
| width="24%" | &nbsp;
 +
|-
 +
| <math>F_{0}^{(2)}\!</math>
 +
| <math>F_{0000}^{(2)}~\!</math>
 +
| <math>\underline{0}\!</math>
 +
| <math>\underline{0}\!</math>
 +
| <math>\underline{0}\!</math>
 +
| <math>\underline{0}\!</math>
 +
| <math>\texttt{(~)}\!</math>
 +
|-
 +
| <math>F_{1}^{(2)}\!</math>
 +
| <math>F_{0001}^{(2)}\!</math>
 +
| <math>\underline{0}\!</math>
 +
| <math>\underline{0}\!</math>
 +
| <math>\underline{0}\!</math>
 +
| <math>\underline{1}\!</math>
 +
| <math>\texttt{(} x \texttt{)(} y \texttt{)}\!</math>
 +
|-
 +
| <math>F_{2}^{(2)}\!</math>
 +
| <math>F_{0010}^{(2)}\!</math>
 +
| <math>\underline{0}\!</math>
 +
| <math>\underline{0}\!</math>
 +
| <math>\underline{1}\!</math>
 +
| <math>\underline{0}\!</math>
 +
| <math>\texttt{(} x \texttt{)} y\!</math>
 +
|-
 +
| <math>F_{3}^{(2)}\!</math>
 +
| <math>F_{0011}^{(2)}\!</math>
 +
| <math>\underline{0}\!</math>
 +
| <math>\underline{0}\!</math>
 +
| <math>\underline{1}\!</math>
 +
| <math>\underline{1}\!</math>
 +
| <math>\texttt{(} x \texttt{)}\!</math>
 +
|-
 +
| <math>F_{4}^{(2)}\!</math>
 +
| <math>F_{0100}^{(2)}\!</math>
 +
| <math>\underline{0}\!</math>
 +
| <math>\underline{1}\!</math>
 +
| <math>\underline{0}\!</math>
 +
| <math>\underline{0}\!</math>
 +
| <math>x \texttt{(} y \texttt{)}\!</math>
 +
|-
 +
| <math>F_{5}^{(2)}\!</math>
 +
| <math>F_{0101}^{(2)}\!</math>
 +
| <math>\underline{0}\!</math>
 +
| <math>\underline{1}\!</math>
 +
| <math>\underline{0}\!</math>
 +
| <math>\underline{1}\!</math>
 +
| <math>\texttt{(} y \texttt{)}\!</math>
 +
|-
 +
| <math>F_{6}^{(2)}\!</math>
 +
| <math>F_{0110}^{(2)}\!</math>
 +
| <math>\underline{0}\!</math>
 +
| <math>\underline{1}\!</math>
 +
| <math>\underline{1}\!</math>
 +
| <math>\underline{0}\!</math>
 +
| <math>\texttt{(} x \texttt{,} y \texttt{)}\!</math>
 +
|-
 +
| <math>F_{7}^{(2)}\!</math>
 +
| <math>F_{0111}^{(2)}\!</math>
 +
| <math>\underline{0}\!</math>
 +
| <math>\underline{1}\!</math>
 +
| <math>\underline{1}\!</math>
 +
| <math>\underline{1}\!</math>
 +
| <math>\texttt{(} x y \texttt{)}\!</math>
 +
|-
 +
| <math>F_{8}^{(2)}\!</math>
 +
| <math>F_{1000}^{(2)}\!</math>
 +
| <math>\underline{1}\!</math>
 +
| <math>\underline{0}\!</math>
 +
| <math>\underline{0}\!</math>
 +
| <math>\underline{0}\!</math>
 +
| <math>x y\!</math>
 +
|-
 +
| <math>F_{9}^{(2)}\!</math>
 +
| <math>F_{1001}^{(2)}\!</math>
 +
| <math>\underline{1}\!</math>
 +
| <math>\underline{0}\!</math>
 +
| <math>\underline{0}\!</math>
 +
| <math>\underline{1}\!</math>
 +
| <math>\texttt{((} x \texttt{,} y \texttt{))}\!</math>
 +
|-
 +
| <math>F_{10}^{(2)}\!</math>
 +
| <math>F_{1010}^{(2)}\!</math>
 +
| <math>\underline{1}\!</math>
 +
| <math>\underline{0}\!</math>
 +
| <math>\underline{1}\!</math>
 +
| <math>\underline{0}\!</math>
 +
| <math>y\!</math>
 +
|-
 +
| <math>F_{11}^{(2)}\!</math>
 +
| <math>F_{1011}^{(2)}\!</math>
 +
| <math>\underline{1}\!</math>
 +
| <math>\underline{0}\!</math>
 +
| <math>\underline{1}\!</math>
 +
| <math>\underline{1}\!</math>
 +
| <math>\texttt{(} x \texttt{(} y \texttt{))}\!</math>
 +
|-
 +
| <math>F_{12}^{(2)}\!</math>
 +
| <math>F_{1100}^{(2)}~\!</math>
 +
| <math>\underline{1}\!</math>
 +
| <math>\underline{1}\!</math>
 +
| <math>\underline{0}\!</math>
 +
| <math>\underline{0}\!</math>
 +
| <math>x\!</math>
 +
|-
 +
| <math>F_{13}^{(2)}\!</math>
 +
| <math>F_{1101}^{(2)}\!</math>
 +
| <math>\underline{1}\!</math>
 +
| <math>\underline{1}\!</math>
 +
| <math>\underline{0}\!</math>
 +
| <math>\underline{1}\!</math>
 +
| <math>\texttt{((} x \texttt{)} y \texttt{)}\!</math>
 +
|-
 +
| <math>F_{14}^{(2)}\!</math>
 +
| <math>F_{1110}^{(2)}\!</math>
 +
| <math>\underline{1}\!</math>
 +
| <math>\underline{1}\!</math>
 +
| <math>\underline{1}\!</math>
 +
| <math>\underline{0}\!</math>
 +
| <math>\texttt{((} x \texttt{)(} y \texttt{))}\!</math>
 +
|-
 +
| <math>F_{15}^{(2)}\!</math>
 +
| <math>F_{1111}^{(2)}\!</math>
 +
| <math>\underline{1}\!</math>
 +
| <math>\underline{1}\!</math>
 +
| <math>\underline{1}\!</math>
 +
| <math>\underline{1}\!</math>
 +
| <math>\texttt{((~))}\!</math>
 +
|}
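The indexing scheme of Table 18 is mechanical enough to code up: reading the four-bit subscript of F_j^(2) as the integer j, the value at (x, y) is the bit in position 2x + y, counting from the low end. A quick sketch (the function name F2 is mine, not the author's):

```python
def F2(j, x, y):
    """Boolean function F_j^(2) on two variables: the 4-bit
    subscript j lists F(1,1) F(1,0) F(0,1) F(0,0) from high
    bit to low, so F(x, y) is bit 2*x + y of j."""
    return (j >> (2 * x + y)) & 1

# A few of the notable entries from Table 18:
for x in (0, 1):
    for y in (0, 1):
        assert F2(8, x, y) == (x & y)       # F_8  : conjunction  x y
        assert F2(14, x, y) == (x | y)      # F_14 : disjunction  ((x)(y))
        assert F2(6, x, y) == (x ^ y)       # F_6  : exclusive or (x, y)
        assert F2(9, x, y) == int(x == y)   # F_9  : equivalence  ((x, y))
        assert F2(11, x, y) == int(x <= y)  # F_11 : implication  (x (y))
```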
  
Example.  Partition, Genus & Species (cont.)
+
<br>
  
Last time we considered in general terms how the forms
+
As before, all of the boolean functions of fewer variables are subsumed in this Table, though under a set of alternative names and possibly different interpretations.  Just to acknowledge a few of the more notable pseudonyms:
of complete partition and contingent partition operate
 
to maintain mutually disjoint and possibly exhaustive
 
categories of positions in a universe of discourse.
 
  
This time we contemplate another concrete Example of
+
: The constant function <math>\underline{0} ~:~ \underline\mathbb{B}^2 \to \underline\mathbb{B}</math> appears under the name <math>F_{0}^{(2)}.</math>
near minimal complexity, designed to demonstrate how
 
the forms of partition and subsumption can interact
 
in structuring a space of feature specifications.
 
  
In this Example, we describe a universe of discourse
+
: The constant function <math>\underline{1} ~:~ \underline\mathbb{B}^2 \to \underline\mathbb{B}</math> appears under the name <math>F_{15}^{(2)}.</math>
in terms of the following vocabulary of five features:
 
  
| L. living_thing
+
: The negation and identity of the first variable are <math>F_{3}^{(2)}</math> and <math>F_{12}^{(2)},</math> respectively.
|
 
| N.  non_living
 
|
 
| A.  animal
 
|
 
| V.  vegetable
 
|
 
| M.  mineral
 
  
Let us construe these features as being subject to four constraints:
+
: The negation and identity of the second variable are <math>F_{5}^{(2)}</math> and <math>F_{10}^{(2)},</math> respectively.
  
| 1.  Everything is either a living_thing or non_living, but not both.
+
: The logical conjunction is given by the function <math>F_{8}^{(2)} (x, y) = x \cdot y.</math>
|
 
| 2.  Everything is either animal, vegetable, or mineral,
 
|    but no two of these together.
 
|
 
| 3.  A living_thing is either animal or vegetable, but not both,
 
|    and everything animal or vegetable is a living_thing.
 
|
 
| 4.  Everything mineral is non_living.
 
  
These notions and constructions are expressed in the Log file shown below:
+
: The logical disjunction is given by the function <math>F_{14}^{(2)} (x, y) = \underline{((} ~x~ \underline{)(} ~y~ \underline{))}.</math>
  
Logical Input File
+
Functions expressing the ''conditionals'', ''implications'', or ''if-then'' statements are given in the following ways:
o-------------------------------------------------o
 
|                                                |
 
|  ( living_thing , non_living )                |
 
|                                                |
 
|  (( animal ),( vegetable ),( mineral ))        |
 
|                                                |
 
|  ( living_thing ,( animal ),( vegetable ))    |
 
|                                                |
 
|  ( mineral ( non_living ))                    |
 
|                                                |
 
o-------------------------------------------------o
 
  
The cactus expression in this file is the expression
+
: <math>[x \Rightarrow y] = F_{11}^{(2)} (x, y) = \underline{(} ~x~ \underline{(} ~y~ \underline{))} = [\operatorname{not}~ x ~\operatorname{without}~ y].</math>
of a "zeroth order theory" (ZOT), one that can be
 
paraphrased in more ordinary language to say:
 
  
Translation
+
: <math>[x \Leftarrow y] = F_{13}^{(2)} (x, y) = \underline{((} ~x~ \underline{)} ~y~ \underline{)} = [\operatorname{not}~ y ~\operatorname{without}~ x].</math>
o-------------------------------------------------o
|                                                 |
|  living_thing  =/=  non_living                  |
|                                                 |
|  par : all -> {animal, vegetable, mineral}      |
|                                                 |
|  par : living_thing -> {animal, vegetable}      |
|                                                 |
|  mineral => non_living                          |
|                                                 |
o-------------------------------------------------o
 
  
The function that corresponds to the ''biconditional'', the ''equivalence'', or the ''if and only if'' statement is exhibited in the following fashion:

: <math>[x \Leftrightarrow y] = [x = y] = F_{9}^{(2)} (x, y) = \underline{((} ~x~,~y~ \underline{))}.</math>

Here, "par : all -> {p, q, r}" is short for an assertion that the universe as a whole is partitioned into subsets that correspond to the features p, q, r.  Also, "par : q -> {r, s}" asserts that q partitions into r and s.
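Read functionally, these "par" statements are exactly-one-true conditions.  The following sketch (Python is used purely for illustration; the names minneg and par are ad hoc) assumes the usual cactus reading on which a form ( e1, ..., ek ) is true just when exactly one of its arguments is false:

```python
# Sketch of the "par" constraints as truth-functions (assumed reading:
# the cactus form ( e1, ..., ek ) is true when exactly one ei is false).

def minneg(*args):
    """Minimal negation: true iff exactly one argument is false."""
    return sum(1 for a in args if not a) == 1

def par(genus, *species):
    """Cactus form ( g ,( s1 ),...,( sk )): g partitions into the si."""
    return minneg(genus, *(not s for s in species))

# "par : all -> {animal, vegetable, mineral}" as (( a ),( v ),( m )):
# exactly one of the three features holds in every interpretation.
for a in (False, True):
    for v in (False, True):
        for m in (False, True):
            assert minneg(not a, not v, not m) == ([a, v, m].count(True) == 1)

# "par : living_thing -> {animal, vegetable}" as
# ( living_thing ,( animal ),( vegetable )):
assert par(True, True, False)       # a living thing that is an animal only
assert not par(True, True, True)    # not both an animal and a vegetable
assert par(False, False, False)     # a non-living thing is neither
```

On this reading, (( a ),( v ),( m )) partitions the universe among the three features, and ( g ,( p ),( q )) makes the genus g hold just when exactly one of the species p, q does.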
  
Finally, there is a boolean function that is logically associated with the ''exclusive disjunction'', ''inequivalence'', or ''not equals'' statement, algebraically associated with the ''binary sum'' operation, and geometrically associated with the ''symmetric difference'' of sets.  This function is given by:

: <math>[x \neq y] = [x + y] = F_{6}^{(2)} (x, y) = \underline{(} ~x~,~y~ \underline{)}.</math>

It is probably enough just to list the outputs of Model, Tenor, and Sense when run on the preceding Log file.  Using the same format and labeling as before, we may note that Model has, from 2^5 = 32 possible interpretations, made 11 evaluations, and found 3 models answering the generic descriptions that were imposed by the logical input file.

Model Outline
o------------------------o
| living_thing           |
|  non_living -          |  1
|  (non_living )         |
|   mineral -            |  2
|   (mineral )           |
|    animal              |
|    vegetable -         |  3
|    (vegetable ) *      |  4 *
|    (animal )           |
|    vegetable *         |  5 *
|    (vegetable ) -      |  6
| (living_thing )        |
|  non_living            |
|   animal -             |  7
|   (animal )            |
|    vegetable -         |  8
|    (vegetable )        |
|     mineral *          |  9 *
|     (mineral ) -       | 10
|  (non_living ) -       | 11
o------------------------o
 
  
Let me now address one last question that may have occurred to some.  What has happened, in this suggested scheme of functional reasoning, to the distinction that is quite pointedly made by careful logicians between (1) the connectives called ''conditionals'' and symbolized by the signs <math>(\rightarrow)</math> and <math>(\leftarrow),</math> and (2) the assertions called ''implications'' and symbolized by the signs <math>(\Rightarrow)</math> and <math>(\Leftarrow),</math> and, in a related question:  What has happened to the distinction that is equally insistently made between (3) the connective called the ''biconditional'' and signified by the sign <math>(\leftrightarrow)</math> and (4) the assertion that is called an ''equivalence'' and signified by the sign <math>(\Leftrightarrow)</math>?  My answer is this:  For my part, I am deliberately avoiding making these distinctions at the level of syntax, preferring to treat them instead as distinctions in the use of boolean functions, turning on whether the function is mentioned directly and used to compute values on arguments, or whether its inverse is being invoked to indicate the fibers of truth or untruth under the propositional function in question.

Tenor Outline
o------------------------o
| living_thing           |
| (non_living )          |
|  (mineral )            |
|    animal              |
|    (vegetable ) *      | <1>
|    (animal )           |
|    vegetable *         | <2>
| (living_thing )        |
| non_living             |
|  (animal )             |
|    (vegetable )        |
|    mineral *           | <3>
o------------------------o
 
  
Sense Outline
o------------------------o
| living_thing           |
|  animal                |
|  vegetable             |
| non_living             |
|  mineral               |
o------------------------o

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

===Stretching Exercises===

The arrays of boolean connections described above, namely, the boolean functions <math>F^{(k)} : \underline\mathbb{B}^k \to \underline\mathbb{B},</math> for <math>k\!</math> in <math>\{ 0, 1, 2 \},\!</math> supply enough material to demonstrate the use of the stretch operation in a variety of concrete cases.
  
For example, suppose that <math>F\!</math> is a connection of the form <math>F : \underline\mathbb{B}^2 \to \underline\mathbb{B},</math> that is, any one of the sixteen possibilities in Table&nbsp;18, while <math>p\!</math> and <math>q\!</math> are propositions of the form <math>p, q : X \to \underline\mathbb{B},</math> that is, propositions about things in the universe <math>X,\!</math> or else the indicators of sets contained in <math>X.\!</math>

Then one has the imagination <math>\underline{f} = (f_1, f_2) = (p, q) : (X \to \underline\mathbb{B})^2,</math> and the stretch of the connection <math>F\!</math> to <math>\underline{f}\!</math> on <math>X\!</math> amounts to a proposition <math>F^\$ (p, q) : X \to \underline\mathbb{B}</math> that may be read as the ''stretch of <math>F\!</math> to <math>p\!</math> and <math>q.\!</math>''  If one is concerned with many different propositions about things in <math>X,\!</math> or if one is abstractly indifferent to the particular choices for <math>p\!</math> and <math>q,\!</math> then one may detach the operator <math>F^\$ : (X \to \underline\mathbb{B})^2 \to (X \to \underline\mathbb{B}),</math> called the ''stretch of <math>F\!</math> over <math>X,\!</math>'' and consider it in isolation from any concrete application.

Example. Molly's World

I think that we are finally ready to tackle a more respectable example.
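The stretch can be sketched in a few lines of Python; the function name stretch and the sample propositions p and q below are invented for illustration:

```python
# A sketch of the stretch operation F^$: it lifts a connection
# F : B^2 -> B to an operator taking propositions p, q : X -> B
# to the proposition F^$(p, q) : X -> B, defined pointwise.

def stretch(F):
    """Return F^$, the stretch of the connection F over any universe X."""
    def F_stretched(p, q):
        return lambda x: F(p(x), q(x))
    return F_stretched

# Example with the universe X = {0, ..., 9} and two sample propositions.
xor = lambda u, v: u ^ v             # the connection F_6, i.e. ( u , v )
p = lambda x: int(x % 2 == 0)        # "x is even"
q = lambda x: int(x < 5)             # "x is less than 5"

f = stretch(xor)(p, q)               # the proposition F^$(p, q) on X
assert [f(x) for x in range(10)] == [0, 1, 0, 1, 0, 0, 1, 0, 1, 0]
```

The detached operator stretch(xor) can then be reused on any other pair of propositions over the same universe.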
The Example known as "Molly's World" is borrowed from the literature on
 
computational learning theory, adapted with a few changes from the example
 
called "Molly’s Problem" in the paper "Learning With Hints" by Dana Angluin.
 
By way of setting up the problem, I quote Angluin's motivational description:
 
  
When the cactus notation is used to represent boolean functions, a single <math>\$</math> sign at the end of the expression is enough to remind the reader that the connections are meant to be stretched to several propositions on a universe <math>X.\!</math>

| Imagine that you have become acquainted with an alien named Molly from the
| planet Ornot, who is currently employed in a day-care center.  She is quite
 
| good at propositional logic, but a bit weak on knowledge of Earth.  So you
 
| decide to formulate the beginnings of a propositional theory to help her
 
| label things in her immediate environment.
 
|
 
| Angluin, Dana, "Learning With Hints", pages 167-181, in:
 
| David Haussler & Leonard Pitt (eds.), 'Proceedings of the 1988 Workshop
 
| on Computational Learning Theory', Morgan Kaufmann, San Mateo, CA, 1989.
 
 
 
The purpose of this quaint pretext is, of course, to make sure that the
 
reader appreciates the constraints of the problem:  that no extra savvy
 
is fair, all facts must be presumed or deduced on the immediate premises.
 
 
 
My use of this example is not directly relevant to the purposes of the
 
discussion from which it is taken, so I simply give my version of it
 
without comment on those issues.
 
 
 
Here is my rendition of the initial knowledge base delimiting Molly’s World:
 
  
Logical Input File: Molly.Log
For example, take the connection <math>F : \underline\mathbb{B}^2 \to \underline\mathbb{B}</math> such that:
o---------------------------------------------------------------------o
|                                                                     |
| ( object ,( toy ),( vehicle ))                                      |
| (( small_size ),( medium_size ),( large_size ))                     |
| (( two_wheels ),( three_wheels ),( four_wheels ))                   |
| (( no_seat ),( one_seat ),( few_seats ),( many_seats ))             |
| ( object ,( scooter ),( bike ),( trike ),( car ),( bus ),( wagon )) |
| ( two_wheels    no_seat            ,( scooter ))                    |
| ( two_wheels    one_seat    pedals ,( bike ))                       |
| ( three_wheels  one_seat    pedals ,( trike ))                      |
| ( four_wheels   few_seats   doors  ,( car ))                        |
| ( four_wheels   many_seats  doors  ,( bus ))                        |
| ( four_wheels   no_seat     handle ,( wagon ))                      |
| ( scooter          ( toy  small_size ))                             |
| ( wagon            ( toy  small_size ))                             |
| ( trike            ( toy  small_size ))                             |
| ( bike  small_size  ( toy ))                                        |
| ( bike  medium_size ( vehicle ))                                    |
| ( bike  large_size  )                                               |
| ( car               ( vehicle  large_size ))                        |
| ( bus               ( vehicle  large_size ))                        |
| ( toy               ( object ))                                     |
| ( vehicle           ( object ))                                     |
|                                                                     |
o---------------------------------------------------------------------o
 
  
: <math>F(x, y) ~=~ F_{6}^{(2)} (x, y) ~=~ \underline{(}~x~,~y~\underline{)}\!</math>

All of the logical forms that are used in the preceding Log file will probably be familiar from earlier discussions.  The purpose of one or two constructions may, however, be a little obscure, so I will insert a few words of additional explanation here:
 
  
The rule "( bike large_size )", for example, merely says that nothing can be both a bike and large_size.

The connection in question is a boolean function on the variables <math>x, y\!</math> that returns a value of <math>\underline{1}</math> just when exactly one of the pair <math>x, y\!</math> is not equal to <math>\underline{1},</math> or what amounts to the same thing, just when exactly one of the pair <math>x, y\!</math> is equal to <math>\underline{1}.</math>  There is clearly an isomorphism between this connection, viewed as an operation on the boolean domain <math>\underline\mathbb{B} = \{ \underline{0}, \underline{1} \},</math> and the dyadic operation on binary values <math>x, y \in \mathbb{B} = \operatorname{GF}(2)\!</math> that is otherwise known as <math>x + y.\!</math>
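The subscripts on these functions can be checked mechanically.  The sketch below (Python, for illustration only) assumes the standard truth-table numbering, under which bit 2x + y of the index i records the value of F_i^(2) on (x, y), so that the four outputs spell out i in binary:

```python
# Assumed numbering: bit (2x + y) of the index i gives F_i's value on (x, y),
# so the outputs on (1,1),(1,0),(0,1),(0,0) read off i in binary.

def F(i):
    """The i-th boolean function of two variables, 0 <= i <= 15."""
    return lambda x, y: (i >> (2 * x + y)) & 1

pairs = [(x, y) for x in (0, 1) for y in (0, 1)]

# F_6 is exclusive disjunction, i.e. addition in GF(2).
assert all(F(6)(x, y) == (x + y) % 2 for x, y in pairs)

# The other connectives named above fall out of the same scheme.
assert all(F(14)(x, y) == (x or y) for x, y in pairs)                   # x or y
assert all(F(11)(x, y) == (0 if x and not y else 1) for x, y in pairs)  # x => y
assert all(F(13)(x, y) == (0 if y and not x else 1) for x, y in pairs)  # x <= y
assert all(F(9)(x, y)  == (1 if x == y else 0) for x, y in pairs)       # x <=> y
```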
 
  
The rule "( three_wheels one_seat pedals ,( trike ))" says that anything with all the features of three_wheels, one_seat, and pedals is excluded from being anything but a trike.  In short, anything with just those three features is equivalent to a trike.

The same connection <math>F : \underline\mathbb{B}^2 \to \underline\mathbb{B}</math> can also be read as a proposition about things in the universe <math>X = \underline\mathbb{B}^2.</math>  If <math>s\!</math> is a sentence that denotes the proposition <math>F,\!</math> then the corresponding assertion says exactly what one states in uttering the sentence <math>^{\backprime\backprime} \, x ~\operatorname{is~not~equal~to}~ y \, ^{\prime\prime}.</math>  In such a case, one has <math>\downharpoonleft s \downharpoonright \, = F,</math> and all of the following expressions are ordinarily taken as equivalent descriptions of the same set:
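Before listing those descriptions formally, the set itself is small enough to enumerate outright; in this Python sketch the numerals 0 and 1 stand in for the underlined boolean values:

```python
# Enumerating the fiber of truth F^{-1}(1) for the connection
# F(x, y) = ( x , y ), i.e. exclusive disjunction, over B^2.

F = lambda x, y: x ^ y

fiber = {(x, y) for x in (0, 1) for y in (0, 1) if F(x, y) == 1}
assert fiber == {(0, 1), (1, 0)}

# The same set described by the sentence "x is not equal to y".
assert fiber == {(x, y) for x in (0, 1) for y in (0, 1) if x != y}
```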
 
  
{| align="center" cellpadding="8" width="90%"
|
<math>\begin{array}{lll}
[| \downharpoonleft s \downharpoonright |]
& = & [| F |]
\\[6pt]
& = & F^{-1} (\underline{1})
\\[6pt]
& = & \{~ (x, y) \in \underline\mathbb{B}^2 ~:~ s ~\}
\\[6pt]
& = & \{~ (x, y) \in \underline\mathbb{B}^2 ~:~ F(x, y) = \underline{1} ~\}
\\[6pt]
& = & \{~ (x, y) \in \underline\mathbb{B}^2 ~:~ F(x, y) ~\}
\\[6pt]
& = & \{~ (x, y) \in \underline\mathbb{B}^2 ~:~ \underline{(}~x~,~y~\underline{)} = \underline{1} ~\}
\\[6pt]
& = & \{~ (x, y) \in \underline\mathbb{B}^2 ~:~ \underline{(}~x~,~y~\underline{)} ~\}
\\[6pt]
& = & \{~ (x, y) \in \underline\mathbb{B}^2 ~:~ x ~\operatorname{exclusive~or}~ y ~\}
\\[6pt]
& = & \{~ (x, y) \in \underline\mathbb{B}^2 ~:~ \operatorname{just~one~true~of}~ x, y ~\}
\\[6pt]
& = & \{~ (x, y) \in \underline\mathbb{B}^2 ~:~ x ~\operatorname{not~equal~to}~ y ~\}
\\[6pt]
& = & \{~ (x, y) \in \underline\mathbb{B}^2 ~:~ x \nLeftrightarrow y ~\}
\\[6pt]
& = & \{~ (x, y) \in \underline\mathbb{B}^2 ~:~ x \neq y ~\}
\\[6pt]
& = & \{~ (x, y) \in \underline\mathbb{B}^2 ~:~ x + y ~\}.
\end{array}</math>
|}

Recall that the form "( p , q )" may be interpreted to assert either the exclusive disjunction or the logical inequivalence of p and q.

The rules have been stated in this particular way simply to imitate the style of rules in the reference example.

This last point does bring up an important issue, the question of "rhetorical" differences in expression and their potential impact on the "pragmatics" of computation.  Unfortunately, I will have to abbreviate my discussion of this topic for now, and only mention in passing the following facts.

Logically equivalent expressions, even though they must lead to logically equivalent normal forms, may have very different characteristics when it comes to the efficiency of processing.

For instance, consider the following four forms:

| 1.  (( p , q ))
|
| 2.  ( p ,( q ))
|
| 3.  (( p ), q )
|
| 4.  (( p , q ))

All of these are equally succinct ways of maintaining that p is logically equivalent to q, yet each can have different effects on the route that Model takes to arrive at an answer.  Apparently, some equalities are more equal than others.

These effects occur partly because the algorithm chooses to make cases of variables on a basis of leftmost shallowest first, but their impact can be complicated by the interactions that each expression has with the context that it occupies.  The main lesson to take away from all of this, at least, for the time being, is that it is probably better not to bother too much about these problems, but just to experiment with different ways of expressing equivalent pieces of information until you get a sense of what works best in various situations.

I think that you will be happy to see only the ultimate Sense of Molly's World, so here it is:

Sense Outline:  Molly.Sen
o------------------------o
| object                 |
|  two_wheels            |
|   no_seat              |
|    scooter             |
|    toy                 |
|      small_size        |
|   one_seat             |
|    pedals              |
|     bike               |
|      small_size        |
|       toy              |
|      medium_size       |
|       vehicle          |
|  three_wheels          |
|   one_seat             |
|    pedals              |
|     trike              |
|      toy               |
|      small_size        |
|  four_wheels           |
|   few_seats            |
|    doors               |
|     car                |
|      vehicle           |
|      large_size        |
|   many_seats           |
|    doors               |
|     bus                |
|      vehicle           |
|      large_size        |
|   no_seat              |
|    handle              |
|     wagon              |
|      toy               |
|      small_size        |
o------------------------o
 
 
 
This outline is not the Sense of the unconstrained Log file,
 
but the result of running Model with a query on the single
 
feature "object".  Using this focus helps the Modeler
 
to make more relevant Sense of Molly’s World.
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
DM = Douglas McDavid
 
 
 
DM: This, again, is an example of how real issues of ontology are
 
    so often trivialized at the expense of technicalities.  I just
 
    had a burger, some fries, and a Coke.  I would say all that was
 
    non-living and non-mineral.  A virus, I believe is non-animal,
 
    non-vegetable, but living (and non-mineral).  Teeth, shells,
 
    and bones are virtually pure mineral, but living.  These are
 
    the kinds of issues that are truly "ontological," in my
 
    opinion. You are not the only one to push them into
 
    the background as of lesser importance.  See the
 
    discussion of "18-wheelers" in John Sowa's book.
 
  
Notice the distinction, that I continue to maintain at this point, between the logical values <math>\{ \operatorname{falsehood}, \operatorname{truth} \}</math> and the algebraic values <math>\{ 0, 1 \}.\!</math>  This makes it legitimate to write a sentence directly into the righthand side of a set-builder expression, for instance, weaving the sentence <math>s\!</math> or the sentence <math>^{\backprime\backprime} \, x ~\operatorname{is~not~equal~to}~ y \, ^{\prime\prime}</math> into the context <math>^{\backprime\backprime} \, \{ (x, y) \in \underline\mathbb{B}^2 : \ldots \} \, ^{\prime\prime},</math> thereby obtaining the corresponding expressions listed above.  It also allows us to assert the proposition <math>F(x, y)\!</math> in a more direct way, without detouring through the equation <math>F(x, y) = \underline{1},</math> since it already has a value in <math>\{ \operatorname{falsehood}, \operatorname{truth} \},</math> and thus can be taken as tantamount to an actual sentence.

it's not my example, and from what you say, it's not your example either.  i copied it out of a book or a paper somewhere, too long ago to remember.  i am assuming that the author or tradition from which it came must have
 
seen some kind of sense in it.  tell you what, write out your own theory
 
of "what is" in so many variables, more or less, publish it in a book or
 
a paper, and then folks will tell you that they dispute each and every
 
thing that you have just said, and it won't really matter all that much
 
how complex it is or how subtle you are.  that has been the way of all
 
ontology for about as long as anybody can remember or even read about.
 
me? i don't have sufficient arrogance to be an ontologist, and you
 
know that's saying a lot, as i can't even imagine a way to convince
 
myself that i believe i know "what is", really and truly for sure
 
like some folks just seem to do.  so i am working to improve our
 
technical ability to do logic, which is mostly a job of shooting
 
down the more serious delusions that we often get ourselves into.
 
can i be of any use to ontologists?  i dunno.  i guess it depends
 
on how badly they are attached to some of the delusions of knowing
 
what their "common" sense tells them everybody ought to already know,
 
but that every attempt to check that out in detail tells them it just
 
ain't so.  a problem for which denial was just begging to be invented,
 
and so it was.
 
  
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
If the appropriate safeguards can be kept in mind, avoiding all danger of confusing propositions with sentences and sentences with assertions, then the marks of these distinctions need not be forced to clutter the account of the more substantive indications, that is, the ones that really matter.  If this level of understanding can be achieved, then it may be possible to relax these restrictions, along with the absolute dichotomy between algebraic and logical values, which tends to inhibit the flexibility of interpretation.

This covers the properties of the connection <math>F(x, y) = \underline{(}~x~,~y~\underline{)},</math> treated as a proposition about things in the universe <math>X = \underline\mathbb{B}^2.</math>  Staying with this same connection, it is time to demonstrate how it can be "stretched" to form an operator on arbitrary propositions.

Example.  Molly's World (cont.)
  
In preparation for a contingently possible future discussion,
To continue the exercise, let <math>p\!</math> and <math>q\!</math> be arbitrary propositions about things in the universe <math>X,\!</math> that is, maps of the form <math>p, q : X \to \underline\mathbb{B},</math> and suppose that <math>p, q\!</math> are indicator functions of the sets <math>P, Q \subseteq X,</math> respectively. In other words, we have the following data:
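These data can be mocked up concretely before being displayed in the abstract; the universe X and the sets P, Q in this Python sketch are invented examples:

```python
# Indicator functions: a proposition p : X -> B realized as the
# characteristic function of the set P of things it is true of.
# X, P, Q here are illustrative stand-ins, not anything from the text.

X = set(range(10))
P = {0, 2, 4, 6, 8}
Q = {0, 1, 2, 3, 4}

def indicator(S):
    """Return the indicator (characteristic) function of S within X."""
    return lambda x: 1 if x in S else 0

p, q = indicator(P), indicator(Q)

# Each proposition recovers exactly the set it indicates.
assert {x for x in X if p(x)} == P
assert {x for x in X if q(x)} == Q
```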
I need to attach a few parting thoughts to the case workup
 
of Molly's World that may not seem terribly relevant to
 
the present setting, but whose pertinence I hope will
 
become clearer in time.
 
  
{| align="center" cellpadding="8" width="90%"
|
<math>\begin{matrix}
p
& = &
\upharpoonleft P \upharpoonright
& : &
X \to \underline\mathbb{B}
\\
\\
q
& = &
\upharpoonleft Q \upharpoonright
& : &
X \to \underline\mathbb{B}
\\
\\
(p, q)
& = &
(\upharpoonleft P \upharpoonright, \upharpoonleft Q \upharpoonright)
& : &
(X \to \underline\mathbb{B})^2
\\
\end{matrix}</math>
|}

The logical paradigm from which this Example was derived is that of "Zeroth Order Horn Clause Theories".  The clauses at issue in these theories are allowed to be of just three kinds:

| 1.  p & q & r & ... => z
|
| 2.  z
|
| 3.  ~[p & q & r & ...]
  
Here, the proposition letters "p", "q", "r", ..., "z" are restricted to being single positive features, not themselves negated or otherwise complex expressions.

Then one has an operator <math>F^\$,</math> the stretch of the connection <math>F\!</math> over <math>X,\!</math> and a proposition <math>F^\$ (p, q),</math> the stretch of <math>F\!</math> to <math>(p, q)\!</math> on <math>X,\!</math> with the following properties:
 
  
{| align="center" cellpadding="8" width="90%"
|
<math>\begin{array}{ccccl}
F^\$
& = &
\underline{(} \ldots, \ldots \underline{)}^\$
& : &
(X \to \underline\mathbb{B})^2 \to (X \to \underline\mathbb{B})
\\
\\
F^\$ (p, q)
& = &
\underline{(}~p~,~q~\underline{)}^\$
& : &
X \to \underline\mathbb{B}
\\
\end{array}</math>
|}

In the Cactus Language or Existential Graph syntax these forms would take on the following appearances:

| 1.  ( p q r ... ( z ))
|
| 2.    z
|
| 3.  ( p q r ... )

The style of deduction in Horn clause logics is essentially proof-theoretic in character, with the main burden of proof falling on implication relations ("=>") and on "projective" forms of inference, that is, information-losing inferences like modus ponens and resolution.  Cf. [Llo], [MaW].

In contrast, the method used here is substantially model-theoretic, the stress being to start from more general forms of expression for laying out facts (for example, distinctions, equations, partitions) and to work toward results that maintain logical equivalence with their origins.
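The agreement between the Horn clause shapes and their cactus renderings can be confirmed by brute force.  This sketch assumes the reading on which juxtaposition inside a parenthesis denotes conjunction, with the parenthesis negating the whole, so that ( p q r ( z )) renders "p & q & r => z":

```python
# Checking that the cactus renderings of the Horn clause shapes agree
# with their usual readings, by exhausting all truth assignments.

from itertools import product

def neg_conj(*args):
    """The cactus form ( e1 e2 ... ek ): the negation of a conjunction."""
    return not all(args)

def Not(x):
    """The cactus form ( x )."""
    return not x

for p, q, r, z in product((False, True), repeat=4):
    # 1.  "p & q & r => z"  is rendered  ( p q r ( z ))
    assert neg_conj(p, q, r, Not(z)) == ((not (p and q and r)) or z)
    # 3.  "~[p & q & r]"    is rendered  ( p q r )
    assert neg_conj(p, q, r) == (not (p and q and r))
```

Shape 2, a bare positive feature z, needs no translation at all.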
What all of this has to do with the output above is this:
 
From the perspective that is adopted in the present work,
 
almost any theory, for example, the one that is founded
 
on the postulates of Molly's World, will have far more
 
models than the implicational and inferential mode of
 
reasoning is designed to discover.  We will be forced
 
to confront them, however, if we try to run Model on
 
a large set of implications.
 
 
 
The typical Horn clause interpreter gets around this
 
difficulty only by a stratagem that takes clauses to
 
mean something other than what they say, that is, by
 
distorting the principles of semantics in practice.
 
Our Model, on the other hand, has no such finesse.
 
 
 
This explains why it was necessary to impose the
 
prerequisite "object" constraint on the Log file
 
for Molly's World.  It supplied no more than what
 
we usually take for granted, in order to obtain
 
a set of models that we would normally think of
 
as being the intended import of the definitions.
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
Example.  Jets & Sharks
 
 
 
The propositional calculus based on the boundary operator, that is,
 
the multigrade logical connective of the form "( , , , ... )" can be
 
interpreted in a way that resembles the logic of activation states and
 
competition constraints in certain neural network models.  One way to do
 
this is by interpreting the blank or unmarked state as the resting state
 
of a neural pool, the bound or marked state as its activated state, and
 
by representing a mutually inhibitory pool of neurons p, q, r by means
 
of the expression "( p , q , r )".  To illustrate this possibility,
 
I transcribe into cactus language expressions a notorious example
 
from the "parallel distributed processing" (PDP) paradigm [McR]
 
and work through two of the associated exercises as portrayed
 
in this format.
 
 
 
Logical Input File:  JAS  = ZOT(Jets And Sharks)
 
o----------------------------------------------------------------o
 
|                                                                |
 
| (( art    ),( al   ),( sam  ),( clyde ),( mike  ),            |
 
|  ( jim    ),( greg ),( john ),( doug  ),( lance ),            |
 
|  ( george ),( pete ),( fred ),( gene  ),( ralph ),            |
 
|  ( phil  ),( ike  ),( nick ),( don  ),( ned  ),( karl ),  |
 
|  ( ken    ),( earl ),( rick ),( ol    ),( neal  ),( dave ))  |
 
|                                                                |
 
|  ( jets , sharks )                                            |
 
|                                                                |
 
|  ( jets ,                                                      |
 
|    ( art    ),( al  ),( sam  ),( clyde ),( mike  ),          |
 
|    ( jim    ),( greg ),( john ),( doug  ),( lance ),          |
 
|    ( george ),( pete ),( fred ),( gene  ),( ralph ))          |
 
|                                                                |
 
|  ( sharks ,                                                    |
 
|    ( phil ),( ike  ),( nick ),( don ),( ned  ),( karl ),      |
 
|    ( ken  ),( earl ),( rick ),( ol  ),( neal ),( dave ))      |
 
|                                                                |
 
|  (( 20's ),( 30's ),( 40's ))                                  |
 
|                                                                |
 
|  ( 20's ,                                                      |
 
|    ( sam    ),( jim  ),( greg ),( john ),( lance ),            |
 
|    ( george ),( pete ),( fred ),( gene ),( ken  ))            |
 
|                                                                |
 
|  ( 30's ,                                                      |
 
|    ( al  ),( mike ),( doug ),( ralph ),                      |
 
|    ( phil ),( ike  ),( nick ),( don  ),                      |
 
|    ( ned  ),( rick ),( ol  ),( neal  ),( dave ))              |
 
|                                                                |
 
|  ( 40's ,                                                      |
 
|    ( art ),( clyde ),( karl ),( earl ))                        |
 
|                                                                |
 
|  (( junior_high ),( high_school ),( college ))                |
 
|                                                                |
 
|  ( junior_high ,                                              |
 
|    ( art  ),( al    ),( clyde  ),( mike  ),( jim ),            |
 
|    ( john ),( lance ),( george ),( ralph ),( ike ))            |
 
|                                                                |
 
|  ( high_school ,                                              |
 
|    ( greg ),( doug ),( pete ),( fred ),( nick ),              |
 
|    ( karl ),( ken  ),( earl ),( rick ),( neal ),( dave ))      |
 
|                                                                |
 
|  ( college ,                                                  |
 
|    ( sam ),( gene ),( phil ),( don ),( ned ),( ol ))          |
 
|                                                                |
 
|  (( single ),( married ),( divorced ))                        |
 
|                                                                |
 
|  ( single ,                                                    |
 
|    ( art  ),( sam  ),( clyde ),( mike ),                      |
 
|    ( doug  ),( pete ),( fred  ),( gene ),                      |
 
|    ( ralph ),( ike  ),( nick  ),( ken  ),( neal ))            |
 
|                                                                |
 
|  ( married ,                                                  |
 
|    ( al  ),( greg ),( john ),( lance ),( phil ),              |
 
|    ( don ),( ned  ),( karl ),( earl  ),( ol  ))              |
 
|                                                                |
 
|  ( divorced ,                                                  |
 
|    ( jim ),( george ),( rick ),( dave ))                      |
 
|                                                                |
 
|  (( bookie ),( burglar ),( pusher ))                          |
 
|                                                                |
 
|  ( bookie ,                                                    |
 
|    ( sam  ),( clyde ),( mike ),( doug ),                      |
 
|    ( pete ),( ike  ),( ned  ),( karl ),( neal ))              |
 
|                                                                |
 
|  ( burglar ,                                                  |
 
|    ( al    ),( jim ),( john ),( lance ),                      |
 
|    ( george ),( don ),( ken  ),( earl  ),( rick ))            |
 
|                                                                |
 
|  ( pusher ,                                                    |
 
|    ( art  ),( greg ),( fred ),( gene ),                      |
 
|    ( ralph ),( phil ),( nick ),( ol  ),( dave ))              |
 
|                                                                |
 
o----------------------------------------------------------------o
 
 
 
We now apply Study to the proposition that
 
defines the Jets and Sharks knowledge base,
 
that is to say, the knowledge that we are
 
given about the Jets and Sharks, not the
 
knowledge that the Jets and Sharks have.
 
 
 
With a query on the name "ken" we obtain the following
 
output, giving all of the features associated with Ken:
 
 
 
Sense Outline: JAS & Ken
 
o---------------------------------------o
| ken                                   |
|  sharks                               |
|   20's                                |
|    high_school                        |
|     single                            |
|      burglar                          |
o---------------------------------------o
 
 
 
With a query on the two features "college" and "sharks"
 
we obtain the following outline of all of the features
 
that satisfy these constraints:
 
  
Sense Outline: JAS & College & Sharks
As a result, the application of the proposition <math>F^\$ (p, q)</math> to each <math>x \in X</math> returns a logical value in <math>\underline\mathbb{B},</math> all in accord with the following equations:
o---------------------------------------o
| college                               |
|  sharks                               |
|   30's                                |
|    married                            |
|     bookie                            |
|      ned                              |
|     burglar                           |
|      don                              |
|     pusher                            |
|      phil                             |
|      ol                               |
o---------------------------------------o
 
  
From this we discover that all college Sharks are 30-something and married.  Furthermore, we have a complete listing of their names broken down by occupation, as I have no doubt that all of them will be in time.

| Reference:
|
| McClelland, James L. & Rumelhart, David E.,
| 'Explorations in Parallel Distributed Processing:
|  A Handbook of Models, Programs, and Exercises',
| MIT Press, Cambridge, MA, 1988.

{| align="center" cellpadding="8" width="90%"
|
<math>\begin{matrix}
F^\$ (p, q)(x) & = & \underline{(}~p~,~q~\underline{)}^\$ (x) & \in & \underline\mathbb{B}
\\
\\
\Updownarrow &   & \Updownarrow
\\
\\
F(p(x), q(x)) & = & \underline{(}~p(x)~,~q(x)~\underline{)} & \in & \underline\mathbb{B}
\\
\end{matrix}</math>
|}
  
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
+
For each choice of propositions <math>p\!</math> and <math>q\!</math> about things in <math>X,\!</math> the stretch of <math>F\!</math> to <math>p\!</math> and <math>q\!</math> on <math>X\!</math> is just another proposition about things in <math>X,\!</math> a simple proposition in its own right, no matter how complex its current expression or its present construction as <math>F^\$ (p, q) = \underline{(}~p~,~q~\underline{)}^\$</math> makes it appear in relation to <math>p\!</math> and <math>q.\!</math>  Like any other proposition about things in <math>X,\!</math> it indicates a subset of <math>X,\!</math> namely, the fiber that is variously described in the following ways:
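The pointwise action of the stretch can be sketched in a few lines of Python.  This is a minimal sketch under my own naming (`stretch`, the sample domain, and the propositions `p`, `q` are illustrative assumptions, not anything from the text); with the connective F taken as exclusive disjunction, the fiber of the stretched proposition comes out as the symmetric difference of the component fibers.

```python
def stretch(F, p, q):
    """Lift a connective F : B x B -> B to propositions p, q : X -> B."""
    return lambda x: F(p(x), q(x))

# F = exclusive disjunction, the connective written (p, q) in cactus syntax.
F = lambda a, b: a != b

X = range(10)
p = lambda x: x % 2 == 0     # "x is even"
q = lambda x: x < 5          # "x is less than 5"

Fpq = stretch(F, p, q)

# The fiber of truth [| F^$(p, q) |] is the symmetric difference P + Q.
fiber = {x for x in X if Fpq(x)}
P = {x for x in X if p(x)}
Q = {x for x in X if q(x)}
assert fiber == P.symmetric_difference(Q)
```

Here P = {0, 2, 4, 6, 8} and Q = {0, 1, 2, 3, 4}, so the fiber is {1, 3, 6, 8}, in agreement with the equation chain ending in [|p|] + [|q|].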
  
One of the issues that my pondering weak and weary over
has caused me to burn not a few barrels of midnight oil
 
over the past elventeen years or so is the relationship
 
among divers and sundry "styles of inference", by which
 
I mean particular choices of inference paradigms, rules,
 
or schemata.  The chief breakpoint seems to lie between
 
information-losing and information-maintaining modes of
 
inference, also called "implicational" and "equational",
 
or "projective" and "preservative" brands, respectively.
 
 
 
Since it appears to be mostly the implicational and projective
 
styles of inference that are more familiar to folks hereabouts,
 
I will start off this subdiscussion by introducing a number of
 
risibly simple but reasonably manageable examples of the other
 
brand of inference, treated as equational reasoning approaches
 
to problems about satisfying "zeroth order constraints" (ZOC's).
 
 
 
Applications of a Propositional Calculator:
 
Constraint Satisfaction Problems.
 
Jon Awbrey, April 24, 1995.
 
 
 
The Four Houses Puzzle
 
 
 
Constructed on the model of the "Five Houses Puzzle" in [VaH, 132-136].
 
 
 
Problem Statement.  Four people with different nationalities live in the
 
first four houses of a street.  They practice four distinct professions,
 
and each of them has a favorite animal, all of them different.  The four
 
houses are painted different colors.  The following facts are known:
 
 
 
|  1.  The Englander lives in the first house on the left.
|  2.  The doctor lives in the second house.
|  3.  The third house is painted red.
|  4.  The zebra is a favorite in the fourth house.
|  5.  The person in the first house has a dog.
|  6.  The Japanese lives in the third house.
|  7.  The red house is on the left of the yellow one.
|  8.  They breed snails in the house to the right of the doctor.
|  9.  The Englander lives next to the green house.
| 10.  The fox is in the house next to the diplomat.
| 11.  The Spaniard likes zebras.
| 12.  The Japanese is a painter.
| 13.  The Italian lives in the green house.
| 14.  The violinist lives in the yellow house.
| 15.  The dog is a pet in the blue house.
| 16.  The doctor keeps a fox.
 
 
 
The problem is to find all of the assignments of
 
features to houses that satisfy these requirements.
 
 
 
Logical Input File:  House^4.Log
 
o---------------------------------------------------------------------o
|                                                                     |
|  eng_1  doc_2  red_3  zeb_4  dog_1  jap_3                           |
|                                                                     |
|  (( red_1  yel_2 ),( red_2  yel_3 ),( red_3  yel_4 ))               |
|  (( doc_1  sna_2 ),( doc_2  sna_3 ),( doc_3  sna_4 ))               |
|                                                                     |
|  (( eng_1  gre_2 ),                                                 |
|   ( eng_2  gre_3 ),( eng_2  gre_1 ),                                |
|   ( eng_3  gre_4 ),( eng_3  gre_2 ),                                |
|                     ( eng_4  gre_3 ))                               |
|                                                                     |
|  (( dip_1  fox_2 ),                                                 |
|   ( dip_2  fox_3 ),( dip_2  fox_1 ),                                |
|   ( dip_3  fox_4 ),( dip_3  fox_2 ),                                |
|                     ( dip_4  fox_3 ))                               |
|                                                                     |
|  (( spa_1 zeb_1 ),( spa_2 zeb_2 ),( spa_3 zeb_3 ),( spa_4 zeb_4 ))  |
|  (( jap_1 pai_1 ),( jap_2 pai_2 ),( jap_3 pai_3 ),( jap_4 pai_4 ))  |
|  (( ita_1 gre_1 ),( ita_2 gre_2 ),( ita_3 gre_3 ),( ita_4 gre_4 ))  |
|                                                                     |
|  (( yel_1 vio_1 ),( yel_2 vio_2 ),( yel_3 vio_3 ),( yel_4 vio_4 ))  |
|  (( blu_1 dog_1 ),( blu_2 dog_2 ),( blu_3 dog_3 ),( blu_4 dog_4 ))  |
|                                                                     |
|  (( doc_1 fox_1 ),( doc_2 fox_2 ),( doc_3 fox_3 ),( doc_4 fox_4 ))  |
|                                                                     |
|  ((                                                                 |
|                                                                     |
|  (( eng_1 ),( eng_2 ),( eng_3 ),( eng_4 ))                          |
|  (( spa_1 ),( spa_2 ),( spa_3 ),( spa_4 ))                          |
|  (( jap_1 ),( jap_2 ),( jap_3 ),( jap_4 ))                          |
|  (( ita_1 ),( ita_2 ),( ita_3 ),( ita_4 ))                          |
|                                                                     |
|  (( eng_1 ),( spa_1 ),( jap_1 ),( ita_1 ))                          |
|  (( eng_2 ),( spa_2 ),( jap_2 ),( ita_2 ))                          |
|  (( eng_3 ),( spa_3 ),( jap_3 ),( ita_3 ))                          |
|  (( eng_4 ),( spa_4 ),( jap_4 ),( ita_4 ))                          |
|                                                                     |
|  (( gre_1 ),( gre_2 ),( gre_3 ),( gre_4 ))                          |
|  (( red_1 ),( red_2 ),( red_3 ),( red_4 ))                          |
|  (( yel_1 ),( yel_2 ),( yel_3 ),( yel_4 ))                          |
|  (( blu_1 ),( blu_2 ),( blu_3 ),( blu_4 ))                          |
|                                                                     |
|  (( gre_1 ),( red_1 ),( yel_1 ),( blu_1 ))                          |
|  (( gre_2 ),( red_2 ),( yel_2 ),( blu_2 ))                          |
|  (( gre_3 ),( red_3 ),( yel_3 ),( blu_3 ))                          |
|  (( gre_4 ),( red_4 ),( yel_4 ),( blu_4 ))                          |
|                                                                     |
|  (( pai_1 ),( pai_2 ),( pai_3 ),( pai_4 ))                          |
|  (( dip_1 ),( dip_2 ),( dip_3 ),( dip_4 ))                          |
|  (( vio_1 ),( vio_2 ),( vio_3 ),( vio_4 ))                          |
|  (( doc_1 ),( doc_2 ),( doc_3 ),( doc_4 ))                          |
|                                                                     |
|  (( pai_1 ),( dip_1 ),( vio_1 ),( doc_1 ))                          |
|  (( pai_2 ),( dip_2 ),( vio_2 ),( doc_2 ))                          |
|  (( pai_3 ),( dip_3 ),( vio_3 ),( doc_3 ))                          |
|  (( pai_4 ),( dip_4 ),( vio_4 ),( doc_4 ))                          |
|                                                                     |
|  (( dog_1 ),( dog_2 ),( dog_3 ),( dog_4 ))                          |
|  (( zeb_1 ),( zeb_2 ),( zeb_3 ),( zeb_4 ))                          |
|  (( fox_1 ),( fox_2 ),( fox_3 ),( fox_4 ))                          |
|  (( sna_1 ),( sna_2 ),( sna_3 ),( sna_4 ))                          |
|                                                                     |
|  (( dog_1 ),( zeb_1 ),( fox_1 ),( sna_1 ))                          |
|  (( dog_2 ),( zeb_2 ),( fox_2 ),( sna_2 ))                          |
|  (( dog_3 ),( zeb_3 ),( fox_3 ),( sna_3 ))                          |
|  (( dog_4 ),( zeb_4 ),( fox_4 ),( sna_4 ))                          |
|                                                                     |
|  ))                                                                 |
|                                                                     |
o---------------------------------------------------------------------o
 
 
 
Sense Outline:  House^4.Sen
 
o-----------------------------o
| eng_1                       |
|  doc_2                      |
|   red_3                     |
|    zeb_4                    |
|     dog_1                   |
|      jap_3                  |
|       yel_4                 |
|        sna_3                |
|         gre_2               |
|          dip_1              |
|           fox_2             |
|            spa_4            |
|             pai_3           |
|              ita_2          |
|               vio_4         |
|                blu_1        |
o-----------------------------o
 
 
 
Table 1.  Solution to the Four Houses Puzzle
 
o------------o------------o------------o------------o------------o
|            | House 1    | House 2    | House 3    | House 4    |
o------------o------------o------------o------------o------------o
| Nation     | England    | Italy      | Japan      | Spain      |
| Color      | blue       | green      | red        | yellow     |
| Profession | diplomat   | doctor     | painter    | violinist  |
| Animal     | dog        | fox        | snails     | zebra      |
o------------o------------o------------o------------o------------o
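Table 1 can be checked mechanically.  The following brute-force Python sketch is my own encoding, not the calculator's: houses are indexed 0..3, each attribute assignment is a permutation, and the sixteen facts are applied as filters, with the "left of" and "next to" facts read as immediate adjacency, matching the clauses in House^4.Log.

```python
from itertools import permutations

nations = ("eng", "spa", "jap", "ita")
colors  = ("gre", "red", "yel", "blu")
jobs    = ("pai", "dip", "vio", "doc")
pets    = ("dog", "zeb", "fox", "sna")

def solve():
    """Return all assignments (attribute value -> house index 0..3)
    satisfying the sixteen facts of the Four Houses Puzzle."""
    sols = []
    for nat in permutations(range(4)):
        n = dict(zip(nations, nat))
        if n["eng"] != 0: continue                          # fact 1
        if n["jap"] != 2: continue                          # fact 6
        for col in permutations(range(4)):
            c = dict(zip(colors, col))
            if c["red"] != 2: continue                      # fact 3
            if c["red"] + 1 != c["yel"]: continue           # fact 7
            if abs(n["eng"] - c["gre"]) != 1: continue      # fact 9
            if n["ita"] != c["gre"]: continue               # fact 13
            for job in permutations(range(4)):
                j = dict(zip(jobs, job))
                if j["doc"] != 1: continue                  # fact 2
                if n["jap"] != j["pai"]: continue           # fact 12
                if j["vio"] != c["yel"]: continue           # fact 14
                for pet in permutations(range(4)):
                    p = dict(zip(pets, pet))
                    if p["zeb"] != 3: continue              # fact 4
                    if p["dog"] != 0: continue              # fact 5
                    if p["sna"] != j["doc"] + 1: continue   # fact 8
                    if abs(p["fox"] - j["dip"]) != 1: continue  # fact 10
                    if n["spa"] != p["zeb"]: continue       # fact 11
                    if p["dog"] != c["blu"]: continue       # fact 15
                    if j["doc"] != p["fox"]: continue       # fact 16
                    sols.append((n, c, j, p))
    return sols

solutions = solve()   # exactly one assignment survives, the one in Table 1
```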
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
First off, I do not trivialize the "real issues of ontology", indeed,
 
it is precisely my estimate of the non-trivial difficulty of this task,
 
of formulating the types of "generic ontology" that we propose to do here,
 
that forces me to choose and to point out the inescapability of the approach
 
that I am currently taking, which is to enter on the necessary preliminary of
 
building up the logical tools that we need to tackle the ontology task proper.
 
And I would say, to the contrary, that it is those who think we can arrive at
 
a working general ontology by sitting on the porch shooting the breeze about
 
"what it is" until the cows come home -- that is, the method for which it
 
has become cliché to indict the Ancient Greeks, though, if truth be told,
 
we'd have to look to the pre-socratics and the pre-stoics to find a good
 
match for the kinds of revelation that are common hereabouts -- I would
 
say that it's those folks who trivialize the "real issues of ontology".
 
 
 
A person, living in our times, who is serious about knowing the being of things,
 
really only has one choice -- to pick what tiny domain of things he or she just
 
has to know about the most, thence to hie away to the adept gurus of the matter
 
in question, forgetting the rest, 'cause "general ontology" is a no-go these days.
 
It is presently in a state like astronomy before telescopes, and that means not
 
entirely able to discern itself from astrology and other psychically projective
 
exercises of wishful and dreadful thinking like that.
 
 
 
So I am busy grinding lenses ...
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
DM = Douglas McDavid
 
 
 
DM: Thanks for both the original and additional response.  I'm not trying to
 
    single you out, as I have been picking on various postings in a similar
 
    manner ever since I started contributing to this discussion.  I agree with
 
    you that the task of this working group is non-trivially difficult.  In fact,
 
    I believe we are still a long way from a clear and useful agreement about what
 
    constitutes "upper" ontology, and what it would mean to standardize it.  However,
 
    I don't agree that the only place to make progress is in tiny domains of things.
 
    I've contributed the thought that a fundamental, upper-level concept is the
 
    concept of system, and that that would be a good place to begin.  And I'll
 
    never be able to refrain from evaluating the content as well as the form
 
    of any examples presented for consideration here.  Probably should
 
    accompany these comments with a ;-)
 
 
 
There will never be a standard universal ontology
 
of the absolute essential imperturbable monolithic
 
variety that some people still dream of in their
 
fantasies of spectating on and speculating about
 
a pre-relativistically non-participatory universe
 
from their singular but isolated gods'eye'views.
 
The bells tolled for that one many years ago,
 
but some of the more blithe of the blissful
 
islanders have just not gotten the news yet.
 
 
 
But there is still a lot to do that would be useful
 
under the banner of a "standard upper ontology",
 
if only we stay loose in our interpretation
 
of what that implies in practical terms.
 
 
 
One likely approach to the problem would be to take
 
a hint from the afore-allusioned history of physics --
 
to inquire for whom, else, the bell tolls -- and to
 
see if there are any bits of wisdom from that prior
 
round of collective experience that can be adapted
 
by dint of analogy to our present predicament.
 
I happen to think that there are.
 
 
 
And there the answer was, not to try and force a return,
 
though lord knows they all gave it their very best shot,
 
to an absolute and imperturbable framework of existence,
 
but to see the reciprocal participant relation that all
 
partakers have to the constitution of that framing, yes,
 
even unto those who would abdictators and abstainees be.
 
 
 
But what does that imply about some shred of a standard?
 
It means that we are better off seeking, not a standard,
 
one-size-fits-all ontology, but more standard resources
 
for trying to interrelate diverse points of view and to
 
transform the data that's gathered from one perspective
 
in ways that it can most appropriately be compared with
 
the data that is gathered from other standpoints on the
 
splendorous observational scenes and theorematic stages.
 
 
 
That is what I am working on.
 
And it hasn't been merely
 
for a couple of years.
 
 
 
As to this bit:
 
 
 
o-------------------------------------------------o
|                                                 |
|  ( living_thing , non_living )                  |
|                                                 |
|  (( animal ),( vegetable ),( mineral ))         |
|                                                 |
|  ( living_thing ,( animal ),( vegetable ))      |
|                                                 |
|  ( mineral ( non_living ))                      |
|                                                 |
o-------------------------------------------------o
 
 
 
My 5-dimensional Example, that I borrowed from some indifferent source
 
of what is commonly recognized as "common sense" -- and I think rather
 
obviously designed more for the classification of pre-modern species
 
of whole critters and pure matters of natural substance than the
 
motley mixture of un/natural and in/organic conglouterites that
 
we find served up on the menu of modernity -- was not intended
 
even so much as a toy ontology, but simply as an expository
 
example, concocted for the sake of illustrating the sorts
 
of logical interaction that occur among four different
 
patterns of logical constraint, all of which types
 
arise all the time no matter what the domain, and
 
which I believe that my novel forms of expression,
 
syntactically speaking, express quite succinctly,
 
especially when you contemplate the complexities
 
of the computation that may flow and must follow
 
from even these meagre propositional expressions.
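To make the interaction of the four constraint patterns above concrete, here is a minimal Python sketch of my own (the encoding and helper name are assumptions, not anything from the text), reading a cactus form ( e1 , ... , ek ) as "exactly one of e1, ..., ek is false", juxtaposition as conjunction, and ( e ) as negation, and enumerating the boolean models of the four expressions:

```python
from itertools import product

def just_one_false(*args):
    """Cactus form (e1, ..., ek): true when exactly one argument is false."""
    return sum(1 for a in args if not a) == 1

names = ("living_thing", "non_living", "animal", "vegetable", "mineral")

models = []
for bits in product((False, True), repeat=5):
    L, N, a, v, m = bits
    constraints = (
        just_one_false(L, N),                 # ( living_thing , non_living )
        just_one_false(not a, not v, not m),  # (( animal ),( vegetable ),( mineral ))
        just_one_false(L, not a, not v),      # ( living_thing ,( animal ),( vegetable ))
        just_one_false(m and not N),          # ( mineral ( non_living ))
    )
    if all(constraints):
        models.append(dict(zip(names, bits)))

# Three models survive: an animal, a vegetable, or a non-living mineral.
```

Under this reading the four clauses say, respectively: living and non-living partition the domain; everything is exactly one of animal, vegetable, mineral; a thing is living exactly when it is an animal or a vegetable; and minerals are non-living.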
 
 
 
Yes, systems -- but -- even here usage differs in significant ways.
 
I have spent ten years now trying to integrate my earlier efforts
 
under an explicit systems banner, but even within the bounds of
 
a systems engineering programme at one site there is a wide
 
semantic dispersion that issues from this word "system".
 
I am committed, and in writing, to taking what we so
 
glibly and prospectively call "intelligent systems"
 
seriously as dynamical systems.  That has many
 
consequences, and I have to pick and choose
 
which of those I may be suited to follow.
 
 
 
But that is too long a story for now ...
 
 
 
";-)"?
 
 
 
Somehow that has always looked like
 
the Cheshire Cat's grin to me ...
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
By way of catering to popular demand, I have decided to
 
render this symposium a bit more à la carte, and thus to
 
serve up as faster food than heretofore a choice selection
 
of the more sumptuous bits that I have in my logical larder,
 
not yet full fare, by any means, but a sample of what might
 
one day approach to being an abundantly moveable feast of
 
ontological contents and general metaphysical delights.
 
I'll leave it to you to name your poison, as it were.
 
 
 
Applications of a Propositional Calculator:
 
Constraint Satisfaction Problems.
 
Jon Awbrey, April 24, 1995.
 
 
 
Fabric Knowledge Base
 
Based on the example in [MaW, pages 8-16].
 
 
 
Logical Input File:  Fab.Log
 
o---------------------------------------------------------------------o
|                                                                     |
| (has_floats , plain_weave )                                         |
| (has_floats ,(twill_weave ),(satin_weave ))                         |
|                                                                     |
| (plain_weave ,                                                      |
|  (plain_weave  one_color ),                                         |
|  (color_groups  ),                                                  |
|  (grouped_warps ),                                                  |
|  (some_thicker  ),                                                  |
|  (crossed_warps ),                                                  |
|  (loop_threads  ),                                                  |
|  (plain_weave  flannel ))                                           |
|                                                                     |
| (plain_weave  one_color  cotton  balanced  smooth  ,(percale ))     |
| (plain_weave  one_color  cotton            sheer   ,(organdy ))     |
| (plain_weave  one_color  silk              sheer   ,(organza ))     |
|                                                                     |
| (plain_weave  color_groups  warp_stripe  fill_stripe ,(plaid   ))   |
| (plaid        equal_stripe                           ,(gingham ))   |
|                                                                     |
| (plain_weave  grouped_warps ,(basket_weave ))                       |
|                                                                     |
| (basket_weave  typed ,                                              |
|  (type_2_to_1 ),                                                    |
|  (type_2_to_2 ),                                                    |
|  (type_4_to_4 ))                                                    |
|                                                                     |
| (basket_weave  typed  type_2_to_1   thicker_fill   ,(oxford      )) |
| (basket_weave  typed (type_2_to_2 ,                                 |
|                       type_4_to_4 ) same_thickness ,(monks_cloth )) |
| (basket_weave (typed )              rough  open    ,(hopsacking  )) |
|                                                                     |
| (typed (basket_weave ))                                             |
|                                                                     |
| (basket_weave ,(oxford ),(monks_cloth ),(hopsacking ))              |
|                                                                     |
| (plain_weave  some_thicker ,(ribbed_weave ))                        |
|                                                                     |
| (ribbed_weave ,(small_rib ),(medium_rib ),(heavy_rib ))             |
| (ribbed_weave ,(flat_rib  ),(round_rib ))                           |
|                                                                     |
| (ribbed_weave  thicker_fill          ,(cross_ribbed ))              |
| (cross_ribbed  small_rib   flat_rib  ,(faille       ))              |
| (cross_ribbed  small_rib   round_rib ,(grosgrain    ))              |
| (cross_ribbed  medium_rib  round_rib ,(bengaline    ))              |
| (cross_ribbed  heavy_rib   round_rib ,(ottoman      ))              |
|                                                                     |
| (cross_ribbed ,(faille ),(grosgrain ),(bengaline ),(ottoman ))      |
|                                                                     |
| (plain_weave  crossed_warps ,(leno_weave  ))                        |
| (leno_weave   open          ,(marquisette ))                        |
| (plain_weave  loop_threads  ,(pile_weave ))                         |
|                                                                     |
| (pile_weave ,(fill_pile ),(warp_pile ))                             |
| (pile_weave ,(cut ),(uncut ))                                       |
|                                                                     |
| (pile_weave  warp_pile  cut                   ,(velvet    ))        |
| (pile_weave  fill_pile  cut    aligned_pile   ,(corduroy  ))        |
| (pile_weave  fill_pile  cut    staggered_pile ,(velveteen ))        |
| (pile_weave  fill_pile  uncut  reversible     ,(terry     ))        |
|                                                                     |
| (pile_weave  fill_pile  cut ( (aligned_pile , staggered_pile ) ))   |
|                                                                     |
| (pile_weave ,(velvet ),(corduroy ),(velveteen ),(terry ))           |
|                                                                     |
| (plain_weave ,                                                      |
|  (percale    ),(organdy     ),(organza    ),(plaid   ),             |
|  (oxford     ),(monks_cloth ),(hopsacking ),                        |
|  (faille     ),(grosgrain   ),(bengaline  ),(ottoman ),             |
|  (leno_weave ),(pile_weave  ),(plain_weave  flannel ))              |
|                                                                     |
| (twill_weave ,                                                      |
|  (warp_faced ),                                                     |
|  (filling_faced ),                                                  |
|  (even_twill ),                                                     |
|  (twill_weave  flannel ))                                           |
|                                                                     |
| (twill_weave  warp_faced  colored_warp  white_fill ,(denim ))       |
| (twill_weave  warp_faced  one_color                ,(drill ))       |
| (twill_weave  even_twill  diagonal_rib             ,(serge ))       |
|                                                                     |
| (twill_weave  warp_faced (                                          |
|  (one_color ,                                                       |
|   ((colored_warp )(white_fill )) )                                  |
| ))                                                                  |
|                                                                     |
| (twill_weave  warp_faced ,(denim ),(drill ))                        |
| (twill_weave  even_twill ,(serge ))                                 |
|                                                                     |
| ((                                                                  |
|    (  ((plain_weave )(twill_weave ))                                |
|       ((cotton      )(wool        )) napped ,(flannel ))            |
| ))                                                                  |
|                                                                     |
| (satin_weave ,(warp_floats ),(fill_floats ))                        |
|                                                                     |
| (satin_weave ,(satin_weave smooth ),(satin_weave napped ))          |
| (satin_weave ,(satin_weave cotton ),(satin_weave silk   ))          |
|                                                                     |
| (satin_weave  warp_floats  smooth         ,(satin    ))             |
| (satin_weave  fill_floats  smooth         ,(sateen   ))             |
| (satin_weave               napped  cotton ,(moleskin ))             |
|                                                                     |
| (satin_weave ,(satin ),(sateen ),(moleskin ))                       |
|                                                                     |
o---------------------------------------------------------------------o
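As a quick illustration of how one clause of Fab.Log classifies a fabric, here is a minimal Python sketch of my own (the helper and dictionary names are assumptions, not anything from the knowledge base), reading a form ( e1 ... ek ,( e )) as asserting that the conjunction of e1 ... ek holds if and only if e does, via the "exactly one argument is false" semantics:

```python
def just_one_false(*args):
    """Cactus form (e1, ..., ek): true when exactly one argument is false."""
    return sum(1 for a in args if not a) == 1

def percale_rule(v):
    """( plain_weave one_color cotton balanced smooth ,(percale )) from Fab.Log."""
    body = all(v[f] for f in
               ("plain_weave", "one_color", "cotton", "balanced", "smooth"))
    # (body, (percale)) holds exactly when body and percale agree.
    return just_one_false(body, not v["percale"])

# A fabric showing all five antecedent features must be classified as percale.
fabric = dict(plain_weave=True, one_color=True, cotton=True,
              balanced=True, smooth=True, percale=True)
assert percale_rule(fabric)
assert not percale_rule({**fabric, "percale": False})
```

Note that the clause is an equivalence, not a one-way rule: a fabric marked percale without the five antecedent features also violates it.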
 
 
 
| Reference [MaW]
|
| Maier, David & Warren, David S.,
| 'Computing with Logic:  Logic Programming with Prolog',
| Benjamin/Cummings, Menlo Park, CA, 1988.

The fiber of truth of <math>F^\$ (p, q)</math> can be variously described in the following ways:

{| align="center" cellpadding="8" width="90%"
|
<math>\begin{array}{lll}
[| F^\$ (p, q) |]
& = & [| \underline{(}~p~,~q~\underline{)}^\$ |]
\\[6pt]
& = & (F^\$ (p, q))^{-1} (\underline{1})
\\[6pt]
& = & \{~ x \in X ~:~ F^\$ (p, q)(x) ~\}
\\[6pt]
& = & \{~ x \in X ~:~ \underline{(}~p~,~q~\underline{)}^\$ (x) ~\}
\\[6pt]
& = & \{~ x \in X ~:~ \underline{(}~p(x)~,~q(x)~\underline{)} ~\}
\\[6pt]
& = & \{~ x \in X ~:~ p(x) + q(x) ~\}
\\[6pt]
& = & \{~ x \in X ~:~ p(x) \neq q(x) ~\}
\\[6pt]
& = & \{~ x \in X ~:~ \upharpoonleft P \upharpoonright (x) ~\neq~ \upharpoonleft Q \upharpoonright (x) ~\}
\\[6pt]
& = & \{~ x \in X ~:~ x \in P ~\nLeftrightarrow~ x \in Q ~\}
\\[6pt]
& = & \{~ x \in X ~:~ x \in P\!-\!Q ~\operatorname{or}~ x \in Q\!-\!P ~\}
\\[6pt]
& = & \{~ x \in X ~:~ x \in P\!-\!Q ~\cup~ Q\!-\!P ~\}
\\[6pt]
& = & \{~ x \in X ~:~ x \in P + Q ~\}
\\[6pt]
& = & P + Q ~\subseteq~ X
\\[6pt]
& = & [|p|] + [|q|] ~\subseteq~ X
\end{array}</math>
|}
  
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

==References==

* Bernstein, Herbert J. (1987), "Idols of Modern Science and The Reconstruction of Knowledge", pp. 37&ndash;68 in Marcus G. Raskin and Herbert J. Bernstein, ''New Ways of Knowing : The Sciences, Society, and Reconstructive Knowledge'', Rowman and Littlefield, Totowa, NJ.

* Denning, P.J., Dennis, J.B., and Qualitz, J.E. (1978), ''Machines, Languages, and Computation'', Prentice-Hall, Englewood Cliffs, NJ.

* Nietzsche, Friedrich (1973), ''Beyond Good and Evil : Prelude to a Philosophy of the Future'', R.J. Hollingdale (trans.), Michael Tanner (intro.), Penguin Books, London, UK, 1990.

* Raskin, Marcus G., and Bernstein, Herbert J. (1987, eds.), ''New Ways of Knowing : The Sciences, Society, and Reconstructive Knowledge'', Rowman and Littlefield, Totowa, NJ.

==Document History==

===The Cactus Patch===

<pre>
| Subject:  Inquiry Driven Systems : An Inquiry Into Inquiry
| Contact:  Jon Awbrey
| Version:  Draft 8.70
| Created:  23 Jun 1996
| Revised:  06 Jan 2002
| Advisor:  M.A. Zohdy
| Setting:  Oakland University, Rochester, Michigan, USA
| Excerpt:  Section 1.3.10 (Recurring Themes)
| Excerpt:  Subsections 1.3.10.8 - 1.3.10.13
</pre>

I think that it might be a good idea to go back to a simpler example
of a constraint satisfaction problem, and to discuss the elements of
its expression as a ZOT in a less cluttered setting before advancing
onward once again to problems on the order of the Four Houses Puzzle.

| Applications of a Propositional Calculator:
| Constraint Satisfaction Problems.
| Jon Awbrey, April 24, 1995.

Graph Coloring

Based on the discussion in [Wil, page 196].

One is given three colors, say, orange, silver, indigo,
and a graph on four nodes that has the following shape:

|          1
|          o
|         / \
|        /   \
|     4 o-----o 2
|        \   /
|         \ /
|          o
|          3

The problem is to color the nodes of the graph
in such a way that no pair of nodes that are
adjacent in the graph, that is, linked by
an edge, get the same color.

The objective situation that is to be achieved can be represented
in a so-called "declarative" fashion, in effect, by employing the
cactus language as a very simple sort of declarative programming
language, and depicting the prospective solution to the problem
as a ZOT.
To do this, begin by declaring the following set of
 
twelve boolean variables or "zeroth order features":
 
 
 
{ 1_orange, 1_silver, 1_indigo,
  2_orange, 2_silver, 2_indigo,
  3_orange, 3_silver, 3_indigo,
  4_orange, 4_silver, 4_indigo }
 
 
 
The interpretation to keep in mind will be such that
 
the feature name of the form "<node i>_<color j>"
 
says that the node i is assigned the color j.
 
 
 
Logical Input File:  Color.Log

o----------------------------------------------------------------------o
|                                                                      |
|  (( 1_orange ),( 1_silver ),( 1_indigo ))                            |
|  (( 2_orange ),( 2_silver ),( 2_indigo ))                            |
|  (( 3_orange ),( 3_silver ),( 3_indigo ))                            |
|  (( 4_orange ),( 4_silver ),( 4_indigo ))                            |
|                                                                      |
|  ( 1_orange  2_orange )( 1_silver  2_silver )( 1_indigo  2_indigo )  |
|  ( 1_orange  4_orange )( 1_silver  4_silver )( 1_indigo  4_indigo )  |
|  ( 2_orange  3_orange )( 2_silver  3_silver )( 2_indigo  3_indigo )  |
|  ( 2_orange  4_orange )( 2_silver  4_silver )( 2_indigo  4_indigo )  |
|  ( 3_orange  4_orange )( 3_silver  4_silver )( 3_indigo  4_indigo )  |
|                                                                      |
o----------------------------------------------------------------------o
 
 
 
The first stanza of verses declares that
 
every node is assigned exactly one color.
 
 
 
The second stanza of verses declares that
 
no adjacent nodes get the very same color.
 
 
 
Each satisfying interpretation of this ZOT
 
that is also a program corresponds to what
 
graffitists call a "coloring" of the graph.
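Reading the stanzas just as described, with (( a ),( b ),( c )) for "exactly one of a, b, c is true" and ( a b ) for "not both a and b", the ZOT can be checked by brute force.  The sketch below is my own encoding in Python, not Theme One itself, and it confirms that exactly six interpretations satisfy Color.Log.

```python
from itertools import product

nodes = (1, 2, 3, 4)
colors = ("orange", "silver", "indigo")
edges = [(1, 2), (1, 4), (2, 3), (2, 4), (3, 4)]
features = [(n, c) for n in nodes for c in colors]  # 12 boolean features

def just_one_false(*args):
    """Cactus form (e1, ..., ek): true when exactly one argument is false."""
    return sum(1 for a in args if not a) == 1

def models():
    """All interpretations of the 12 features satisfying both stanzas."""
    sols = []
    for bits in product((False, True), repeat=len(features)):
        v = dict(zip(features, bits))
        # First stanza: each node gets exactly one color,
        # via (( n_orange ),( n_silver ),( n_indigo )).
        ok = all(just_one_false(*(not v[(n, c)] for c in colors))
                 for n in nodes)
        # Second stanza: adjacent nodes never share a color, via ( u_c w_c ).
        ok = ok and all(just_one_false(v[(u, c)] and v[(w, c)])
                        for u, w in edges for c in colors)
        if ok:
            sols.append(v)
    return sols
```

Running `models()` over all 4096 interpretations yields the six colorings listed in Color.Sen below.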
 
 
 
Theme One's Model interpreter, when we set
 
it to work on this ZOT, will array before
 
our eyes all of the colorings of the graph.
 
 
 
Sense Outline:  Color.Sen

o-----------------------------o
| 1_orange                    |
|  2_silver                   |
|   3_orange                  |
|    4_indigo                 |
|  2_indigo                   |
|   3_orange                  |
|    4_silver                 |
| 1_silver                    |
|  2_orange                   |
|   3_silver                  |
|    4_indigo                 |
|  2_indigo                   |
|   3_silver                  |
|    4_orange                 |
| 1_indigo                    |
|  2_orange                   |
|   3_indigo                  |
|    4_silver                 |
|  2_silver                   |
|   3_indigo                  |
|    4_orange                 |
o-----------------------------o
 
 
 
| Reference [Wil]
|
| Wilf, Herbert S.,
| 'Algorithms and Complexity',
| Prentice-Hall, Englewood Cliffs, NJ, 1986.
|
| Nota Bene.  There is a wrong Figure in some
| printings of the book that does not match
| the description of the Example given
| in the text.
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
Let us continue to examine the properties of the cactus language
 
as a minimal style of declarative programming language.  Even in
 
the likes of this zeroth order microcosm one can observe, and on
 
a good day still more clearly for the lack of other distractions,
 
many of the buzz words that will spring into full bloom, almost
 
as if from nowhere, to become the first order of business in the
 
latter day logical organa, plus combinators, plus lambda calculi.
 
 
 
By way of homage to the classics of the art, I can hardly pass
 
this way without paying my dues to the next sample of examples.
 
 
 
N Queens Problem
 
 
 
I will give the ZOT that describes the N Queens Problem for N = 5,
 
since that is the most that I and my old 286 could do when last I
 
wrote up this Example.
 
 
 
The problem is now to write a "zeroth order program" (ZOP) that
 
describes the following objective:  To place 5 chess queens on
 
a 5 by 5 chessboard so that no queen attacks any other queen.
 
 
 
It is clear that there can be at most one queen on each row
 
of the board and so by dint of regal necessity, exactly one
 
queen in each row of the desired array.  This gambit allows
 
us to reduce the problem to one of picking a permutation of
 
five things in five places, and this affords us sufficient
 
clue to begin down a likely path toward the intended object,
 
by recruiting the following phalanx of 25 logical variables:
 
 
 
Literal Input File:  Q5.Lit
 
o---------------------------------------o
 
|                                      |
 
| q1_r1, q1_r2, q1_r3, q1_r4, q1_r5,  |
 
|  q2_r1, q2_r2, q2_r3, q2_r4, q2_r5,  |
 
|  q3_r1, q3_r2, q3_r3, q3_r4, q3_r5,  |
 
|  q4_r1, q4_r2, q4_r3, q4_r4, q4_r5,  |
 
|  q5_r1, q5_r2, q5_r3, q5_r4, q5_r5.  |
 
|                                      |
 
o---------------------------------------o
 
 
 
Thus we seek to define a function, of abstract type f : %B%^25 -> %B%,
 
whose fibre of truth f^(-1)(%1%) is a set of interpretations, each of
 
whose elements bears the abstract type of a point in the space %B%^25,
 
and whose reading will inform us of our desired set of configurations.
 
 
 
Logical Input File:  Q5.Log
 
o------------------------------------------------------------o
 
|                                                            |
 
|  ((q1_r1 ),(q1_r2 ),(q1_r3 ),(q1_r4 ),(q1_r5 ))            |
 
|  ((q2_r1 ),(q2_r2 ),(q2_r3 ),(q2_r4 ),(q2_r5 ))            |
 
|  ((q3_r1 ),(q3_r2 ),(q3_r3 ),(q3_r4 ),(q3_r5 ))            |
 
|  ((q4_r1 ),(q4_r2 ),(q4_r3 ),(q4_r4 ),(q4_r5 ))            |
 
|  ((q5_r1 ),(q5_r2 ),(q5_r3 ),(q5_r4 ),(q5_r5 ))            |
 
|                                                            |
 
|  ((q1_r1 ),(q2_r1 ),(q3_r1 ),(q4_r1 ),(q5_r1 ))            |
 
|  ((q1_r2 ),(q2_r2 ),(q3_r2 ),(q4_r2 ),(q5_r2 ))            |
 
|  ((q1_r3 ),(q2_r3 ),(q3_r3 ),(q4_r3 ),(q5_r3 ))            |
 
|  ((q1_r4 ),(q2_r4 ),(q3_r4 ),(q4_r4 ),(q5_r4 ))            |
 
|  ((q1_r5 ),(q2_r5 ),(q3_r5 ),(q4_r5 ),(q5_r5 ))            |
 
|                                                            |
 
|  ((                                                        |
 
|                                                            |
 
|  (q1_r1 q2_r2 )(q1_r1 q3_r3 )(q1_r1 q4_r4 )(q1_r1 q5_r5 )  |
 
|                (q2_r2 q3_r3 )(q2_r2 q4_r4 )(q2_r2 q5_r5 )  |
 
|                              (q3_r3 q4_r4 )(q3_r3 q5_r5 )  |
 
|                                            (q4_r4 q5_r5 )  |
 
|                                                            |
 
|  (q1_r2 q2_r3 )(q1_r2 q3_r4 )(q1_r2 q4_r5 )                |
 
|                (q2_r3 q3_r4 )(q2_r3 q4_r5 )                |
 
|                              (q3_r4 q4_r5 )                |
 
|                                                            |
 
|  (q1_r3 q2_r4 )(q1_r3 q3_r5 )                              |
 
|                (q2_r4 q3_r5 )                              |
 
|                                                            |
 
|  (q1_r4 q2_r5 )                                            |
 
|                                                            |
 
|  (q2_r1 q3_r2 )(q2_r1 q4_r3 )(q2_r1 q5_r4 )                |
 
|                (q3_r2 q4_r3 )(q3_r2 q5_r4 )                |
 
|                              (q4_r3 q5_r4 )                |
 
|                                                            |
 
|  (q3_r1 q4_r2 )(q3_r1 q5_r3 )                              |
 
|                (q4_r2 q5_r3 )                              |
 
|                                                            |
 
|  (q4_r1 q5_r2 )                                            |
 
|                                                            |
 
|  (q1_r5 q2_r4 )(q1_r5 q3_r3 )(q1_r5 q4_r2 )(q1_r5 q5_r1 )  |
 
|                (q2_r4 q3_r3 )(q2_r4 q4_r2 )(q2_r4 q5_r1 )  |
 
|                              (q3_r3 q4_r2 )(q3_r3 q5_r1 )  |
 
|                                            (q4_r2 q5_r1 )  |
 
|                                                            |
 
|  (q2_r5 q3_r4 )(q2_r5 q4_r3 )(q2_r5 q5_r2 )                |
 
|                (q3_r4 q4_r3 )(q3_r4 q5_r2 )                |
 
|                              (q4_r3 q5_r2 )                |
 
|                                                            |
 
|  (q3_r5 q4_r4 )(q3_r5 q5_r3 )                              |
 
|                (q4_r4 q5_r3 )                              |
 
|                                                            |
 
|  (q4_r5 q5_r4 )                                            |
 
|                                                            |
 
|  (q1_r4 q2_r3 )(q1_r4 q3_r2 )(q1_r4 q4_r1 )                |
 
|                (q2_r3 q3_r2 )(q2_r3 q4_r1 )                |
 
|                              (q3_r2 q4_r1 )                |
 
|                                                            |
 
|  (q1_r3 q2_r2 )(q1_r3 q3_r1 )                              |
 
|                (q2_r2 q3_r1 )                              |
 
|                                                            |
 
|  (q1_r2 q2_r1 )                                            |
 
|                                                            |
 
|  ))                                                        |
 
|                                                            |
 
o------------------------------------------------------------o
 
 
 
The vanguard of this logical regiment consists of two
 
stock'a'block platoons, the pattern of whose features
 
is the usual sort of array for conveying permutations.
 
Between the stations of their respective offices they
 
serve to warrant that all of the interpretations that
 
are left standing on the field of valor at the end of
 
the day will be ones that tell of permutations 5 by 5.
 
The rest of the ruck and the runt of the mill in this
 
regimental logos are there to cover the diagonal bias
 
against attacking queens that is our protocol to suit.
 
 
 
And here is the issue of the day:
 
 
 
Sense Output:  Q5.Sen
 
o-------------------o
 
| q1_r1            |
 
|  q2_r3            |
 
|  q3_r5          |
 
|    q4_r2          |
 
|    q5_r4        | <1>
 
|  q2_r4            |
 
|  q3_r2          |
 
|    q4_r5          |
 
|    q5_r3        | <2>
 
| q1_r2            |
 
|  q2_r4            |
 
|  q3_r1          |
 
|    q4_r3          |
 
|    q5_r5        | <3>
 
|  q2_r5            |
 
|  q3_r3          |
 
|    q4_r1          |
 
|    q5_r4        | <4>
 
| q1_r3            |
 
| q2_r1            |
 
|  q3_r4          |
 
|    q4_r2          |
 
|     q5_r5        | <5>
 
|  q2_r5            |
 
|  q3_r2          |
 
|    q4_r4          |
 
|    q5_r1        | <6>
 
| q1_r4            |
 
|  q2_r1            |
 
|  q3_r3          |
 
|    q4_r5          |
 
|    q5_r2        | <7>
 
|  q2_r2            |
 
|  q3_r5          |
 
|    q4_r3          |
 
|    q5_r1        | <8>
 
| q1_r5            |
 
|  q2_r2            |
 
|  q3_r4          |
 
|    q4_r1          |
 
|    q5_r3        | <9>
 
|  q2_r3            |
 
|  q3_r1          |
 
|    q4_r4          |
 
|    q5_r2        | <A>
 
o-------------------o
 
 
 
The number at least checks with all of the best authorities,
 
so I can breathe a sigh of relief on that account, at least.
 
I am sure that there just has to be a more clever way to do
 
this, that is to say, within the bounds of ZOT reason alone,
 
but the above is the best that I could figure out with the
 
time that I had at the time.
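As a cross-check in procedural form, the same reduction can be sketched in a few lines of Python: one queen per row means a permutation of columns, after which only the diagonal constraints remain to be checked.  This is an illustration of the reduction described above, not a transcription of Theme One's method.

```python
from itertools import permutations

def five_queens():
    """Place one queen per row (a permutation of columns) and keep the
    arrangements where no two queens share a diagonal."""
    n = 5
    solutions = []
    for cols in permutations(range(n)):
        # Queens sit at (row, cols[row]); two queens clash diagonally
        # when their column difference equals their row difference.
        if all(abs(cols[i] - cols[j]) != i - j
               for i in range(n) for j in range(i)):
            solutions.append(cols)
    return solutions
```

For n = 5 this search returns the same count of 10 placements that the Sense Output above records.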
 
 
 
References:  [BaC, 166], [VaH, 122], [Wir, 143].
 
 
 
[BaC]  Ball, W.W. Rouse, & Coxeter, H.S.M.,
 
      'Mathematical Recreations and Essays',
 
      13th ed., Dover, New York, NY, 1987.
 
 
 
[VaH]  Van Hentenryck, Pascal,
 
       'Constraint Satisfaction in Logic Programming',
 
      MIT Press, Cambridge, MA, 1989.
 
 
 
[Wir]  Wirth, Niklaus,
 
      'Algorithms + Data Structures = Programs',
 
      Prentice-Hall, Englewood Cliffs, NJ, 1976.
 
 
 
http://mathworld.wolfram.com/QueensProblem.html
 
http://www.research.att.com/cgi-bin/access.cgi/as/njas/sequences/eisA.cgi?Anum=000170
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
I turn now to another golden oldie of a constraint satisfaction problem
 
that I would like to give here a slightly new spin, but not so much for
 
the sake of these trifling novelties as from a sense of old time's ache
 
and a duty to -- well, what's the opposite of novelty?
 
 
 
Phobic Apollo
 
 
 
| Suppose Peter, Paul, and Jane are musicians. One of them plays
 
| saxophone, another plays guitar, and the third plays drums.  As
 
| it happens, one of them is afraid of things associated with the
 
| number 13, another of them is afraid of cats, and the third is
 
| afraid of heights. You also know that Peter and the guitarist
 
| skydive, that Paul and the saxophone player enjoy cats, and
 
| that the drummer lives in apartment 13 on the 13th floor.
 
|
 
| Soon we will want to use these facts to reason
 
| about whether or not certain identity relations
 
| hold or are excluded. Assume X(Peter, Guitarist)
 
| means "the person who is Peter is not the person who
 
| plays the guitar". In this notation, the facts become:
 
|
 
| 1. X(Peter, Guitarist)
 
| 2.  X(Peter, Fears Heights)
 
| 3. X(Guitarist, Fears Heights)
 
| 4.  X(Paul, Fears Cats)
 
| 5.  X(Paul, Saxophonist)
 
| 6.  X(Saxophonist, Fears Cats)
 
| 7. X(Drummer, Fears 13)
 
| 8.  X(Drummer, Fears Heights)
 
|
 
| Exercise attributed to Kenneth D. Forbus, pages 449-450 in:
 
| Patrick Henry Winston, 'Artificial Intelligence', 2nd ed.,
 
| Addison-Wesley, Reading, MA, 1984.
 
 
 
Here is one way to represent these facts in the form of a ZOT
 
and use it as a logical program to draw a succinct conclusion:
 
 
 
Logical Input File:  ConSat.Log
 
o-----------------------------------------------------------------------o
 
|                                                                      |
 
|  (( pete_plays_guitar ),( pete_plays_sax ),( pete_plays_drums ))      |
 
|  (( paul_plays_guitar ),( paul_plays_sax ),( paul_plays_drums ))      |
 
|  (( jane_plays_guitar ),( jane_plays_sax ),( jane_plays_drums ))      |
 
|                                                                      |
 
|  (( pete_plays_guitar ),( paul_plays_guitar ),( jane_plays_guitar ))  |
 
|  (( pete_plays_sax    ),( paul_plays_sax    ),( jane_plays_sax    ))  |
 
|  (( pete_plays_drums  ),( paul_plays_drums  ),( jane_plays_drums  ))  |
 
|                                                                      |
 
|  (( pete_fears_13 ),( pete_fears_cats ),( pete_fears_height ))        |
 
|  (( paul_fears_13 ),( paul_fears_cats ),( paul_fears_height ))        |
 
|  (( jane_fears_13 ),( jane_fears_cats ),( jane_fears_height ))        |
 
|                                                                      |
 
|  (( pete_fears_13    ),( paul_fears_13    ),( jane_fears_13    ))  |
 
|  (( pete_fears_cats  ),( paul_fears_cats  ),( jane_fears_cats  ))  |
 
|  (( pete_fears_height ),( paul_fears_height ),( jane_fears_height ))  |
 
|                                                                      |
 
|  ((                                                                  |
 
|                                                                      |
 
|  ( pete_plays_guitar )                                                |
 
|  ( pete_fears_height )                                                |
 
|                                                                      |
 
|  ( pete_plays_guitar  pete_fears_height )                            |
 
|  ( paul_plays_guitar  paul_fears_height )                            |
 
|  ( jane_plays_guitar  jane_fears_height )                            |
 
|                                                                      |
 
|  ( paul_fears_cats )                                                  |
 
|  ( paul_plays_sax  )                                                  |
 
|                                                                      |
 
|  ( pete_plays_sax  pete_fears_cats )                                  |
 
|  ( paul_plays_sax  paul_fears_cats )                                  |
 
|  ( jane_plays_sax  jane_fears_cats )                                  |
 
|                                                                      |
 
|  ( pete_plays_drums  pete_fears_13 )                                  |
 
|  ( paul_plays_drums  paul_fears_13 )                                  |
 
|  ( jane_plays_drums  jane_fears_13 )                                  |
 
|                                                                      |
 
|  ( pete_plays_drums  pete_fears_height )                              |
 
|  ( paul_plays_drums  paul_fears_height )                              |
 
|  ( jane_plays_drums  jane_fears_height )                              |
 
|                                                                      |
 
|  ))                                                                  |
 
|                                                                      |
 
o-----------------------------------------------------------------------o
 
 
 
Sense Outline:  ConSat.Sen
 
o-----------------------------o
 
| pete_plays_drums            |
 
|  paul_plays_guitar          |
 
|  jane_plays_sax            |
 
|    pete_fears_cats          |
 
|    paul_fears_13          |
 
|      jane_fears_height      |
 
o-----------------------------o
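As a cross-check on that Sense Outline, the same eight facts can be brute-forced in a few lines of Python, pairing each permutation of instruments with each permutation of fears.  This is a sketch of the constraint problem only, not the Theme One algorithm:

```python
from itertools import permutations

people = ("Peter", "Paul", "Jane")

def solve():
    """Try every instrument assignment against every fear assignment,
    keeping those consistent with the eight X(...) facts of the puzzle."""
    solutions = []
    for instr in permutations(("guitar", "sax", "drums")):
        plays = dict(zip(people, instr))
        for fear in permutations(("13", "cats", "heights")):
            fears = dict(zip(people, fear))
            guitarist = next(p for p in people if plays[p] == "guitar")
            saxist = next(p for p in people if plays[p] == "sax")
            drummer = next(p for p in people if plays[p] == "drums")
            facts = (
                plays["Peter"] != "guitar",     # 1. X(Peter, Guitarist)
                fears["Peter"] != "heights",    # 2. X(Peter, Fears Heights)
                fears[guitarist] != "heights",  # 3. X(Guitarist, Fears Heights)
                fears["Paul"] != "cats",        # 4. X(Paul, Fears Cats)
                plays["Paul"] != "sax",         # 5. X(Paul, Saxophonist)
                fears[saxist] != "cats",        # 6. X(Saxophonist, Fears Cats)
                fears[drummer] != "13",         # 7. X(Drummer, Fears 13)
                fears[drummer] != "heights",    # 8. X(Drummer, Fears Heights)
            )
            if all(facts):
                solutions.append((plays, fears))
    return solutions
```

Only one assignment survives: Peter plays drums and fears cats, Paul plays guitar and fears 13, Jane plays sax and fears heights, agreeing with the Sense Outline above.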
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
Phobic Apollo (cont.)
 
 
 
It might be instructive to review various aspects
 
of how the Theme One Study function actually went
 
about arriving at its answer to that last problem.
 
Just to prove that my program and I really did do
 
our homework on that Phobic Apollo ConSat problem,
 
and didn't just provoke some Oracle or other data
 
base server to give it away, here is the middling
 
output of the Model function as run on ConSat.Log:
 
 
 
Model Outline:  ConSat.Mod
 
o-------------------------------------------------o
 
| pete_plays_guitar -                            |
 
| (pete_plays_guitar )                            |
 
|  pete_plays_sax                                |
 
|  pete_plays_drums -                            |
 
|  (pete_plays_drums )                          |
 
|    paul_plays_sax -                            |
 
|    (paul_plays_sax )                            |
 
|    jane_plays_sax -                            |
 
|    (jane_plays_sax )                          |
 
|      paul_plays_guitar                          |
 
|      paul_plays_drums -                        |
 
|      (paul_plays_drums )                      |
 
|        jane_plays_guitar -                      |
 
|        (jane_plays_guitar )                    |
 
|        jane_plays_drums                        |
 
|          pete_fears_13                          |
 
|          pete_fears_cats -                    |
 
|          (pete_fears_cats )                    |
 
|            pete_fears_height -                  |
 
|            (pete_fears_height )                |
 
|            paul_fears_13 -                    |
 
|            (paul_fears_13 )                    |
 
|              jane_fears_13 -                    |
 
|              (jane_fears_13 )                  |
 
|              paul_fears_cats -                |
 
|              (paul_fears_cats )                |
 
|                paul_fears_height -              |
 
|                (paul_fears_height ) -          |
 
|          (pete_fears_13 )                      |
 
|          pete_fears_cats -                    |
 
|          (pete_fears_cats )                    |
 
|            pete_fears_height -                  |
 
|            (pete_fears_height ) -              |
 
|        (jane_plays_drums ) -                  |
 
|      (paul_plays_guitar )                      |
 
|      paul_plays_drums                          |
 
|        jane_plays_drums -                      |
 
|        (jane_plays_drums )                      |
 
|        jane_plays_guitar                      |
 
|          pete_fears_13                          |
 
|          pete_fears_cats -                    |
 
|          (pete_fears_cats )                    |
 
|            pete_fears_height -                  |
 
|            (pete_fears_height )                |
 
|            paul_fears_13 -                    |
 
|            (paul_fears_13 )                    |
 
|              jane_fears_13 -                    |
 
|              (jane_fears_13 )                  |
 
|              paul_fears_cats -                |
 
|              (paul_fears_cats )                |
 
|                paul_fears_height -              |
 
|                (paul_fears_height ) -          |
 
|          (pete_fears_13 )                      |
 
|          pete_fears_cats -                    |
 
|          (pete_fears_cats )                    |
 
|            pete_fears_height -                  |
 
|            (pete_fears_height ) -              |
 
|        (jane_plays_guitar ) -                  |
 
|      (paul_plays_drums ) -                    |
 
|  (pete_plays_sax )                              |
 
|  pete_plays_drums                              |
 
|    paul_plays_drums -                          |
 
|    (paul_plays_drums )                          |
 
|    jane_plays_drums -                          |
 
|    (jane_plays_drums )                        |
 
|      paul_plays_guitar                          |
 
|      paul_plays_sax -                          |
 
|      (paul_plays_sax )                        |
 
|        jane_plays_guitar -                      |
 
|        (jane_plays_guitar )                    |
 
|        jane_plays_sax                          |
 
|          pete_fears_13 -                        |
 
|          (pete_fears_13 )                      |
 
|          pete_fears_cats                      |
 
|            pete_fears_height -                  |
 
|            (pete_fears_height )                |
 
|            paul_fears_cats -                  |
 
|            (paul_fears_cats )                  |
 
|              jane_fears_cats -                  |
 
|              (jane_fears_cats )                |
 
|              paul_fears_13                    |
 
|                paul_fears_height -              |
 
|                (paul_fears_height )            |
 
|                jane_fears_13 -                |
 
|                (jane_fears_13 )                |
 
|                  jane_fears_height *            |
 
|                  (jane_fears_height ) -        |
 
|              (paul_fears_13 )                  |
 
|                paul_fears_height -              |
 
|                (paul_fears_height ) -          |
 
|          (pete_fears_cats )                    |
 
|            pete_fears_height -                  |
 
|            (pete_fears_height ) -              |
 
|        (jane_plays_sax ) -                    |
 
|      (paul_plays_guitar )                      |
 
|      paul_plays_sax -                          |
 
|      (paul_plays_sax ) -                      |
 
|  (pete_plays_drums ) -                        |
 
o-------------------------------------------------o
 
 
 
This is just the traverse of the "arboreal boolean expansion" (ABE) tree
 
that the Model function germinates from the propositional expression that we

planted in the file ConSat.Log, which works to describe the facts of the
 
situation in question.  Since there are 18 logical feature names in this
 
propositional expression, we are literally talking about a function that
 
enjoys the abstract type f : %B%^18 -> %B%.  If I had wanted to evaluate
 
this function by expressly writing out its truth table, then it would've
 
required 2^18 = 262144 rows.  Now I didn't bother to count, but I'm sure
 
that the above output does not have anywhere near that many lines, so it
 
must be that my program, and maybe even its author, has done a couple of
 
things along the way that are moderately intelligent.  At least, we hope.
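The pruning at work can be sketched in miniature.  In the toy expansion below (an illustration only, with invented names, not Theme One's data structures), a branch is abandoned the moment the partial assignment falsifies the proposition, so most of the 2^n leaves of the full truth table are never generated:

```python
def expand(prop, variables, assignment=None):
    """Case-split on one variable at a time, pruning any branch that the
    partial assignment already falsifies -- a toy version of the kind of
    arboreal boolean expansion described above."""
    if assignment is None:
        assignment = {}
    status = prop(assignment)
    if status is False:
        return []                       # prune: the whole subtree is dead
    if not variables:
        return [dict(assignment)] if status else []
    var, rest = variables[0], variables[1:]
    models = []
    for value in (True, False):
        assignment[var] = value
        models.extend(expand(prop, rest, assignment))
        del assignment[var]
    return models

def prop(asg):
    """Example proposition: (a exclusive-or b) and c.
    Returns False as soon as a partial assignment refutes it,
    True once a total assignment satisfies it, None while undecided."""
    if "a" in asg and "b" in asg and asg["a"] == asg["b"]:
        return False
    if asg.get("c") is False:
        return False
    return True if len(asg) == 3 else None

models = expand(prop, ["a", "b", "c"])
```

Here the branch with a = b is cut off before c is ever considered, which is the same economy that keeps the ConSat traverse far short of 262144 rows.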
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
AK = Antti Karttunen
 
JA = Jon Awbrey
 
 
 
AK: Am I (and other SeqFanaticians) missing something from this thread?
 
 
 
AK: Your previous message on seqfan (headers below) is a bit of the same topic,
 
    but does it belong to the same thread?  Where I could obtain the other
 
    messages belonging to those two threads?  (I'm just now starting to
 
    study "mathematical logic", and its relations to combinatorics are
 
    very interesting.)  Is this "cactus" language documented anywhere?
 
 
 
here i was just following a courtesy of copying people
 
when i reference their works, in this case neil's site:
 
 
 
http://www.research.att.com/cgi-bin/access.cgi/as/njas/sequences/eisA.cgi?Anum=000170
 
 
 
but then i thought that the seqfantasians might be amused, too.
 
 
 
the bit on higher order propositions, in particular,
 
those of type h : (B^2 -> B) -> B, i sent because
 
of the significance that 2^2^2^2 = 65536 took on
 
for us around that time.  & the ho, ho, ho joke.
 
 
 
"zeroth order logic" (zol) is just another name for
 
the propositional calculus or the sentential logic
 
that comes before "first order logic" (fol), aka
 
first intens/tional logic, quantificational logic,
 
or predicate calculus, depending on who you talk to.
 
 
 
the line of work that i have been doing derives from
 
the ideas of c.s. peirce (1839-1914), who developed
 
a couple of systems of "logical graphs", actually,
 
two variant interpretations of the same abstract
 
structures, called "entitative" and "existential"
 
graphs.  he organized his system into "alpha",
 
"beta", and "gamma" layers, roughly equivalent
 
to our propositional, quantificational, and
 
modal levels of logic today.
 
 
 
on the more contemporary scene, peirce's entitative interpretation
 
of logical graphs was revived and extended by george spencer brown
 
in his book 'laws of form', while the existential interpretation
 
has flourished in the development of "conceptual graphs" by
 
john f sowa and a community of growing multitudes.
 
 
 
a passel of links:
 
 
 
http://members.door.net/arisbe/
 
http://www.enolagaia.com/GSB.html
 
http://www.cs.uah.edu/~delugach/CG/
 
http://www.jfsowa.com/
 
http://www.jfsowa.com/cg/
 
http://www.jfsowa.com/peirce/ms514w.htm
 
http://users.bestweb.net/~sowa/
 
http://users.bestweb.net/~sowa/peirce/ms514.htm
 
 
 
i have mostly focused on "alpha" (prop calc or zol) --
 
though the "func conception of quant logic" thread was
 
a beginning try at saying how the same line of thought
 
might be extended to 1st, 2nd, & higher order logics --
 
and i devised a particular graph & string syntax that
 
is based on a species of cacti, officially described as
 
the "reflective extension of logical graphs" (ref log),
 
but more lately just referred to as "cactus language".
 
 
 
it turns out that one can do many interesting things
 
with prop calc if one has an efficient enough syntax
 
and a powerful enough interpreter for it, even using
 
it as a very minimal sort of declarative programming
 
language, hence, the current thread was directed to
 
applying "zeroth order theories" (zot's) as brands
 
of "zeroth order programs" (zop's) to a set of old
 
constraint satisfaction and knowledge rep examples.
 
 
 
more recent expositions of the cactus language have been directed
 
toward what some people call "ontology engineering" -- it sounds
 
so much cooler than "taxonomy" -- and so these are found in the
 
ieee standard upper ontology working group discussion archives.
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
Let's now pause and reflect on the mix of abstract and concrete material
 
that we have cobbled together in the spectacle of this "World Of Zero" (WOZ),
 
since I believe that we may have seen enough, if we look at it right, to
 
illustrate a few of the more salient phenomena that would normally begin
 
to weigh in as a major force only on a much larger scale.  Now, it's not
 
exactly like this impoverished sample, all by itself, could determine us
 
to draw just the right generalizations, or force us to see the shape and
 
flow of its immanent law -- it is much too sparse a scattering of points
 
to tease out the lines of its up and coming generations quite so clearly --
 
but it can be seen to exemplify many of the more significant themes that
 
we know evolve in more substantial environments, that is, On Beyond Zero,
 
since we have already seen them, "tho' obscur'd", in these higher realms.
 
 
 
One of the themes that I want to keep an eye on as this discussion
 
develops is the subject that might be called "computation as semiosis".
 
 
 
In this light, any calculus worth its salt must be capable of helping
 
us do two things, calculation, of course, but also analysis.  This is
 
probably one of the reasons why the ordinary sort of differential and
 
integral calculus over quantitative domains is frequently referred to
 
as "real analysis", or even just "analysis".  It seems quite clear to
 
me that any adequate logical calculus, in many ways expected to serve
 
as a qualitative analogue of analytic geometry in the way that it can
 
be used to describe configurations in logically circumscribed domains,
 
ought to qualify in both dimensions, namely, analysis and computation.
 
 
 
With all of these various features of the situation in mind, then, we come
 
to the point of viewing analysis and computation as just so many different
 
kinds of "sign transformations in respect of pragmata" (STIROP's).  Taking
 
this insight to heart, let us next work to assemble a comprehension of our
 
concrete examples, set in the medium of the abstract calculi that allow us
 
to express their qualitative patterns, that may hope to be an increment or
 
two less inchoate than we have seen so far, and that may even permit us to
 
catch the action of these fading fleeting sign transformations on the wing.
 
 
 
Here is how I picture our latest round of examples
 
as filling out the framework of this investigation:
 
 
 
o-----------------------------o-----------------------------o
 
|    Objective Framework    |  Interpretive Framework    |
 
o-----------------------------o-----------------------------o
 
|                                                          |
 
|                              s_1 = Logue(o)      |      |
 
|                              /                    |      |
 
|                            /                      |      |
 
|                            @                      |      |
 
|                          ·  \                      |      |
 
|                        ·    \                    |      |
 
|                      ·        i_1 = Model(o)      v      |
 
|                    ·          s_2 = Model(o)      |      |
 
|                  ·          /                    |      |
 
|                ·            /                      |      |
 
|    Object = o · · · · · · @                      |      |
 
|                ·            \                      |      |
 
|                  ·          \                    |      |
 
|                    ·          i_2 = Tenor(o)      v      |
 
|                      ·        s_3 = Tenor(o)      |      |
 
|                        ·    /                    |      |
 
|                          ·  /                      |      |
 
|                            @                      |      |
 
|                            \                      |      |
 
|                              \                    |      |
 
|                              i_3 = Sense(o)      v      |
 
|                                                          |
 
o-----------------------------------------------------------o
 
Figure.  Computation As Semiotic Transformation
 
  
The Figure shows three distinct sign triples of the form <o, s, i>, where

o = ostensible objective = the observed, indicated, or intended situation.
 
 
 
| A.  <o, Logue(o), Model(o)>
 
|
 
| B.  <o, Model(o), Tenor(o)>
 
|
 
| C.  <o, Tenor(o), Sense(o)>
 
  
Let us bring these several signs together in one place,

to compare and contrast their common and their diverse
 
characters, and to think about why we make such a fuss
 
about passing from one to the other in the first place.
 
  
1.  Logue(o)  =  ConSat.Log

o-----------------------------------------------------------------------o

|                                                                       |

|  (( pete_plays_guitar ),( pete_plays_sax ),( pete_plays_drums ))      |

|  (( paul_plays_guitar ),( paul_plays_sax ),( paul_plays_drums ))      |

|  (( jane_plays_guitar ),( jane_plays_sax ),( jane_plays_drums ))      |

|                                                                       |

|  (( pete_plays_guitar ),( paul_plays_guitar ),( jane_plays_guitar ))  |
 
|  (( pete_plays_sax    ),( paul_plays_sax    ),( jane_plays_sax    ))  |
 
|  (( pete_plays_drums  ),( paul_plays_drums  ),( jane_plays_drums  ))  |
 
|                                                                      |
 
|  (( pete_fears_13 ),( pete_fears_cats ),( pete_fears_height ))        |
 
|  (( paul_fears_13 ),( paul_fears_cats ),( paul_fears_height ))        |
 
|  (( jane_fears_13 ),( jane_fears_cats ),( jane_fears_height ))        |
 
|                                                                      |
 
|  (( pete_fears_13    ),( paul_fears_13    ),( jane_fears_13    ))  |
 
|  (( pete_fears_cats  ),( paul_fears_cats  ),( jane_fears_cats  ))  |
 
|  (( pete_fears_height ),( paul_fears_height ),( jane_fears_height ))  |
 
|                                                                      |
 
|  ((                                                                  |
 
|                                                                      |
 
|  ( pete_plays_guitar )                                                |
 
|  ( pete_fears_height )                                                |
 
|                                                                      |
 
|  ( pete_plays_guitar  pete_fears_height )                            |
 
|  ( paul_plays_guitar  paul_fears_height )                            |
 
|  ( jane_plays_guitar  jane_fears_height )                            |
 
|                                                                      |
 
|  ( paul_fears_cats )                                                  |
 
|  ( paul_plays_sax  )                                                  |
 
|                                                                      |
 
|  ( pete_plays_sax  pete_fears_cats )                                  |
 
|  ( paul_plays_sax  paul_fears_cats )                                  |
 
|  ( jane_plays_sax  jane_fears_cats )                                  |
 
|                                                                      |
 
|  ( pete_plays_drums  pete_fears_13 )                                  |
 
|  ( paul_plays_drums  paul_fears_13 )                                  |
 
|  ( jane_plays_drums  jane_fears_13 )                                  |
 
|                                                                      |
 
|  ( pete_plays_drums  pete_fears_height )                              |
 
|  ( paul_plays_drums  paul_fears_height )                              |
 
|  ( jane_plays_drums  jane_fears_height )                              |
 
|                                                                      |
 
|  ))                                                                  |
 
|                                                                      |
 
o-----------------------------------------------------------------------o
 
  
2.  Model(o)  =  Consat.Mod  ><>  http://suo.ieee.org/ontology/msg03718.html

3.  Tenor(o)  =  Consat.Ten  (Just The Gist Of It)

o-------------------------------------------------o
| (pete_plays_guitar )                            | <01> -
|  (pete_plays_sax )                              | <02> -
|  pete_plays_drums                               | <03> +
|    (paul_plays_drums )                          | <04> -
|    (jane_plays_drums )                          | <05> -
|      paul_plays_guitar                          | <06> +
|      (paul_plays_sax )                          | <07> -
|        (jane_plays_guitar )                     | <08> -
|        jane_plays_sax                           | <09> +
|          (pete_fears_13 )                       | <10> -
|          pete_fears_cats                        | <11> +
|            (pete_fears_height )                 | <12> -
|            (paul_fears_cats )                   | <13> -
|              (jane_fears_cats )                 | <14> -
|              paul_fears_13                      | <15> +
|                (paul_fears_height )             | <16> -
|                (jane_fears_13 )                 | <17> -
|                  jane_fears_height *            | <18> +
o-------------------------------------------------o

====CG List &bull; New Archive====

* http://web.archive.org/web/20031104183832/http://mars.virtual-earth.de/pipermail/cg/2000q3/thread.html#3592
# http://web.archive.org/web/20030723202219/http://mars.virtual-earth.de/pipermail/cg/2000q3/003592.html
# http://web.archive.org/web/20030723202341/http://mars.virtual-earth.de/pipermail/cg/2000q3/003593.html
# &bull;
# http://web.archive.org/web/20030723202516/http://mars.virtual-earth.de/pipermail/cg/2000q3/003595.html
# &bull;
# &bull;
# &bull;
# &bull;
 
  
4.  Sense(o)  =  Consat.Sen

o-------------------------------------------------o
| pete_plays_drums                                | <03>
|  paul_plays_guitar                              | <06>
|  jane_plays_sax                                 | <09>
|    pete_fears_cats                              | <11>
|    paul_fears_13                                | <15>
|      jane_fears_height                          | <18>
o-------------------------------------------------o

As one proceeds through the subsessions of the Theme One Study session,
the computation transforms its larger "signs", in this case text files,
from one to the next, in the sequence:  Logue, Model, Tenor, and Sense.

====CG List &bull; Old Archive====

# &bull;
# http://web.archive.org/web/20020321115639/http://www.virtual-earth.de/CG/cg-list/msg03352.html
# &bull;
# http://web.archive.org/web/20020321120331/http://www.virtual-earth.de/CG/cg-list/msg03354.html
# http://web.archive.org/web/20020321223131/http://www.virtual-earth.de/CG/cg-list/msg03376.html
# &bull;
# http://web.archive.org/web/20020129134132/http://www.virtual-earth.de/CG/cg-list/msg03381.html
  
Let us see if we can pin down, on sign-theoretic grounds,
why this very sort of exercise is so routinely necessary.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

We were in the middle of pursuing several questions about
sign relational transformations in general, in particular,
the following Example of a sign transformation that arose
in the process of setting up and solving a classical sort
of constraint satisfaction problem.

===Sep 2000 &bull; Zeroth Order Logic===

* http://web.archive.org/web/20070705085032/http://suo.ieee.org/email/thrd241.html#01246
* http://web.archive.org/web/20130306202443/http://suo.ieee.org/email/thrd242.html#01406
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/email/msg01246.html
# http://web.archive.org/web/20080905054059/http://suo.ieee.org/email/msg01251.html
# http://web.archive.org/web/20070223033521/http://suo.ieee.org/email/msg01380.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/email/msg01406.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/email/msg01546.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/email/msg01561.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/email/msg01670.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/email/msg01966.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/email/msg01985.html
# http://web.archive.org/web/20070401102902/http://suo.ieee.org/email/msg01988.html
  
o-----------------------------o-----------------------------o
|    Objective Framework      |   Interpretive Framework    |
o-----------------------------o-----------------------------o
|                                                           |
|                               s_1 = Logue(o)      |       |
|                              /                    |       |
|                             /                     |       |
|                            @                      |       |
|                           · \                     |       |
|                         ·    \                    |       |
|                       ·       i_1 = Model(o)      v       |
|                     ·         s_2 = Model(o)      |       |
|                   ·          /                    |       |
|                 ·           /                     |       |
|    Object = o · · · · · · @                       |       |
|                 ·           \                     |       |
|                   ·          \                    |       |
|                     ·         i_2 = Tenor(o)      v       |
|                       ·       s_3 = Tenor(o)      |       |
|                         ·    /                    |       |
|                           · /                     |       |
|                            @                      |       |
|                             \                     |       |
|                              \                    |       |
|                               i_3 = Sense(o)      v       |
|                                                           |
o-----------------------------------------------------------o
Figure.  Computation As Semiotic Transformation

===Oct 2000 &bull; All Liar, No Paradox===

* http://web.archive.org/web/20130306202504/http://suo.ieee.org/email/thrd236.html#01739
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/email/msg01739.html

1.  Logue(o)  =  Consat.Log

o-----------------------------------------------------------------------o
|                                                                       |
|  (( pete_plays_guitar ),( pete_plays_sax ),( pete_plays_drums ))      |
|  (( paul_plays_guitar ),( paul_plays_sax ),( paul_plays_drums ))      |
|  (( jane_plays_guitar ),( jane_plays_sax ),( jane_plays_drums ))      |
|                                                                       |
|  (( pete_plays_guitar ),( paul_plays_guitar ),( jane_plays_guitar ))  |
|  (( pete_plays_sax    ),( paul_plays_sax    ),( jane_plays_sax    ))  |
|  (( pete_plays_drums  ),( paul_plays_drums  ),( jane_plays_drums  ))  |
|                                                                       |
|  (( pete_fears_13 ),( pete_fears_cats ),( pete_fears_height ))        |
|  (( paul_fears_13 ),( paul_fears_cats ),( paul_fears_height ))        |
|  (( jane_fears_13 ),( jane_fears_cats ),( jane_fears_height ))        |
|                                                                       |
|  (( pete_fears_13     ),( paul_fears_13     ),( jane_fears_13     ))  |
|  (( pete_fears_cats   ),( paul_fears_cats   ),( jane_fears_cats   ))  |
|  (( pete_fears_height ),( paul_fears_height ),( jane_fears_height ))  |
|                                                                       |
|  ((                                                                   |
|                                                                       |
|  ( pete_plays_guitar )                                                |
|  ( pete_fears_height )                                                |
|                                                                       |
|  ( pete_plays_guitar  pete_fears_height )                             |
|  ( paul_plays_guitar  paul_fears_height )                             |
|  ( jane_plays_guitar  jane_fears_height )                             |
|                                                                       |
|  ( paul_fears_cats )                                                  |
|  ( paul_plays_sax  )                                                  |
|                                                                       |
|  ( pete_plays_sax   pete_fears_cats )                                 |
|  ( paul_plays_sax   paul_fears_cats )                                 |
|  ( jane_plays_sax   jane_fears_cats )                                 |
|                                                                       |
|  ( pete_plays_drums  pete_fears_13 )                                  |
|  ( paul_plays_drums  paul_fears_13 )                                  |
|  ( jane_plays_drums  jane_fears_13 )                                  |
|                                                                       |
|  ( pete_plays_drums  pete_fears_height )                              |
|  ( paul_plays_drums  paul_fears_height )                              |
|  ( jane_plays_drums  jane_fears_height )                              |
|                                                                       |
|  ))                                                                   |
|                                                                       |
o-----------------------------------------------------------------------o
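For readers new to the notation, the forms in the Logue listing above can be glossed in a few lines of code.  This is a reader's sketch, not part of the Consat files:  it assumes the usual cactus language reading, where the lobe (x_1, ..., x_k) is true just in case exactly one of its arguments is false, so (x) is negation, (x y) is NOT (x AND y), and ((a),(b),(c)) says that exactly one of a, b, c is true.

```python
# Reader's gloss on the cactus forms in Consat.Log (an assumption of this
# sketch, not text from the listing itself):  the lobe (x_1, ..., x_k) is
# true iff exactly one argument is false.

def lobe(*args):
    """(x_1, ..., x_k): true iff exactly one argument is false."""
    return sum(1 for a in args if not a) == 1

def neg(x):
    """(x): the one-argument lobe, which amounts to negation."""
    return lobe(x)

def just_one(a, b, c):
    """((a),(b),(c)): exactly one of a, b, c is true."""
    return lobe(neg(a), neg(b), neg(c))

def not_both(x, y):
    """( x y ): rules out the joint truth of x and y."""
    return lobe(x and y)

# "pete plays exactly one of guitar, sax, drums"
assert just_one(False, False, True)
assert not just_one(True, True, False)
assert not just_one(False, False, False)

# "( pete_plays_sax  pete_fears_cats )": not both at once
assert not_both(True, False)
assert not not_both(True, True)
```

Under this reading, the first four groups of triples say that playing and fearing are one-to-one pairings, and the parenthesized clues rule out particular facts and combinations.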
 
  
2.  Model(o)  =  Consat.Mod  ><>  http://suo.ieee.org/ontology/msg03718.html

3.  Tenor(o)  =  Consat.Ten  (Just The Gist Of It)

o-------------------------------------------------o
| (pete_plays_guitar )                            | <01> -
|  (pete_plays_sax )                              | <02> -
|  pete_plays_drums                               | <03> +
|    (paul_plays_drums )                          | <04> -
|    (jane_plays_drums )                          | <05> -
|      paul_plays_guitar                          | <06> +
|      (paul_plays_sax )                          | <07> -
|        (jane_plays_guitar )                     | <08> -
|        jane_plays_sax                           | <09> +
|          (pete_fears_13 )                       | <10> -
|          pete_fears_cats                        | <11> +
|            (pete_fears_height )                 | <12> -
|            (paul_fears_cats )                   | <13> -
|              (jane_fears_cats )                 | <14> -
|              paul_fears_13                      | <15> +
|                (paul_fears_height )             | <16> -
|                (jane_fears_13 )                 | <17> -
|                  jane_fears_height *            | <18> +
o-------------------------------------------------o

===Nov 2000 &bull; Sowa's Top Level Categories===

====What Language To Use====
 
  
* http://web.archive.org/web/20070218222218/http://suo.ieee.org/email/threads.html#01956
# http://web.archive.org/web/20070320012929/http://suo.ieee.org/email/msg01956.html

4.  Sense(o)  =  Consat.Sen

o-------------------------------------------------o
| pete_plays_drums                                | <03>
|  paul_plays_guitar                              | <06>
|  jane_plays_sax                                 | <09>
|    pete_fears_cats                              | <11>
|    paul_fears_13                                | <15>
|      jane_fears_height                          | <18>
o-------------------------------------------------o
 
  
We can worry later about the proper use of quotation marks
in discussing such a case, where the file name "Yada.Yak"
denotes a piece of text that expresses a proposition that
describes an objective situation or an intentional object,
but whatever the case it is clear that we are knee & neck
deep in a sign relational situation of modest complexity.

====Zeroth Order Logic====
 
  
* http://web.archive.org/web/20070218222218/http://suo.ieee.org/email/threads.html#01966
# http://web.archive.org/web/20070320012940/http://suo.ieee.org/email/msg01966.html

I think that the right sort of analogy might help us
to sort it out, or at least to tell what's important
from the things that are less so.  The paradigm that
comes to mind for me is the type of context in maths
where we talk about the "locus" or the "solution set"
of an equation, and here we think of the equation as
denoting its solution set or describing a locus, say,
a point or a curve or a surface or so on up the scale.
 
  
In this figure of speech, we might say for instance:

| o is
| what "x^3 - 3x^2 + 3x - 1 = 0" denotes is
| what "(x-1)^3 = 0" denotes is
| what "1" denotes
| is 1.

====TLC In KIF====

* http://web.archive.org/web/20130304163442/http://suo.ieee.org/ontology/thrd110.html#00048
# http://web.archive.org/web/20081204195421/http://suo.ieee.org/ontology/msg00048.html
# http://web.archive.org/web/20070320014557/http://suo.ieee.org/ontology/msg00051.html
 
  
Making explicit the assumptive interpretations
that the context probably enfolds in this case,
we assume this description of the solution set:

{x in the Reals : x^3 - 3x^2 + 3x - 1 = 0}  =  {1}.

===Dec 2000 &bull; Sequential Interactions Generating Hypotheses===

* http://web.archive.org/web/20130306202621/http://suo.ieee.org/email/thrd217.html#02607
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/email/msg02607.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/email/msg02608.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/email/msg03183.html
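The equivalences in the chain of signs above can be checked mechanically.  A minimal sketch, not part of the original text:  since x^3 - 3x^2 + 3x - 1 factors as (x - 1)^3, and two cubics that agree at more than three points are identical, a small sample of points suffices.

```python
# Check that the signs in the chain denote one and the same locus:
# x^3 - 3x^2 + 3x - 1 factors as (x - 1)^3, so the unique real root is 1.

def p(x):
    return x**3 - 3*x**2 + 3*x - 1

# Two polynomials of degree <= 3 that agree at more than 3 points are
# identical, so agreement on this sample proves the factorization.
assert all(p(x) == (x - 1)**3 for x in range(-10, 11))

# And x = 1 is a root, indeed the only real one, of (x - 1)^3 = 0.
assert p(1) == 0
```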
  
In sign relational terms, we have the 3-tuples:

| <o, "x^3 - 3x^2 + 3x - 1 = 0", "(x-1)^3 = 0">
|
| <o, "(x-1)^3 = 0", "1">
|
| <o, "1", "1">

As it turns out, we discover that the
object o was really just 1 all along.

===Jan 2001 &bull; Differential Analytic Turing Automata===

====DATA &bull; Arisbe List====

* http://web.archive.org/web/20150107163000/http://stderr.org/pipermail/arisbe/2001-January/thread.html#182
# http://web.archive.org/web/20061013224128/http://stderr.org/pipermail/arisbe/2001-January/000182.html
# http://web.archive.org/web/20061013224814/http://stderr.org/pipermail/arisbe/2001-January/000200.html
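The 3-tuples above can be set down as data and their chain structure checked directly.  A reader's sketch, with the variable names chosen here for illustration:  each triple has the form <object, sign, interpretant>, and each interpretant serves as the sign of the next triple.

```python
# Sketch of the sign relation listed above: triples <object, sign,
# interpretant>, with each interpretant passing the object along as
# the sign of the next step.

o = 1  # the object, as discovered at the end of the computation

sign_relation = [
    (o, 'x^3 - 3x^2 + 3x - 1 = 0', '(x-1)^3 = 0'),
    (o, '(x-1)^3 = 0', '1'),
    (o, '1', '1'),
]

# The semiotic chain: the interpretant of each step is the sign of the next.
for (_, _, i1), (_, s2, _) in zip(sign_relation, sign_relation[1:]):
    assert i1 == s2

# The chain ends in a fixed point, a sign that is its own interpretant.
assert sign_relation[-1][1] == sign_relation[-1][2]
```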
  
But why do we put ourselves through the rigors of these
transformations at all?  If 1 is what we mean, why not
just say "1" in the first place and be done with it?
A person who asks a question like that has forgotten
how we keep getting ourselves into these quandaries,
and who it is that assigns the problems, for it is
Nature herself who is the taskmistress here and the
problems are set in the manner that she determines,
not in the style to which we would like to become
accustomed.  The best that we can demand of our
various and sundry calculi is that they afford
us the nets and the snares more readily
to catch the shape of the problematic game
as it flies up before us on its own wings,
and only then to tame it to the amenable
demeanors that we find to our liking.

In sum, the first place is not ours to take.
We are but poor second players in this game.

====DATA &bull; Ontology List====

* http://web.archive.org/web/20130304165332/http://suo.ieee.org/ontology/thrd95.html#00596
# http://web.archive.org/web/20041021223934/http://suo.ieee.org/ontology/msg00596.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg00618.html
  
That understood, I can now lay out our present Example
along the lines of this familiar mathematical exercise.

| o is
| what Consat.Log denotes is
| what Consat.Mod denotes is
| what Consat.Ten denotes is
| what Consat.Sen denotes.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

===Mar 2001 &bull; Propositional Equation Reasoning Systems===

====PERS &bull; Arisbe List====

* http://web.archive.org/web/20150107210802/http://stderr.org/pipermail/arisbe/2001-March/thread.html#380
* http://web.archive.org/web/20150107212028/http://stderr.org/pipermail/arisbe/2001-April/thread.html#407
# http://web.archive.org/web/20150107210011/http://stderr.org/pipermail/arisbe/2001-March/000380.html
# http://web.archive.org/web/20050920031758/http://stderr.org/pipermail/arisbe/2001-April/000407.html
# http://web.archive.org/web/20051202010243/http://stderr.org/pipermail/arisbe/2001-April/000409.html
# http://web.archive.org/web/20051202074355/http://stderr.org/pipermail/arisbe/2001-April/000411.html
# http://web.archive.org/web/20051202021217/http://stderr.org/pipermail/arisbe/2001-April/000412.html
# http://web.archive.org/web/20051201225716/http://stderr.org/pipermail/arisbe/2001-April/000413.html
# http://web.archive.org/web/20051202001736/http://stderr.org/pipermail/arisbe/2001-April/000416.html
# http://web.archive.org/web/20051202053817/http://stderr.org/pipermail/arisbe/2001-April/000417.html
# http://web.archive.org/web/20051202013458/http://stderr.org/pipermail/arisbe/2001-April/000421.html
# http://web.archive.org/web/20051202013024/http://stderr.org/pipermail/arisbe/2001-April/000427.html
# http://web.archive.org/web/20051202032812/http://stderr.org/pipermail/arisbe/2001-April/000428.html
# http://web.archive.org/web/20051201225109/http://stderr.org/pipermail/arisbe/2001-April/000430.html
# http://web.archive.org/web/20050908023250/http://stderr.org/pipermail/arisbe/2001-April/000432.html
# http://web.archive.org/web/20051202002952/http://stderr.org/pipermail/arisbe/2001-April/000433.html
# http://web.archive.org/web/20051201220336/http://stderr.org/pipermail/arisbe/2001-April/000434.html
# http://web.archive.org/web/20050906215058/http://stderr.org/pipermail/arisbe/2001-April/000435.html

It will be good to keep this picture before us a while longer:
  
o-----------------------------o-----------------------------o
|    Objective Framework      |   Interpretive Framework    |
o-----------------------------o-----------------------------o
|                                                           |
|                               s_1 = Logue(o)      |       |
|                              /                    |       |
|                             /                     |       |
|                            @                      |       |
|                           · \                     |       |
|                         ·    \                    |       |
|                       ·       i_1 = Model(o)      v       |
|                     ·         s_2 = Model(o)      |       |
|                   ·          /                    |       |
|                 ·           /                     |       |
|    Object = o · · · · · · @                       |       |
|                 ·           \                     |       |
|                   ·          \                    |       |
|                     ·         i_2 = Tenor(o)      v       |
|                       ·       s_3 = Tenor(o)      |       |
|                         ·    /                    |       |
|                           · /                     |       |
|                            @                      |       |
|                             \                     |       |
|                              \                    |       |
|                               i_3 = Sense(o)      v       |
|                                                           |
o-----------------------------------------------------------o
Figure.  Computation As Semiotic Transformation

====PERS &bull; Arisbe List &bull; Discussion====
 
  
* http://web.archive.org/web/20150107212028/http://stderr.org/pipermail/arisbe/2001-April/thread.html#397
# http://web.archive.org/web/20150107212003/http://stderr.org/pipermail/arisbe/2001-April/000397.html

The labels that decorate the syntactic plane and indicate
the semiotic transitions in the interpretive panel of the
framework point us to text files whose contents rest here:

http://suo.ieee.org/ontology/msg03722.html
The reason that I am troubling myself -- and no doubt you --
with the details of this Example is that it highlights
a number of the thistles that we will have to grasp if we
ever want to escape from the traps of YARNBOL and YARWARS
in which so many of our fairweather fiends are seeking to
ensnare us, and not just us -- the whole web of the world.

====PERS &bull; Ontology List====

* http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/thrd74.html#01779
# http://web.archive.org/web/20070326233418/http://suo.ieee.org/ontology/msg01779.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg01897.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg02005.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg02011.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg02014.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg02015.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg02024.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg02046.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg02047.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg02068.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg02102.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg02109.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg02117.html
# http://web.archive.org/web/20040116001230/http://suo.ieee.org/ontology/msg02125.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg02128.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg02134.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg02138.html
  
YARNBOL  =  Yet Another Roman Numeral Based Ontology Language.
YARWARS  =  Yet Another Representation Without A Reasoning System.

====PERS &bull; SUO List====
 
  
* http://web.archive.org/web/20130109194711/http://suo.ieee.org/email/thrd187.html#04187
# http://web.archive.org/web/20140423181000/http://suo.ieee.org/email/msg04187.html
# http://web.archive.org/web/20070922193822/http://suo.ieee.org/email/msg04305.html
# http://web.archive.org/web/20071007170752/http://suo.ieee.org/email/msg04413.html
# http://web.archive.org/web/20070121063018/http://suo.ieee.org/email/msg04419.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/email/msg04422.html
# http://web.archive.org/web/20070305132316/http://suo.ieee.org/email/msg04423.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/email/msg04432.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/email/msg04454.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/email/msg04455.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/email/msg04476.html
# http://web.archive.org/web/20060718091105/http://suo.ieee.org/email/msg04510.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/email/msg04517.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/email/msg04525.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/email/msg04533.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/email/msg04536.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/email/msg04542.html
# http://web.archive.org/web/20050824231950/http://suo.ieee.org/email/msg04546.html

In order to avoid this, or to reverse the trend once it gets started,
we just have to remember what a dynamic living process a computation
really is, precisely because it is meant to serve as an iconic image
of dynamic, deliberate, purposeful transformations that we are bound
to go through and to carry out in a hopeful pursuit of the solutions
to the many real live problems that life and society place before us.
So I take it rather seriously.
  
Okay, back to the grindstone.

The question is:  "Why are these trips necessary?"

How come we don't just have one proper expression
for each situation under the sun, or all possible
suns, I guess, for some, and just use that on any
appearance, instance, occasion of that situation?

Why is it ever necessary to begin with an obscure description
of a situation? -- for that is exactly what the propositional
expression called "Logue(o)", for Example, the Consat.Log file,
really is.

Maybe I need to explain that first.

===Jul 2001 &bull; Reflective Extension Of Logical Graphs===

====RefLog &bull; Arisbe List====

* http://web.archive.org/web/20150109141200/http://stderr.org/pipermail/arisbe/2001-July/thread.html#711
# http://web.archive.org/web/20150109141000/http://stderr.org/pipermail/arisbe/2001-July/000711.html

====RefLog &bull; SUO List====

* http://web.archive.org/web/20070302133623/http://suo.ieee.org/email/thrd154.html#05694
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/email/msg05694.html
  
The first three items of syntax -- Logue(o), Model(o), Tenor(o) --
are all just so many different propositional expressions that
denote one and the same logical-valued function p : X -> %B%,
and one whose abstract image we may well enough describe as
a boolean function of the abstract type q : %B%^k -> %B%,
where k happens to be 18 in the present Consat Example.

If we were to write out the truth table for q : %B%^18 -> %B%
it would take 2^18 = 262144 rows.  Using the bold letter #x#
for a coordinate tuple, writing #x# = <x_1, ..., x_18>, each
row of the table would have the form <x_1, ..., x_18, q(#x#)>.
And the function q is such that all rows evaluate to %0% save one.

===Dec 2001 &bull; Functional Conception Of Quantificational Logic===

====FunLog &bull; Arisbe List====
 
  
* http://web.archive.org/web/20141005034441/http://stderr.org/pipermail/arisbe/2001-December/thread.html#1212
# http://web.archive.org/web/20141005034614/http://stderr.org/pipermail/arisbe/2001-December/001212.html
# http://web.archive.org/web/20141005034615/http://stderr.org/pipermail/arisbe/2001-December/001213.html
# http://web.archive.org/web/20051202034557/http://stderr.org/pipermail/arisbe/2001-December/001216.html
# http://web.archive.org/web/20051202074331/http://stderr.org/pipermail/arisbe/2001-December/001221.html
# http://web.archive.org/web/20051201235028/http://stderr.org/pipermail/arisbe/2001-December/001222.html
# http://web.archive.org/web/20051202052037/http://stderr.org/pipermail/arisbe/2001-December/001223.html
# http://web.archive.org/web/20050827214411/http://stderr.org/pipermail/arisbe/2001-December/001224.html
# http://web.archive.org/web/20051202092500/http://stderr.org/pipermail/arisbe/2001-December/001225.html
# http://web.archive.org/web/20051202051942/http://stderr.org/pipermail/arisbe/2001-December/001226.html
# http://web.archive.org/web/20050425140213/http://stderr.org/pipermail/arisbe/2001-December/001227.html

Each of the four different formats expresses this fact about q
in its own way.  The first three are logically equivalent, and
the last one is the maximally determinate positive implication
of what the others all say.
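The claim that q has exactly one satisfying row among the 2^18 can be verified by brute force.  The sketch below is a reader's reconstruction, not the author's code:  the variable names come from the Logue listing, the "just one true" triples are encoded directly as Python predicates, and the parenthesized clues as negative conditions.

```python
# Brute-force check that the Consat constraints pin down exactly one
# of the 2^18 = 262144 rows of the truth table for q : B^18 -> B.
from itertools import product

PEOPLE = ('pete', 'paul', 'jane')
PLAYS  = ('guitar', 'sax', 'drums')
FEARS  = ('13', 'cats', 'height')

VARS = tuple(f'{p}_plays_{i}' for p in PEOPLE for i in PLAYS) \
     + tuple(f'{p}_fears_{f}' for p in PEOPLE for f in FEARS)   # 18 variables

def just_one(bits):
    """((a),(b),(c)): exactly one of the arguments is true."""
    return sum(bits) == 1

def q(v):
    # Each person plays just one instrument; each instrument, one player.
    if not all(just_one([v[f'{p}_plays_{i}'] for i in PLAYS]) for p in PEOPLE):
        return False
    if not all(just_one([v[f'{p}_plays_{i}'] for p in PEOPLE]) for i in PLAYS):
        return False
    # Same pattern for the fears.
    if not all(just_one([v[f'{p}_fears_{f}'] for f in FEARS]) for p in PEOPLE):
        return False
    if not all(just_one([v[f'{p}_fears_{f}'] for p in PEOPLE]) for f in FEARS):
        return False
    # The clues: ( x ) rules out x, ( x y ) rules out the conjunction.
    if v['pete_plays_guitar'] or v['pete_fears_height']:
        return False
    if v['paul_fears_cats'] or v['paul_plays_sax']:
        return False
    for p in PEOPLE:
        if v[f'{p}_plays_guitar'] and v[f'{p}_fears_height']:
            return False
        if v[f'{p}_plays_sax'] and v[f'{p}_fears_cats']:
            return False
        if v[f'{p}_plays_drums'] and (v[f'{p}_fears_13'] or v[f'{p}_fears_height']):
            return False
    return True

models = []
for row in product((False, True), repeat=18):   # all 262144 rows
    v = dict(zip(VARS, row))
    if q(v):
        models.append(sorted(x for x in VARS if v[x]))

# Exactly one row evaluates to true, and its positive part is the Sense.
assert len(models) == 1
assert models[0] == sorted(['pete_plays_drums', 'paul_plays_guitar',
                            'jane_plays_sax',  'pete_fears_cats',
                            'paul_fears_13',   'jane_fears_height'])
```

The six positive facts recovered here are exactly the lines of the Consat.Sen listing above.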
  
From this point of view, the logical computation that we went through,
in the sequence Logue, Model, Tenor, Sense, was a process of changing
from an obscure sign of the objective proposition to a more organized
arrangement of its satisfying or unsatisfying interpretations, to the
most succinct possible expression of the same meaning, to an adequate
positive projection of it that is useful enough in the proper context.

====FunLog &bull; Ontology List====
 
  
* http://web.archive.org/web/20120222171225/http://suo.ieee.org/ontology/thrd38.html#03562
# http://web.archive.org/web/20110608022546/http://suo.ieee.org/ontology/msg03562.html
# http://web.archive.org/web/20110608022712/http://suo.ieee.org/ontology/msg03563.html
# http://web.archive.org/web/20110608023312/http://suo.ieee.org/ontology/msg03564.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03565.html
# http://web.archive.org/web/20070812011325/http://suo.ieee.org/ontology/msg03569.html
# http://web.archive.org/web/20110608023228/http://suo.ieee.org/ontology/msg03570.html
# http://web.archive.org/web/20110608022616/http://suo.ieee.org/ontology/msg03568.html
# http://web.archive.org/web/20110608023557/http://suo.ieee.org/ontology/msg03572.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03577.html
# http://web.archive.org/web/20070317021141/http://suo.ieee.org/ontology/msg03578.html
# http://web.archive.org/web/20110608021549/http://suo.ieee.org/ontology/msg03579.html
# http://web.archive.org/web/20110608021332/http://suo.ieee.org/ontology/msg03580.html
# http://web.archive.org/web/20110608020250/http://suo.ieee.org/ontology/msg03581.html
# http://web.archive.org/web/20110608021344/http://suo.ieee.org/ontology/msg03582.html
# http://web.archive.org/web/20110608021557/http://suo.ieee.org/ontology/msg03583.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg04247.html

This is the sort of mill -- it's called "computation" -- that we have
to be able to put our representations through on a recurrent, regular,
routine basis, that is, if we expect them to have any utility at all.
And it is only when we have started to do that in genuinely effective
and efficient ways, that we can even begin to think about facilitating
any bit of qualitative conceptual analysis through computational means.
  
And as far as the qualitative side of logical computation
and conceptual analysis goes, we have barely even started.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

===Dec 2001 &bull; Cactus Language===

====Cactus Town Cartoons &bull; Arisbe List====
  
We are contemplating the sequence of initial and normal forms
+
* http://web.archive.org/web/20141005034441/http://stderr.org/pipermail/arisbe/2001-December/thread.html#1214
for the Consat problem and we have noted the following system
+
# http://web.archive.org/web/20050825005438/http://stderr.org/pipermail/arisbe/2001-December/001214.html
of logical relations, taking the enchained expressions of the
+
# http://web.archive.org/web/20051202101235/http://stderr.org/pipermail/arisbe/2001-December/001217.html
objective situation o in a pairwise associated way, of course:
 
  
Logue(o)  <=>  Model(o)  <=>  Tenor(o)  =>  Sense(o).
+
====Cactus Town Cartoons &bull; Ontology List====
  
The specifics of the propositional expressions are cited here:
+
* http://web.archive.org/web/20120222171225/http://suo.ieee.org/ontology/thrd38.html#03567
 +
# http://web.archive.org/web/20110608023426/http://suo.ieee.org/ontology/msg03567.html
 +
# http://web.archive.org/web/20110608024449/http://suo.ieee.org/ontology/msg03571.html
  
http://suo.ieee.org/ontology/msg03722.html
If we continue to pursue the analogy that we made with the form
of mathematical activity commonly known as "solving equations",
then there are many salient features of this type of logical
problem solving endeavor that suddenly leap into the light.

First of all, we notice the importance of "equational reasoning"
in mathematics, by which I mean, not just the quantitative type
of equation that forms the matter of the process, but also the
qualitative type of equation, or the "logical equivalence",
that connects each expression along the way, right up to
the penultimate stage, when we are satisfied in a given
context to take a projective implication of the total
knowledge of the situation that we have been taking
some pains to preserve at every intermediate stage
of the game.

===Jan 2002 &bull; Zeroth Order Theories===

====ZOT &bull; Arisbe List====

* http://web.archive.org/web/20150109041904/http://stderr.org/pipermail/arisbe/2002-January/thread.html#1293
# http://web.archive.org/web/20150109042401/http://stderr.org/pipermail/arisbe/2002-January/001293.html
# http://web.archive.org/web/20150109042402/http://stderr.org/pipermail/arisbe/2002-January/001294.html
# http://web.archive.org/web/20050503213326/http://stderr.org/pipermail/arisbe/2002-January/001295.html
# http://web.archive.org/web/20050503213330/http://stderr.org/pipermail/arisbe/2002-January/001296.html
# http://web.archive.org/web/20050504070444/http://stderr.org/pipermail/arisbe/2002-January/001299.html
# http://web.archive.org/web/20050504070430/http://stderr.org/pipermail/arisbe/2002-January/001300.html
# http://web.archive.org/web/20050504070700/http://stderr.org/pipermail/arisbe/2002-January/001301.html
# http://web.archive.org/web/20050504070704/http://stderr.org/pipermail/arisbe/2002-January/001302.html
# http://web.archive.org/web/20050504070712/http://stderr.org/pipermail/arisbe/2002-January/001304.html
# http://web.archive.org/web/20050504070717/http://stderr.org/pipermail/arisbe/2002-January/001305.html
# http://web.archive.org/web/20050504070722/http://stderr.org/pipermail/arisbe/2002-January/001306.html
# http://web.archive.org/web/20050504070726/http://stderr.org/pipermail/arisbe/2002-January/001308.html
# http://web.archive.org/web/20050504070730/http://stderr.org/pipermail/arisbe/2002-January/001309.html
# http://web.archive.org/web/20050504070434/http://stderr.org/pipermail/arisbe/2002-January/001310.html
# http://web.archive.org/web/20050504070742/http://stderr.org/pipermail/arisbe/2002-January/001313.html
# http://web.archive.org/web/20050504070746/http://stderr.org/pipermail/arisbe/2002-January/001314.html
# http://web.archive.org/web/20050504070438/http://stderr.org/pipermail/arisbe/2002-January/001315.html
# http://web.archive.org/web/20050504070540/http://stderr.org/pipermail/arisbe/2002-January/001316.html
# http://web.archive.org/web/20050504070750/http://stderr.org/pipermail/arisbe/2002-January/001317.html
  
This general pattern or strategy of inference, working its way through
phases of "equational" or "total information preserving" inference and
phases of "implicational" or "selective information losing" inference,
is actually very common throughout mathematics, and I have in mind to
examine its character in greater detail and in a more general setting.

Just as the barest hint of things to come along these lines, you might
consider the question of what would constitute the equational analogue
of modus ponens, in other words the scheme of inference that goes from
x and x=>y to y.  Well, the answer is a scheme of inference that passes
from x and x=>y to x&y, and then, being reversible, back again.  I will
explore the rationale and the utility of this gambit in future reports.

====ZOT &bull; Arisbe List &bull; Discussion====

* http://web.archive.org/web/20150109041904/http://stderr.org/pipermail/arisbe/2002-January/thread.html#1293
# http://web.archive.org/web/20050503213334/http://stderr.org/pipermail/arisbe/2002-January/001297.html
# http://web.archive.org/web/20050504070656/http://stderr.org/pipermail/arisbe/2002-January/001298.html
# http://web.archive.org/web/20050504070708/http://stderr.org/pipermail/arisbe/2002-January/001303.html
# http://web.archive.org/web/20050504070544/http://stderr.org/pipermail/arisbe/2002-January/001307.html
# http://web.archive.org/web/20050504070734/http://stderr.org/pipermail/arisbe/2002-January/001311.html
# http://web.archive.org/web/20050504070738/http://stderr.org/pipermail/arisbe/2002-January/001312.html
# http://web.archive.org/web/20050504070755/http://stderr.org/pipermail/arisbe/2002-January/001318.html
  
One observation that we can make already at this point,
however, is that these schemes of equational reasoning,
or reversible inference, remain poorly developed among
our currently prevailing styles of inference in logic,
their potentials for applied logical software hardly
being broached in our presently available systems.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

====ZOT &bull; Ontology List====

* http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/thrd35.html#03680
# http://web.archive.org/web/20070323210742/http://suo.ieee.org/ontology/msg03680.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03681.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03682.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03683.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03691.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03693.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03695.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03696.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03701.html
# http://web.archive.org/web/20070329211521/http://suo.ieee.org/ontology/msg03702.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03703.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03706.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03707.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03708.html
# http://web.archive.org/web/20080620074722/http://suo.ieee.org/ontology/msg03712.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03715.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03716.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03717.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03718.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03721.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03722.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03723.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03724.html
  
Extra Examples

1.  Propositional logic example.
Files:  Alpha.lex + Prop.log
Ref:    [Cha, 20, Example 2.12]

====ZOT &bull; Ontology List &bull; Discussion====

* http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/thrd35.html#03680
* http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/thrd35.html#03697
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03684.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03685.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03686.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03687.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03689.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03690.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03694.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03697.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03698.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03699.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03700.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03704.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03705.html
# http://web.archive.org/web/20070330093628/http://suo.ieee.org/ontology/msg03709.html
# http://web.archive.org/web/20080705071714/http://suo.ieee.org/ontology/msg03710.html
# http://web.archive.org/web/20080620010020/http://suo.ieee.org/ontology/msg03711.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03713.html
# http://web.archive.org/web/20080620074749/http://suo.ieee.org/ontology/msg03714.html
# http://web.archive.org/web/20061005100254/http://suo.ieee.org/ontology/msg03719.html
# http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03720.html
  
2.  Chemical synthesis problem.
Files:  Chem.*
Ref:    [Cha, 21, Example 2.13]

3.  N Queens problem.
Files:  Queen*.*, Q8.*, Q5.*
Refs:   [BaC, 166], [VaH, 122], [Wir, 143].
Notes:  Only the 5 Queens example will run in 640K memory.
        Use the "Queen.lex" file to load the "Q5.eg*" log files.

===Mar 2003 &bull; Theme One Program &bull; Logical Cacti===

* http://web.archive.org/web/20150224210000/http://stderr.org/pipermail/inquiry/2003-March/thread.html#102
* http://web.archive.org/web/20150224210000/http://stderr.org/pipermail/inquiry/2003-March/thread.html#114
# http://web.archive.org/web/20081007043317/http://stderr.org/pipermail/inquiry/2003-March/000114.html
# http://web.archive.org/web/20080908075558/http://stderr.org/pipermail/inquiry/2003-March/000115.html
# http://web.archive.org/web/20080908080336/http://stderr.org/pipermail/inquiry/2003-March/000116.html
  
4.  Five Houses puzzle.
Files:  House.*
Ref:    [VaH, 132].
Notes:  Will not run in 640K memory.

5.  Graph coloring example.
Files:  Color.*
Ref:    [Wil, 196].

===Feb 2005 &bull; Theme One Program &bull; Logical Cacti===

* http://web.archive.org/web/20150109155110/http://stderr.org/pipermail/inquiry/2005-February/thread.html#2348
* http://web.archive.org/web/20150109155110/http://stderr.org/pipermail/inquiry/2005-February/thread.html#2360
# http://web.archive.org/web/20150109152359/http://stderr.org/pipermail/inquiry/2005-February/002360.html
# http://web.archive.org/web/20150109152401/http://stderr.org/pipermail/inquiry/2005-February/002361.html
# http://web.archive.org/web/20061013233259/http://stderr.org/pipermail/inquiry/2005-February/002362.html
# http://web.archive.org/web/20081121103109/http://stderr.org/pipermail/inquiry/2005-February/002363.html
  
6.  Examples of Cook's Theorem in computational complexity,
    that propositional satisfiability is NP-complete.

Files:  StiltN.* = "Space and Time Limited Turing Machine",
        with N units of space and N units of time.
        StuntN.* = "Space and Time Limited Turing Machine",
        for computing the parity of a bit string,
        with Number of Tape cells of input equal to N.
Ref:    [Wil, 188-201].
Notes:  Can only run Turing machine example for input of size 2.
        Since the last tape cell is used for an end-of-file marker,
        this amounts to only one significant digit of computation.
        Use the "Stilt3.lex" file to load the "Stunt2.egN" files.
        Their Sense file outputs appear on the "Stunt2.seN" files.

7.  Fabric knowledge base.
Files:  Fabric.*, Fab.*
Ref:    [MaW, 8-16].

[[Category:Artificial Intelligence]]
[[Category:Charles Sanders Peirce]]
[[Category:Combinatorics]]
[[Category:Computer Science]]
[[Category:Cybernetics]]
[[Category:Equational Reasoning]]
[[Category:Formal Languages]]
[[Category:Formal Systems]]
[[Category:Graph Theory]]
[[Category:Knowledge Representation]]
[[Category:Logic]]
[[Category:Logical Graphs]]
[[Category:Mathematics]]
[[Category:Philosophy]]
[[Category:Semiotics]]
[[Category:Visualization]]
 
 
 
8.  Constraint Satisfaction example.
 
Files:  Consat1.*, Consat2.*
 
Ref:    [Win, 449, Exercise 3-9].
 
Notes:  Attributed to Kenneth D. Forbus.
 
 
 
References
 
 
 
| Angluin, Dana,
 
|"Learning with Hints", in
 
|'Proceedings of the 1988 Workshop on Computational Learning Theory',
 
| edited by D. Haussler & L. Pitt, Morgan Kaufmann, San Mateo, CA, 1989.
 
 
 
| Ball, W.W. Rouse, & Coxeter, H.S.M.,
 
|'Mathematical Recreations and Essays', 13th ed.,
 
| Dover, New York, NY, 1987.
 
 
 
| Chang, Chin-Liang & Lee, Richard Char-Tung,
 
|'Symbolic Logic and Mechanical Theorem Proving',
 
| Academic Press, New York, NY, 1973.
 
 
 
| Denning, Peter J., Dennis, Jack B., and Qualitz, Joseph E.,
 
|'Machines, Languages, and Computation',
 
| Prentice-Hall, Englewood Cliffs, NJ, 1978.
 
 
 
| Edelman, Gerald M.,
 
|'Topobiology:  An Introduction to Molecular Embryology',
 
| Basic Books, New York, NY, 1988.
 
 
 
| Lloyd, J.W.,
 
|'Foundations of Logic Programming',
 
| Springer-Verlag, Berlin, 1984.
 
 
 
| Maier, David & Warren, David S.,
 
|'Computing with Logic:  Logic Programming with Prolog',
 
| Benjamin/Cummings, Menlo Park, CA, 1988.
 
 
 
| McClelland, James L. and Rumelhart, David E.,
 
|'Explorations in Parallel Distributed Processing:
 
| A Handbook of Models, Programs, and Exercises',
 
| MIT Press, Cambridge, MA, 1988.
 
 
 
| Peirce, Charles Sanders,
 
|'Collected Papers of Charles Sanders Peirce',
 
| edited by Charles Hartshorne, Paul Weiss, & Arthur W. Burks,
 
| Harvard University Press, Cambridge, MA, 1931-1960.
 
 
 
| Peirce, Charles Sanders,
 
|'The New Elements of Mathematics',
 
| edited by Carolyn Eisele, Mouton, The Hague, 1976.
 
 
 
|'Charles S. Peirce: Selected Writings;  Values in a Universe of Chance',
 
| edited by Philip P. Wiener, Dover, New York, NY, 1966.
 
 
 
| Spencer Brown, George,
 
|'Laws of Form',
 
| George Allen & Unwin, London, UK, 1969.
 
 
 
| Van Hentenryck, Pascal,
 
|'Constraint Satisfaction in Logic Programming',
 
| MIT Press, Cambridge, MA, 1989.
 
 
 
| Wilf, Herbert S.,
 
|'Algorithms and Complexity',
 
| Prentice-Hall, Englewood Cliffs, NJ, 1986.
 
 
 
| Winston, Patrick Henry,
 
|'Artificial Intelligence', 2nd ed.,
 
| Addison-Wesley, Reading, MA, 1984.
 
 
 
| Wirth, Niklaus,
 
|'Algorithms + Data Structures = Programs',
 
| Prentice-Hall, Englewood Cliffs, NJ, 1976.
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
Cactus Town Cartoons
 
 
 
01.  http://suo.ieee.org/ontology/msg03567.html
 
02.  http://suo.ieee.org/ontology/msg03571.html
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
Differential Analytic Turing Automata (DATA)
 
 
 
01.  http://suo.ieee.org/ontology/msg00596.html
 
02.  http://suo.ieee.org/ontology/msg00618.html
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
Differential Logic
 
 
 
01.  http://suo.ieee.org/ontology/msg04040.html
 
02.  http://suo.ieee.org/ontology/msg04041.html
 
03.  http://suo.ieee.org/ontology/msg04045.html
 
04.  http://suo.ieee.org/ontology/msg04046.html
 
05.  http://suo.ieee.org/ontology/msg04047.html
 
06.  http://suo.ieee.org/ontology/msg04048.html
 
07.  http://suo.ieee.org/ontology/msg04052.html
 
08.  http://suo.ieee.org/ontology/msg04054.html
 
09.  http://suo.ieee.org/ontology/msg04055.html
 
10.  http://suo.ieee.org/ontology/msg04067.html
 
11.  http://suo.ieee.org/ontology/msg04068.html
 
12.  http://suo.ieee.org/ontology/msg04069.html
 
13.  http://suo.ieee.org/ontology/msg04070.html
 
14.  http://suo.ieee.org/ontology/msg04072.html
 
15.  http://suo.ieee.org/ontology/msg04073.html
 
16.  http://suo.ieee.org/ontology/msg04074.html
 
17.  http://suo.ieee.org/ontology/msg04077.html
 
18.  http://suo.ieee.org/ontology/msg04079.html
 
19.  http://suo.ieee.org/ontology/msg04080.html
 
20.
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
Extensions Of Logical Graphs
 
 
 
01.  http://www.virtual-earth.de/CG/cg-list/old/msg03351.html
 
02.  http://www.virtual-earth.de/CG/cg-list/old/msg03352.html
 
03.  http://www.virtual-earth.de/CG/cg-list/old/msg03353.html
 
04.  http://www.virtual-earth.de/CG/cg-list/old/msg03354.html
 
05.  http://www.virtual-earth.de/CG/cg-list/old/msg03376.html
 
06.  http://www.virtual-earth.de/CG/cg-list/old/msg03379.html
 
07.  http://www.virtual-earth.de/CG/cg-list/old/msg03381.html
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
Functional Conception Of Quantificational Logic
 
 
 
01.  http://suo.ieee.org/ontology/msg03562.html
 
02.  http://suo.ieee.org/ontology/msg03563.html
 
03.  http://suo.ieee.org/ontology/msg03577.html
 
04.  http://suo.ieee.org/ontology/msg03578.html
 
05.  http://suo.ieee.org/ontology/msg03579.html
 
06.  http://suo.ieee.org/ontology/msg03580.html
 
07.  http://suo.ieee.org/ontology/msg03581.html
 
08.  http://suo.ieee.org/ontology/msg03582.html
 
09.  http://suo.ieee.org/ontology/msg03583.html
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
Propositional Equation Reasoning Systems (PERS)
 
 
 
01.  http://suo.ieee.org/email/msg04187.html
 
02.  http://suo.ieee.org/email/msg04305.html
 
03.  http://suo.ieee.org/email/msg04413.html
 
04.  http://suo.ieee.org/email/msg04419.html
 
05.  http://suo.ieee.org/email/msg04422.html
 
06.  http://suo.ieee.org/email/msg04423.html
 
07.  http://suo.ieee.org/email/msg04432.html
 
08.  http://suo.ieee.org/email/msg04454.html
 
09.  http://suo.ieee.org/email/msg04455.html
 
10.  http://suo.ieee.org/email/msg04476.html
 
11.  http://suo.ieee.org/email/msg04510.html
 
12.  http://suo.ieee.org/email/msg04517.html
 
13.  http://suo.ieee.org/email/msg04525.html
 
14.  http://suo.ieee.org/email/msg04533.html
 
15.  http://suo.ieee.org/email/msg04536.html
 
16.  http://suo.ieee.org/email/msg04542.html
 
17.  http://suo.ieee.org/email/msg04546.html
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
Reflective Extension Of Logical Graphs (RefLog)
 
 
 
01.  http://suo.ieee.org/email/msg05694.html
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
Sequential Interactions Generating Hypotheses
 
 
 
01.  http://suo.ieee.org/email/msg02607.html
 
02.  http://suo.ieee.org/email/msg02608.html
 
03.  http://suo.ieee.org/email/msg03183.html
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
Sowa's Top Level Categories
 
 
 
01.  http://suo.ieee.org/email/msg01949.html
 
02.  http://suo.ieee.org/email/msg01956.html
 
03.  http://suo.ieee.org/email/msg01966.html
 
 
 
04.  http://suo.ieee.org/ontology/msg00048.html
 
05.  http://suo.ieee.org/ontology/msg00051.html
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
Zeroth Order Logic (ZOL)
 
 
 
01.  http://suo.ieee.org/email/msg01246.html
 
02.  http://suo.ieee.org/email/msg01406.html
 
03.  http://suo.ieee.org/email/msg01546.html
 
04.  http://suo.ieee.org/email/msg01561.html
 
05.  http://suo.ieee.org/email/msg01670.html
 
06.  http://suo.ieee.org/email/msg01739.html
 
07.  http://suo.ieee.org/email/msg01966.html
 
08.  http://suo.ieee.org/email/msg01985.html
 
09.  http://suo.ieee.org/email/msg01988.html
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
 
 
Zeroth Order Theories (ZOT's)
 
 
 
01.  http://suo.ieee.org/ontology/msg03680.html
 
02.  http://suo.ieee.org/ontology/msg03681.html
 
03.  http://suo.ieee.org/ontology/msg03682.html
 
04.  http://suo.ieee.org/ontology/msg03683.html
 
05.  http://suo.ieee.org/ontology/msg03685.html
 
06.  http://suo.ieee.org/ontology/msg03687.html
 
07.  http://suo.ieee.org/ontology/msg03689.html
 
08.  http://suo.ieee.org/ontology/msg03691.html
 
09.  http://suo.ieee.org/ontology/msg03693.html
 
10.  http://suo.ieee.org/ontology/msg03694.html
 
11.  http://suo.ieee.org/ontology/msg03695.html
 
12.  http://suo.ieee.org/ontology/msg03696.html
 
13.  http://suo.ieee.org/ontology/msg03700.html
 
14.  http://suo.ieee.org/ontology/msg03701.html
 
15.  http://suo.ieee.org/ontology/msg03702.html
 
16.  http://suo.ieee.org/ontology/msg03703.html
 
17.  http://suo.ieee.org/ontology/msg03705.html
 
18.  http://suo.ieee.org/ontology/msg03706.html
 
19.  http://suo.ieee.org/ontology/msg03707.html
 
20.  http://suo.ieee.org/ontology/msg03708.html
 
21.  http://suo.ieee.org/ontology/msg03709.html
 
22.  http://suo.ieee.org/ontology/msg03711.html
 
23.  http://suo.ieee.org/ontology/msg03712.html
 
24.  http://suo.ieee.org/ontology/msg03715.html
 
25.  http://suo.ieee.org/ontology/msg03716.html
 
26.  http://suo.ieee.org/ontology/msg03717.html
 
27.  http://suo.ieee.org/ontology/msg03718.html
 
28.  http://suo.ieee.org/ontology/msg03720.html
 
29.  http://suo.ieee.org/ontology/msg03721.html
 
30.  http://suo.ieee.org/ontology/msg03722.html
 
31.  http://suo.ieee.org/ontology/msg03723.html
 
32.  http://suo.ieee.org/ontology/msg03724.html
 
 
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
 
</pre>
 

Latest revision as of 20:44, 2 August 2017

Author: Jon Awbrey

The Cactus Patch

Thus, what looks to us like a sphere of scientific knowledge more accurately should be represented as the inside of a highly irregular and spiky object, like a pincushion or porcupine, with very sharp extensions in certain directions, and virtually no knowledge in immediately adjacent areas. If our intellectual gaze could shift slightly, it would alter each quill's direction, and suddenly our entire reality would change.

— Herbert J. Bernstein, “Idols of Modern Science”, [HJB, 38]

In this and the four subsections that follow, I describe a calculus for representing propositions as sentences, in other words, as syntactically defined sequences of signs, and for manipulating these sentences chiefly in the light of their semantically defined contents, in other words, with respect to their logical values as propositions. In their computational representation, the expressions of this calculus parse into a class of tree-like data structures called painted cacti. This is a family of graph-theoretic data structures that can be observed to have especially nice properties, turning out to be not only useful from a computational standpoint but also quite interesting from a theoretical point of view. The rest of this subsection serves to motivate the development of this calculus and treats a number of general issues that surround the topic.

In order to facilitate the use of propositions as indicator functions it helps to acquire a flexible notation for referring to propositions in that light, for interpreting sentences in a corresponding role, and for negotiating the requirements of mutual sense between the two domains. If none of the formalisms that are readily available or in common use are able to meet all of the design requirements that come to mind, then it is necessary to contemplate the design of a new language that is especially tailored to the purpose. In the present application, there is a pressing need to devise a general calculus for composing propositions, computing their values on particular arguments, and inverting their indications to arrive at the sets of things in the universe that are indicated by them.
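The three operations named above, composing propositions, computing their values on particular arguments, and inverting their indications, can be sketched in a few lines of Python, on the assumption of a finite universe of discourse and with propositions modeled as boolean-valued functions. The universe and the propositions below are inventions of the example, not drawn from the text:

```python
# Propositions over a finite universe X, modeled as indicator functions X -> bool.

X = {1, 2, 3, 4, 5, 6}          # a small universe of discourse (illustrative)

def p(x): return x % 2 == 0     # "x is even"
def q(x): return x > 3          # "x exceeds 3"

def conj(f, g):
    # composing propositions: here, logical conjunction
    return lambda x: f(x) and g(x)

def fiber(f, universe):
    # inverting a proposition's indication: the set of things it indicates
    return {x for x in universe if f(x)}

r = conj(p, q)
print(r(4))                 # computing a value on a particular argument -> True
print(sorted(fiber(r, X)))  # the subset of X indicated by r -> [4, 6]
```

The fiber of truth, the inverse image of the value true, is exactly the set of things in the universe that the proposition indicates, which is the sense in which an indicator function mediates between sentences and sets.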

For computational purposes, it is convenient to have a middle ground or an intermediate language for negotiating between the koine of sentences regarded as strings of literal characters and the realm of propositions regarded as objects of logical value, even if this renders it necessary to introduce an artificial medium of exchange between these two domains. If one envisions these computations to be carried out in any organized fashion, and ultimately or partially by means of the familiar sorts of machines, then the strings that express these logical propositions are likely to find themselves parsed into tree-like data structures at some stage of the game. With regard to their abstract structures as graphs, there are several species of graph-theoretic data structures that can be used to accomplish this job in a reasonably effective and efficient way.

Over the course of this project, I plan to use two species of graphs:

  1. Painted And Rooted Cacti (PARCAI).
  2. Painted And Rooted Conifers (PARCOI).

For now, it is enough to discuss the former class of data structures, leaving the consideration of the latter class to a part of the project where their distinctive features are key to developments at that stage. Accordingly, within the context of the current patch of discussion, or until it becomes necessary to attach further notice to the conceivable varieties of parse graphs, the acronym "PARC" is sufficient to indicate the pertinent genus of abstract graphs that are under consideration.
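As a rough computational picture of the genus in question, a painted and rooted cactus might be held in a tree-like record whose nodes carry a set of paints (labels) and a list of subtrees. The following sketch is a minimal illustration under that assumption; the class and field names are inventions of the example, not part of the formal definition:

```python
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class Parc:
    """One node of a painted and rooted cactus (illustrative sketch only):
    'paints' holds the labels attached to this node and 'lobes' holds the
    subtrees hanging from it."""
    paints: Set[str] = field(default_factory=set)
    lobes: List["Parc"] = field(default_factory=list)

    def size(self) -> int:
        """Total number of nodes in the parse structure."""
        return 1 + sum(c.size() for c in self.lobes)

# a root with one painted lobe and one bare lobe
g = Parc(lobes=[Parc(paints={"a", "b"}), Parc()])
print(g.size())   # -> 3
```

The recursive shape of the record is what makes the later parsing and evaluation functions natural to write as structural recursions over the tree.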

By way of making these tasks feasible to carry out on a regular basis, a prospective language designer is required not only to supply a fluent medium for the expression of propositions, but further to accompany the assertions of their sentences with a canonical mechanism for teasing out the fibers of their indicator functions. Accordingly, with regard to a body of conceivable propositions, one needs to furnish a standard array of techniques for following the threads of their indications from their objective universe to their values for the mind and back again, that is, for tracing the clues that sentences provide from the universe of their objects to the signs of their values, and, in turn, from signs to objects. Ultimately, one seeks to render propositions so functional as indicators of sets and so essential for examining the equality of sets that they can constitute a veritable criterion for the practical conceivability of sets. Tackling this task requires me to introduce a number of new definitions and a collection of additional notational devices, to which I now turn.

Depending on whether a formal language is called by the type of sign that makes it up or whether it is named after the type of object that its signs are intended to denote, one may refer to this cactus language as a sentential calculus or as a propositional calculus, respectively.

When the syntactic definition of the language is well enough understood, then the language can begin to acquire a semantic function. In natural circumstances, the syntax and the semantics are likely to be engaged in a process of co-evolution, whether in ontogeny or in phylogeny, that is, the two developments probably form parallel sides of a single bootstrap. But this is not always the easiest way, at least, at first, to formally comprehend the nature of their action or the power of their interaction.

According to the customary mode of formal reconstruction, the language is first presented in terms of its syntax, in other words, as a formal language of strings called sentences, amounting to a particular subset of the possible strings that can be formed on a finite alphabet of signs. A syntactic definition of the cactus language, one that proceeds along purely formal lines, is carried out in the next Subsection. After that, the development of the language's more concrete aspects can be seen as a matter of defining two functions:

  1. The first is a function that takes each sentence of the language into a computational data structure, to be exact, a tree-like parse graph called a painted cactus.
  2. The second is a function that takes each sentence of the language, or its interpolated parse graph, into a logical proposition, in effect, ending up with an indicator function as the object denoted by the sentence.
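A toy realization of the two functions can be sketched for bare parenthesis strings, a simplified stand-in for the full cactus language, which also admits paints and other connectives. The semantic reading assumed below, a blank sentence for true, an empty enclosure for false, juxtaposition for conjunction, and enclosure for negation, is one common convention for such calculi, adopted here purely for illustration; the official semantics is deferred to Subsection 1.3.10.12:

```python
def parse(s):
    """Function 1 (sentence -> parse graph): read a balanced parenthesis
    string into a list-of-lists tree, each inner list being one enclosure."""
    root = []
    stack = []
    node = root
    for ch in s:
        if ch == "(":
            child = []
            node.append(child)
            stack.append(node)
            node = child
        elif ch == ")":
            node = stack.pop()
    if node is not root:
        raise ValueError("unbalanced sentence")
    return root

def value(nodes):
    """Function 2 (parse graph -> proposition's value): a sequence denotes
    the conjunction of its enclosures, and an enclosure negates the value
    of its contents.  The empty sequence is vacuously true."""
    return all(not value(child) for child in nodes)

print(value(parse("")))      # blank sentence -> True
print(value(parse("()")))    # empty enclosure -> False
print(value(parse("(())")))  # double enclosure -> True
```

Composing the two functions takes a sentence all the way to a truth value; keeping the intermediate tree around is what lets the same parse serve both evaluation and the later transformations of the calculus.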

The discussion of syntax brings up a number of associated issues that have to be clarified before going on. These are questions of style, that is, the sort of description, grammar, or theory that one finds available or chooses as preferable for a given language. These issues are discussed in the Subsection after next (Subsection 1.3.10.10).

There is an aspect of syntax that is so schematic in its basic character that it can be conveyed by computational data structures, so algorithmic in its uses that it can be automated by routine mechanisms, and so fixed in its nature that its practical exploitation can be served by the usual devices of computation. Because it involves the transformation of signs, it can be recognized as an aspect of semiotics. Since it can be carried out in abstraction from meaning, it is not up to the level of semantics, much less a complete pragmatics, though it does incline to the pragmatic aspects of computation that are auxiliary to and incidental to the human use of language. Therefore, I refer to this aspect of formal language use as the algorithmics or the mechanics of language processing. A mechanical conversion of the cactus language into its associated data structures is discussed in Subsection 1.3.10.11.

In the usual way of proceeding on formal grounds, meaning is added by giving each grammatical sentence, or each syntactically distinguished string, an interpretation as a logically meaningful sentence, in effect, equipping or providing each abstractly well-formed sentence with a logical proposition for it to denote. A semantic interpretation of the cactus language is carried out in Subsection 1.3.10.12.

The Cactus Language : Syntax

Picture two different configurations of such an irregular shape, superimposed on each other in space, like a double exposure photograph. Of the two images, the only part which coincides is the body. The two different sets of quills stick out into very different regions of space. The objective reality we see from within the first position, seemingly so full and spherical, actually agrees with the shifted reality only in the body of common knowledge. In every direction in which we look at all deeply, the realm of discovered scientific truth could be quite different. Yet in each of those two different situations, we would have thought the world complete, firmly known, and rather round in its penetration of the space of possible knowledge.

— Herbert J. Bernstein, “Idols of Modern Science”, [HJB, 38]

In this Subsection, I describe the syntax of a family of formal languages that I intend to use as a sentential calculus, and thus to interpret for the purpose of reasoning about propositions and their logical relations. In order to carry out the discussion, I need a way of referring to signs as if they were objects like any others, in other words, as the sorts of things that are subject to being named, indicated, described, discussed, and renamed if necessary, that can be placed, arranged, and rearranged within a suitable medium of expression, or else manipulated in the mind, that can be articulated and decomposed into their elementary signs, and that can be strung together in sequences to form complex signs. Signs that have signs as their objects are called higher order signs, and this is a topic that demands an apt formalization, but in due time. The present discussion requires a quicker way to get into this subject, even if it takes informal means that cannot be made absolutely precise.

As a temporary notation, let the relationship between a particular sign \(s\!\) and a particular object \(o\!\), namely, the fact that \(s\!\) denotes \(o\!\) or the fact that \(o\!\) is denoted by \(s\!\), be symbolized in one of the following two ways:

\(\begin{array}{lccc} 1. & s & \rightarrow & o \\ \\ 2. & o & \leftarrow & s \\ \end{array}\)

Now consider the following paradigm:

\(\begin{array}{llccc} 1. & \operatorname{If} & ^{\backprime\backprime}\operatorname{A}^{\prime\prime} & \rightarrow & \operatorname{Ann}, \\ & \operatorname{that~is}, & ^{\backprime\backprime}\operatorname{A}^{\prime\prime} & \operatorname{denotes} & \operatorname{Ann}, \\ & \operatorname{then} & \operatorname{A} & = & \operatorname{Ann} \\ & \operatorname{and} & \operatorname{Ann} & = & \operatorname{A}. \\ & \operatorname{Thus} & ^{\backprime\backprime}\operatorname{Ann}^{\prime\prime} & \rightarrow & \operatorname{A}, \\ & \operatorname{that~is}, & ^{\backprime\backprime}\operatorname{Ann}^{\prime\prime} & \operatorname{denotes} & \operatorname{A}. \\ \end{array}\)

\(\begin{array}{llccc} 2. & \operatorname{If} & \operatorname{Bob} & \leftarrow & ^{\backprime\backprime}\operatorname{B}^{\prime\prime}, \\ & \operatorname{that~is}, & \operatorname{Bob} & \operatorname{is~denoted~by} & ^{\backprime\backprime}\operatorname{B}^{\prime\prime}, \\ & \operatorname{then} & \operatorname{Bob} & = & \operatorname{B} \\ & \operatorname{and} & \operatorname{B} & = & \operatorname{Bob}. \\ & \operatorname{Thus} & \operatorname{B} & \leftarrow & ^{\backprime\backprime}\operatorname{Bob}^{\prime\prime}, \\ & \operatorname{that~is}, & \operatorname{B} & \operatorname{is~denoted~by} & ^{\backprime\backprime}\operatorname{Bob}^{\prime\prime}. \\ \end{array}\)

When I say that the sign "blank" denotes the sign " ", it means that the string of characters inside the first pair of quotation marks can be used as another name for the string of characters inside the second pair of quotes. In other words, "blank" is a higher order sign whose object is " ", and the string of five characters inside the first pair of quotation marks is a sign at a higher level of signification than the string of one character inside the second pair of quotation marks. This relationship can be abbreviated in either one of the following ways:

\(\begin{array}{lll} ^{\backprime\backprime}\operatorname{~}^{\prime\prime} & \leftarrow & ^{\backprime\backprime}\operatorname{blank}^{\prime\prime} \\ \\ ^{\backprime\backprime}\operatorname{blank}^{\prime\prime} & \rightarrow & ^{\backprime\backprime}\operatorname{~}^{\prime\prime} \\ \end{array}\)

Using the raised dot "\(\cdot\)" as a sign to mark the articulation of a quoted string into a sequence of possibly shorter quoted strings, and thus to mark the concatenation of a sequence of quoted strings into a possibly larger quoted string, one can write:

\(\begin{array}{lllll} ^{\backprime\backprime}\operatorname{~}^{\prime\prime} & \leftarrow & ^{\backprime\backprime}\operatorname{blank}^{\prime\prime} & = & ^{\backprime\backprime}\operatorname{b}^{\prime\prime} \, \cdot \, ^{\backprime\backprime}\operatorname{l}^{\prime\prime} \, \cdot \, ^{\backprime\backprime}\operatorname{a}^{\prime\prime} \, \cdot \, ^{\backprime\backprime}\operatorname{n}^{\prime\prime} \, \cdot \, ^{\backprime\backprime}\operatorname{k}^{\prime\prime} \\ \end{array}\)

This usage allows us to refer to the blank as a type of character, and also to refer to any blank we choose as a token of this type, referring to either of them in a marked way, but without the use of quotation marks, as I just did. Now, since a blank is just what the name "blank" names, it is possible to represent the denotation of the sign " " by the name "blank" in the form of an identity between the named objects, thus:

\(\begin{array}{lll} ^{\backprime\backprime}\operatorname{~}^{\prime\prime} & = & \operatorname{blank} \\ \end{array}\)

With these kinds of identity in mind, it is possible to extend the use of the "\(\cdot\)" sign to mark the articulation of either named or quoted strings into both named and quoted strings. For example:

\(\begin{array}{lclcl} ^{\backprime\backprime}\operatorname{~~}^{\prime\prime} & = & ^{\backprime\backprime}\operatorname{~}^{\prime\prime} \, \cdot \, ^{\backprime\backprime}\operatorname{~}^{\prime\prime} & = & \operatorname{blank} \, \cdot \, \operatorname{blank} \\ \\ ^{\backprime\backprime}\operatorname{~blank}^{\prime\prime} & = & ^{\backprime\backprime}\operatorname{~}^{\prime\prime} \, \cdot \, ^{\backprime\backprime}\operatorname{blank}^{\prime\prime} & = & \operatorname{blank} \, \cdot \, ^{\backprime\backprime}\operatorname{blank}^{\prime\prime} \\ \\ ^{\backprime\backprime}\operatorname{blank~}^{\prime\prime} & = & ^{\backprime\backprime}\operatorname{blank}^{\prime\prime} \, \cdot \, ^{\backprime\backprime}\operatorname{~}^{\prime\prime} & = & ^{\backprime\backprime}\operatorname{blank}^{\prime\prime} \, \cdot \, \operatorname{blank} \end{array}\)
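These identities can be checked mechanically. In the following sketch, quotation is modeled by Python string literals and the raised dot by string concatenation; the variable name `blank` is my own device for the named (unquoted) string.

```python
# Model: a quoted string is a Python str literal; the raised dot is "+".
# The variable `blank` plays the role of the name "blank", whose object
# is the one-character space string.
blank = " "

# The name "blank" articulates into its five letters.
assert "blank" == "b" + "l" + "a" + "n" + "k"

# The three mixed identities displayed above.
assert "  " == " " + " " == blank + blank
assert " blank" == " " + "blank" == blank + "blank"
assert "blank " == "blank" + " " == "blank" + blank
```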

A few definitions from formal language theory are required at this point.

An alphabet is a finite set of signs, typically, \(\mathfrak{A} = \{ \mathfrak{a}_1, \ldots, \mathfrak{a}_n \}.\)

A string over an alphabet \(\mathfrak{A}\) is a finite sequence of signs from \(\mathfrak{A}.\)

The length of a string is just its length as a sequence of signs.

The empty string is the unique sequence of length 0. It is sometimes denoted by an empty pair of quotation marks, \(^{\backprime\backprime\prime\prime},\) but more often by one of the Greek letters \(\varepsilon\!\) (epsilon) or \(\lambda\!\) (lambda).

A sequence of length \(k > 0\!\) is typically presented in the concatenated forms:

\(s_1 s_2 \ldots s_{k-1} s_k\!\)

or

\(s_1 \cdot s_2 \cdot \ldots \cdot s_{k-1} \cdot s_k\)

with \(s_j \in \mathfrak{A}\) for all \(j = 1 \ldots k.\)

Two alternative notations are often useful:

\(\varepsilon\!\) = \({}^{\backprime\backprime\prime\prime}\!\) = the empty string.
\(\underline\varepsilon\!\) = \(\{ \varepsilon \}\!\) = the language consisting of a single empty string.

The Kleene star \(\mathfrak{A}^*\) of an alphabet \(\mathfrak{A}\) is the set of all strings over \(\mathfrak{A}.\) In particular, \(\mathfrak{A}^*\) includes among its elements the empty string \(\varepsilon.\)

The Kleene plus \(\mathfrak{A}^+\) of an alphabet \(\mathfrak{A}\) is the set of all positive length strings over \(\mathfrak{A},\) in other words, everything in \(\mathfrak{A}^*\) but the empty string.

A formal language \(\mathfrak{L}\) over an alphabet \(\mathfrak{A}\) is a subset of \(\mathfrak{A}^*.\) In brief, \(\mathfrak{L} \subseteq \mathfrak{A}^*.\) If \(s\!\) is a string over \(\mathfrak{A}\) and if \(s\!\) is an element of \(\mathfrak{L},\) then it is customary to call \(s\!\) a sentence of \(\mathfrak{L}.\) Thus, a formal language \(\mathfrak{L}\) is defined by specifying its elements, which amounts to saying what it means to be a sentence of \(\mathfrak{L}.\)
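To make these definitions concrete, the following sketch enumerates a finite slice of \(\mathfrak{A}^*\) in Python. The function name `kleene_star_upto` is my own; \(\mathfrak{A}^*\) itself is infinite, so only strings up to a bounded length are generated.

```python
from itertools import product

def kleene_star_upto(alphabet, max_len):
    """Enumerate the strings over `alphabet` of length 0..max_len,
    a finite slice of the Kleene star A*."""
    strings = []
    for k in range(max_len + 1):
        for signs in product(sorted(alphabet), repeat=k):
            strings.append("".join(signs))
    return strings

A = {"a", "b"}
star = kleene_star_upto(A, 2)

assert "" in star                     # the empty string belongs to A*
assert len(star) == 1 + 2 + 4         # lengths 0, 1, 2 over a 2-sign alphabet

plus = [s for s in star if s != ""]   # A+ = A* minus the empty string
assert len(plus) == len(star) - 1

L = {"a", "ab", "ba"}                 # a formal language is any subset of A*
assert L <= set(star)
```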

One last device turns out to be useful in this connection. If \(s\!\) is a string that ends with a sign \(t,\!\) then \(s \cdot t^{-1}\) is the string that results by deleting from \(s\!\) the terminal \(t.\!\)

In this context, I make the following distinction:

  1. To delete an appearance of a sign is to replace it with an appearance of the empty string "".
  2. To erase an appearance of a sign is to replace it with an appearance of the blank symbol " ".

A token is a particular appearance of a sign.
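The deletion operator \(s \cdot t^{-1}\) and the delete/erase distinction can be sketched as follows; the function names are mine, not the text's.

```python
def discatenate(s, t):
    """Return s · t⁻¹, that is, s with its terminal sign t deleted."""
    assert s.endswith(t), "s must end with the sign t"
    return s[: len(s) - len(t)]

def delete(s, i):
    """Replace the sign at position i with an appearance of the empty string."""
    return s[:i] + s[i + 1:]

def erase(s, i):
    """Replace the sign at position i with an appearance of the blank symbol."""
    return s[:i] + " " + s[i + 1:]

assert discatenate("(a,b)", ")") == "(a,b"
assert delete("ab", 0) == "b"     # deletion shortens the string
assert erase("ab", 0) == " b"     # erasure preserves the string's length
```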

The informal mechanisms that have been illustrated in the immediately preceding discussion are enough to equip the rest of this discussion with a moderately exact description of the so-called cactus language that I intend to use in both my conceptual and my computational representations of the minimal formal logical system that is variously known to sundry communities of interpretation as propositional logic, sentential calculus, or more inclusively, zeroth order logic (ZOL).

The painted cactus language \(\mathfrak{C}\) is actually a parameterized family of languages, consisting of one language \(\mathfrak{C}(\mathfrak{P})\) for each set \(\mathfrak{P}\) of paints.

The alphabet \(\mathfrak{A} = \mathfrak{M} \cup \mathfrak{P}\) is the disjoint union of two sets of symbols:

  1. \(\mathfrak{M}\) is the alphabet of measures, the set of punctuation marks, or the collection of syntactic constants that is common to all of the languages \(\mathfrak{C}(\mathfrak{P}).\) This set of signs is given as follows:

    \(\begin{array}{lccccccccccc} \mathfrak{M} & = & \{ & \mathfrak{m}_1 & , & \mathfrak{m}_2 & , & \mathfrak{m}_3 & , & \mathfrak{m}_4 & \} \\ & = & \{ & ^{\backprime\backprime} \, \operatorname{~} \, ^{\prime\prime} & , & ^{\backprime\backprime} \, \operatorname{(} \, ^{\prime\prime} & , & ^{\backprime\backprime} \, \operatorname{,} \, ^{\prime\prime} & , & ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime} & \} \\ & = & \{ & \operatorname{blank} & , & \operatorname{links} & , & \operatorname{comma} & , & \operatorname{right} & \} \\ \end{array}\)

  2. \(\mathfrak{P}\) is the palette, the alphabet of paints, or the collection of syntactic variables that is peculiar to the language \(\mathfrak{C}(\mathfrak{P}).\) This set of signs is given as follows:

    \(\mathfrak{P} = \{ \mathfrak{p}_j : j \in J \}.\)

The easiest way to define the language \(\mathfrak{C}(\mathfrak{P})\!\) is to indicate the general sorts of operations that suffice to construct the greater share of its sentences from the specified few of its sentences that require a special election. In accord with this manner of proceeding, I introduce a family of operations on strings of \(\mathfrak{A}^*\!\) that are called syntactic connectives. If the strings on which they operate are exclusively sentences of \(\mathfrak{C}(\mathfrak{P}),\!\) then these operations are tantamount to sentential connectives, and if the syntactic sentences, considered as abstract strings of meaningless signs, are given a semantics in which they denote propositions, considered as indicator functions over some universe, then these operations amount to propositional connectives.

Rather than presenting the most concise description of these languages right from the beginning, it serves comprehension to develop a picture of their forms in gradual stages, starting from the most natural ways of viewing their elements, if somewhat at a distance, and working through the most easily grasped impressions of their structures, if not always the sharpest acquaintances with their details.

The first step is to define two sets of basic operations on strings of \(\mathfrak{A}^*.\)

  1. The concatenation of one string \(s_1\!\) is just the string \(s_1.\!\)

    The concatenation of two strings \(s_1, s_2\!\) is the string \({s_1 \cdot s_2}.\!\)

    The concatenation of the \(k\!\) strings \((s_j)_{j = 1}^k\!\) is the string of the form \({s_1 \cdot \ldots \cdot s_k}.\!\)

  2. The surcatenation of one string \(s_1\!\) is the string \(^{\backprime\backprime} \, \operatorname{(} \, ^{\prime\prime} \, \cdot \, s_1 \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime}.\)

    The surcatenation of two strings \(s_1, s_2\!\) is \(^{\backprime\backprime} \, \operatorname{(} \, ^{\prime\prime} \, \cdot \, s_1 \, \cdot \, ^{\backprime\backprime} \, \operatorname{,} \, ^{\prime\prime} \, \cdot \, s_2 \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime}.\)

    The surcatenation of the \(k\!\) strings \((s_j)_{j = 1}^k\) is the string of the form \(^{\backprime\backprime} \, \operatorname{(} \, ^{\prime\prime} \, \cdot \, s_1 \, \cdot \, ^{\backprime\backprime} \, \operatorname{,} \, ^{\prime\prime} \, \cdot \, \ldots \, \cdot \, ^{\backprime\backprime} \, \operatorname{,} \, ^{\prime\prime} \, \cdot \, s_k \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime}.\)
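Under the assumption that strings are modeled as Python strings, the two operations just defined can be sketched directly:

```python
def conc(*strings):
    """Concatenation of k strings: s1 · s2 · ... · sk."""
    return "".join(strings)

def surc(*strings):
    """Surcatenation of k strings: "(" · s1 · "," · ... · "," · sk · ")"."""
    return "(" + ",".join(strings) + ")"

assert conc("a") == "a"                 # concatenation of one string
assert conc("a", "b", "c") == "abc"
assert surc("a") == "(a)"               # surcatenation of one string
assert surc("a", "b", "c") == "(a,b,c)"
```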

These definitions can be made a little more succinct by defining the following sorts of generic operators on strings:

  1. The concatenation \(\operatorname{Conc}_{j=1}^k\) of the sequence of \(k\!\) strings \((s_j)_{j=1}^k\) is defined recursively as follows:
    1. \(\operatorname{Conc}_{j=1}^1 s_j \ = \ s_1.\)
    2. For \(\ell > 1,\!\)

      \(\operatorname{Conc}_{j=1}^\ell s_j \ = \ \operatorname{Conc}_{j=1}^{\ell - 1} s_j \, \cdot \, s_\ell.\)

  2. The surcatenation \(\operatorname{Surc}_{j=1}^k\) of the sequence of \(k\!\) strings \((s_j)_{j=1}^k\) is defined recursively as follows:
    1. \(\operatorname{Surc}_{j=1}^1 s_j \ = \ ^{\backprime\backprime} \, \operatorname{(} \, ^{\prime\prime} \, \cdot \, s_1 \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime}.\)
    2. For \(\ell > 1,\!\)

      \(\operatorname{Surc}_{j=1}^\ell s_j \ = \ \operatorname{Surc}_{j=1}^{\ell - 1} s_j \, \cdot \, ( \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime} \, )^{-1} \, \cdot \, ^{\backprime\backprime} \, \operatorname{,} \, ^{\prime\prime} \, \cdot \, s_\ell \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime}.\)
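The recursions can be transcribed almost verbatim as a sketch; here the deletion of the terminal \(^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime}\) is rendered by slicing off the last character.

```python
def conc_rec(strings):
    """Conc, by the recursion: Conc^1 = s1; Conc^l = Conc^(l-1) · sl."""
    if len(strings) == 1:
        return strings[0]
    return conc_rec(strings[:-1]) + strings[-1]

def surc_rec(strings):
    """Surc, by the recursion: Surc^1 = "(" · s1 · ")"; for l > 1,
    Surc^l = Surc^(l-1) · ")"⁻¹ · "," · sl · ")"."""
    if len(strings) == 1:
        return "(" + strings[0] + ")"
    prev = surc_rec(strings[:-1])
    return prev[:-1] + "," + strings[-1] + ")"   # delete ")", add "," · sl · ")"

assert conc_rec(["a", "b", "c"]) == "abc"
assert surc_rec(["a"]) == "(a)"
assert surc_rec(["a", "b", "c"]) == "(a,b,c)"
```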

The definitions of these syntactic operations can now be organized in a slightly better fashion by making a few additional conventions and auxiliary definitions.

  1. The conception of the \(k\!\)-place concatenation operation can be extended to include its natural prequel:

    \(\operatorname{Conc}^0 \ = \ ^{\backprime\backprime\prime\prime}\)  =  the empty string.

    Next, the construction of the \(k\!\)-place concatenation can be broken into stages by means of the following conceptions:

    1. The precatenation \(\operatorname{Prec} (s_1, s_2)\) of the two strings \(s_1, s_2\!\) is the string that is defined as follows:

      \(\operatorname{Prec} (s_1, s_2) \ = \ s_1 \cdot s_2.\)

    2. The concatenation of the sequence of \(k\!\) strings \(s_1, \ldots, s_k\!\) can now be defined as an iterated precatenation over the sequence of \(k+1\!\) strings that begins with the string \(s_0 = \operatorname{Conc}^0 \, = \, ^{\backprime\backprime\prime\prime}\) and then continues on through the other \(k\!\) strings:

      1. \(\operatorname{Conc}_{j=0}^0 s_j \ = \ \operatorname{Conc}^0 \ = \ ^{\backprime\backprime\prime\prime}.\)

      2. For \(\ell > 0,\!\)

        \(\operatorname{Conc}_{j=1}^\ell s_j \ = \ \operatorname{Prec}(\operatorname{Conc}_{j=0}^{\ell - 1} s_j, s_\ell).\)

  2. The conception of the \(k\!\)-place surcatenation operation can be extended to include its natural prequel:

    \(\operatorname{Surc}^0 \ = \ ^{\backprime\backprime} \, \operatorname{()} \, ^{\prime\prime}.\)

    Finally, the construction of the \(k\!\)-place surcatenation can be broken into stages by means of the following conceptions:

    1. A subclause in \(\mathfrak{A}^*\) is a string that ends with a \(^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime}.\)

    2. The subcatenation \(\operatorname{Subc} (s_1, s_2)\) of a subclause \(s_1\!\) by a string \(s_2\!\) is the string that is defined as follows:

      \(\operatorname{Subc} (s_1, s_2) \ = \ s_1 \, \cdot \, ( \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime} \, )^{-1} \, \cdot \, ^{\backprime\backprime} \, \operatorname{,} \, ^{\prime\prime} \, \cdot \, s_2 \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime}.\)

    3. The surcatenation of the \(k\!\) strings \(s_1, \ldots, s_k\!\) can now be defined as an iterated subcatenation over the sequence of \(k+1\!\) strings that starts with the string \(s_0 \ = \ \operatorname{Surc}^0 \ = \ ^{\backprime\backprime} \, \operatorname{()} \, ^{\prime\prime}\) and then continues on through the other \(k\!\) strings:

      1. \(\operatorname{Surc}_{j=0}^0 s_j \ = \ \operatorname{Surc}^0 \ = \ ^{\backprime\backprime} \, \operatorname{()} \, ^{\prime\prime}.\)

      2. For \(\ell > 0,\!\)

        \(\operatorname{Surc}_{j=1}^\ell s_j \ = \ \operatorname{Subc}(\operatorname{Surc}_{j=0}^{\ell - 1} s_j, s_\ell).\)

Notice that the expressions \(\operatorname{Conc}_{j=0}^0 s_j\) and \(\operatorname{Surc}_{j=0}^0 s_j\) are defined in such a way that the respective operators \(\operatorname{Conc}^0\) and \(\operatorname{Surc}^0\) simply ignore, in the manner of constants, whatever sequences of strings \(s_j\!\) may be listed as their ostensible arguments.
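The staged constructions can be sketched in the same spirit, seeding the iterations with the constants \(\operatorname{Conc}^0\) and \(\operatorname{Surc}^0\); again the function names are my own.

```python
def prec(s1, s2):
    """Precatenation: Prec(s1, s2) = s1 · s2."""
    return s1 + s2

def subc(s1, s2):
    """Subcatenation of a subclause s1 (a string ending in ")") by s2:
    delete the terminal ")", then append "," · s2 · ")"."""
    assert s1.endswith(")"), "s1 must be a subclause"
    return s1[:-1] + "," + s2 + ")"

def conc_staged(strings):
    """Concatenation as an iterated precatenation, seeded with Conc^0 = ""."""
    result = ""                       # Conc^0, the empty string
    for s in strings:
        result = prec(result, s)
    return result

assert conc_staged([]) == ""          # Conc^0 ignores an empty argument list
assert conc_staged(["a", "b", "c"]) == "abc"
assert subc("(a)", "b") == "(a,b)"    # one subcatenation step
assert subc(subc("(a)", "b"), "c") == "(a,b,c)"
```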

Having defined the basic operations of concatenation and surcatenation on arbitrary strings, in effect, giving them operational meaning for the all-inclusive language \(\mathfrak{L} = \mathfrak{A}^*,\) it is time to adjoin the notion of a more discriminating grammaticality, in other words, a more properly restrictive concept of a sentence.

If \(\mathfrak{L}\) is an arbitrary formal language over an alphabet of the sort that we are talking about, that is, an alphabet of the form \(\mathfrak{A} = \mathfrak{M} \cup \mathfrak{P},\) then there are a number of basic structural relations that can be defined on the strings of \(\mathfrak{L}.\)

1. \(s\!\) is the concatenation of \(s_1\!\) and \(s_2\!\) in \(\mathfrak{L}\) if and only if
  \(s_1\!\) is a sentence of \(\mathfrak{L},\) \(s_2\!\) is a sentence of \(\mathfrak{L},\) and
  \(s = s_1 \cdot s_2.\)
2. \(s\!\) is the concatenation of the \(k\!\) strings \(s_1, \ldots, s_k\!\) in \(\mathfrak{L},\)
  if and only if \(s_j\!\) is a sentence of \(\mathfrak{L},\) for all \(j = 1 \ldots k,\) and
  \(s = \operatorname{Conc}_{j=1}^k s_j = s_1 \cdot \ldots \cdot s_k.\)
3. \(s\!\) is the discatenation of \(s_1\!\) by \(t\!\) if and only if
  \(s_1\!\) is a sentence of \(\mathfrak{L},\) \(t\!\) is an element of \(\mathfrak{A},\) and
  \(s_1 = s \cdot t.\)
  When this is the case, one more commonly writes:
  \(s = s_1 \cdot t^{-1}.\)
4. \(s\!\) is a subclause of \(\mathfrak{L}\) if and only if
  \(s\!\) is a sentence of \(\mathfrak{L}\) and \(s\!\) ends with a \(^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime}.\)
5. \(s\!\) is the subcatenation of \(s_1\!\) by \(s_2\!\) if and only if
  \(s_1\!\) is a subclause of \(\mathfrak{L},\) \(s_2\!\) is a sentence of \(\mathfrak{L},\) and
  \(s = s_1 \, \cdot \, ( \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime} \, )^{-1} \, \cdot \, ^{\backprime\backprime} \, \operatorname{,} \, ^{\prime\prime} \, \cdot \, s_2 \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime}.\)
6. \(s\!\) is the surcatenation of the \(k\!\) strings \(s_1, \ldots, s_k\!\) in \(\mathfrak{L},\)
  if and only if \(s_j\!\) is a sentence of \(\mathfrak{L},\) for all \({j = 1 \ldots k},\!\) and
  \(s \ = \ \operatorname{Surc}_{j=1}^k s_j \ = \ ^{\backprime\backprime} \, \operatorname{(} \, ^{\prime\prime} \, \cdot \, s_1 \, \cdot \, ^{\backprime\backprime} \, \operatorname{,} \, ^{\prime\prime} \, \cdot \, \ldots \, \cdot \, ^{\backprime\backprime} \, \operatorname{,} \, ^{\prime\prime} \, \cdot \, s_k \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime}.\)

The converses of these decomposition relations are tantamount to the corresponding forms of composition operations, making it possible for these complementary forms of analysis and synthesis to articulate the structures of strings and sentences in two directions.

The painted cactus language with paints in the set \(\mathfrak{P} = \{ p_j : j \in J \}\) is the formal language \(\mathfrak{L} = \mathfrak{C} (\mathfrak{P}) \subseteq \mathfrak{A}^* = (\mathfrak{M} \cup \mathfrak{P})^*\) that is defined as follows:

PC 1. The blank symbol \(m_1\!\) is a sentence.
PC 2. The paint \(p_j\!\) is a sentence, for each \(j\!\) in \(J.\!\)
PC 3. \(\operatorname{Conc}^0\) and \(\operatorname{Surc}^0\) are sentences.
PC 4. For each positive integer \(k,\!\)
  if \(s_1, \ldots, s_k\!\) are sentences,
  then \(\operatorname{Conc}_{j=1}^k s_j\) is a sentence,
  and \(\operatorname{Surc}_{j=1}^k s_j\) is a sentence.

As usual, saying that \(s\!\) is a sentence is just a conventional way of stating that the string \(s\!\) belongs to the relevant formal language \(\mathfrak{L}.\) An individual sentence of \(\mathfrak{C} (\mathfrak{P}),\!\) for any palette \(\mathfrak{P},\) is referred to as a painted and rooted cactus expression (PARCE) on the palette \(\mathfrak{P},\) or a cactus expression, for short. Anticipating the forms that the parse graphs of these PARCE's will take, to be described in the next Subsection, the language \(\mathfrak{L} = \mathfrak{C} (\mathfrak{P})\) is also described as the set \(\operatorname{PARCE} (\mathfrak{P})\) of PARCE's on the palette \(\mathfrak{P},\) more generically, as the PARCE's that constitute the language \(\operatorname{PARCE}.\)

A bare PARCE, a bit loosely referred to as a bare cactus expression, is a PARCE on the empty palette \(\mathfrak{P} = \varnothing.\) A bare PARCE is a sentence in the bare cactus language, \(\mathfrak{C}^0 = \mathfrak{C} (\varnothing) = \operatorname{PARCE}^0 = \operatorname{PARCE} (\varnothing).\) This set of strings, regarded as a formal language in its own right, is a sublanguage of every cactus language \(\mathfrak{C} (\mathfrak{P}).\) A bare cactus expression is commonly encountered in practice when one has occasion to start with an arbitrary PARCE and then finds a reason to delete or to erase all of its paints.
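Reading the principles PC 1&ndash;4 as membership conditions, a small recognizer for \(\mathfrak{C} (\mathfrak{P})\) can be sketched as follows. It assumes, purely for illustration, that each paint is a single character; the function name `is_sentence` is my own.

```python
def is_sentence(s, paints):
    """Check membership in the cactus language C(P), per PC 1-4:
    a sentence is any concatenation of blanks, paints, and well-formed
    surcatenations "(" s1 "," ... "," sk ")" whose parts are sentences."""
    i = 0
    while i < len(s):
        c = s[i]
        if c == " " or c in paints:      # PC 1 and PC 2
            i += 1
        elif c == "(":                   # a surcatenation: find its matching ")"
            depth, j, parts, start = 1, i + 1, [], i + 1
            while j < len(s) and depth > 0:
                if s[j] == "(":
                    depth += 1
                elif s[j] == ")":
                    depth -= 1
                elif s[j] == "," and depth == 1:
                    parts.append(s[start:j])
                    start = j + 1
                j += 1
            if depth > 0:                # unmatched "("
                return False
            parts.append(s[start:j - 1])
            if not all(is_sentence(p, paints) for p in parts):
                return False
            i = j
        else:                            # a sign outside M and P
            return False
    return True                          # includes Conc^0, the empty string

P = {"a", "b", "c"}
assert is_sentence("", P)                # PC 3: Conc^0 is a sentence
assert is_sentence("()", P)              # PC 3: Surc^0 is a sentence
assert is_sentence("(a,(b),c)", P)       # PC 4: nested surcatenations
assert not is_sentence("(a", P)          # unbalanced parenthesis
assert not is_sentence("x", P)           # x is not a paint here
```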

Only one thing remains to cast this description of the cactus language into a form that is commonly found acceptable. As presently formulated, the principle PC 4 appears to be attempting to define an infinite number of new concepts all in a single step; at least, it appears to invoke the indefinitely long sequences of operators, \(\operatorname{Conc}^k\) and \(\operatorname{Surc}^k,\) for all \(k > 0.\!\) As a general rule, one prefers to have an effectively finite description of conceptual objects, and this means restricting the description to a finite number of schematic principles, each of which involves a finite number of schematic effects, that is, a finite number of schemata that explicitly relate conditions to results.

A start in this direction, taking steps toward an effective description of the cactus language, a finitary conception of its membership conditions, and a bounded characterization of a typical sentence in the language, can be made by recasting the present description of these expressions into the pattern of what is called, more or less roughly, a formal grammar.

A notation in the style of \(S :> T\!\) is now introduced, to be read in any of the following ways, among many others:

\(S\ \operatorname{covers}\ T\)
\(S\ \operatorname{governs}\ T\)
\(S\ \operatorname{rules}\ T\)
\(S\ \operatorname{subsumes}\ T\)
\(S\ \operatorname{types~over}\ T\)

The form \(S :> T\!\) is here recruited for polymorphic employment in at least the following types of roles:

  1. To signify that an individually named or quoted string \(T\!\) is being typed as a sentence \(S\!\) of the language of interest \(\mathfrak{L}.\)
  2. To express the fact or to make the assertion that each member of a specified set of strings \(T \subseteq \mathfrak{A}^*\) also belongs to the syntactic category \(S,\!\) the one that qualifies a string as being a sentence in the relevant formal language \(\mathfrak{L}.\)
  3. To specify the intension or to signify the intention that every string that fits the conditions of the abstract type \(T\!\) must also fall under the grammatical heading of a sentence, as indicated by the type \(S,\!\) all within the target language \(\mathfrak{L}.\)

In these types of situation, the letter \(^{\backprime\backprime} S \, ^{\prime\prime}\) that signifies the type of a sentence in the language of interest is called the initial symbol or the sentence symbol of a candidate formal grammar for the language, while any number of letters like \(^{\backprime\backprime} T \, ^{\prime\prime},\) signifying other types of strings that are necessary to a reasonable account or a rational reconstruction of the sentences that belong to the language, are collectively referred to as intermediate symbols.

Combining the singleton set \(\{ ^{\backprime\backprime} S \, ^{\prime\prime} \}\) whose sole member is the initial symbol with the set \(\mathfrak{Q}\) that assembles together all of the intermediate symbols results in the set \(\{ ^{\backprime\backprime} S \, ^{\prime\prime} \} \cup \mathfrak{Q}\) of non-terminal symbols. Completing the package, the alphabet \(\mathfrak{A}\) of the language is also known as the set of terminal symbols. In this discussion, I will adopt the convention that \(\mathfrak{Q}\) is the set of intermediate symbols, but I will often use \(q\!\) as a typical variable that ranges over all of the non-terminal symbols, \(q \in \{ ^{\backprime\backprime} S \, ^{\prime\prime} \} \cup \mathfrak{Q}.\) Finally, it is convenient to refer to all of the symbols in \(\{ ^{\backprime\backprime} S \, ^{\prime\prime} \} \cup \mathfrak{Q} \cup \mathfrak{A}\) as the augmented alphabet of the prospective grammar for the language, and accordingly to describe the strings in \(( \{ ^{\backprime\backprime} S \, ^{\prime\prime} \} \cup \mathfrak{Q} \cup \mathfrak{A} )^*\) as the augmented strings, in effect, expressing the forms that are superimposed on a language by one of its conceivable grammars. In certain settings it becomes desirable to separate the augmented strings that contain the symbol \(^{\backprime\backprime} S \, ^{\prime\prime}\) from all other sorts of augmented strings. In these situations the strings in the disjoint union \(\{ ^{\backprime\backprime} S \, ^{\prime\prime} \} \cup (\mathfrak{Q} \cup \mathfrak{A} )^*\) are known as the sentential forms of the associated grammar.

In forming a grammar for a language, statements of the form \(W :> W',\!\) where \(W\!\) and \(W'\!\) are augmented strings or sentential forms of specified types that depend on the style of the grammar that is being sought, are variously known as characterizations, covering rules, productions, rewrite rules, subsumptions, transformations, or typing rules. These are collected together into a set \(\mathfrak{K}\) that serves to complete the definition of the formal grammar in question.

Correlative with the use of this notation, an expression of the form \(T <: S,\!\) read to say that \(T\!\) is covered by \(S,\!\) can be interpreted to say that \(T\!\) is of the type \(S.\!\) Depending on the context, this can be taken in either one of two ways:

  1. Treating \(T\!\) as a string variable, it means that the individual string \(T\!\) is typed as \(S.\!\)
  2. Treating \(T\!\) as a type name, it means that any instance of the type \(T\!\) also falls under the type \(S.\!\)

In accordance with these interpretations, an expression of the form \(t <: T\!\) can be read in all of the ways that one typically reads an expression of the form \(t : T.\!\)

There are several abuses of notation that are commonly tolerated in the use of covering relations. The worst offense is that of allowing symbols to stand equivocally either for individual strings or else for their types. There is a measure of consistency to this practice, considering the fact that perfectly individual entities are rarely if ever grasped by means of signs and finite expressions, which entails that every appearance of an apparent token is only a type of more particular tokens, and means in the end that there is never any recourse but to the sort of discerning interpretation that can decide just how each sign is intended. In view of all this, I continue to permit expressions like \(t <: T\!\) and \(T <: S,\!\) where any of the symbols \(t, T, S\!\) can be taken to signify either the tokens or the subtypes of their covering types.

Note. For some time to come in the discussion that follows, although I will continue to focus on the cactus language as my principal object example, my more general purpose will be to develop the subject matter of formal languages and grammars. I will do this by taking up a particular method of stepwise refinement and using it to extract a rigorous formal grammar for the cactus language, starting with little more than a rough description of the target language and applying a systematic analysis to develop a sequence of increasingly effective and exact approximations to the desired grammar.

Employing the notion of a covering relation, it becomes possible to redescribe the cactus language \(\mathfrak{L} = \mathfrak{C} (\mathfrak{P})\) in the following ways.

Grammar 1

Grammar 1 is something of a misnomer. It is nowhere near exemplifying any kind of standard form, and it is only intended as a starting point for the initiation of more respectable grammars. Such as it is, it uses the terminal alphabet \(\mathfrak{A} = \mathfrak{M} \cup \mathfrak{P}\) that comes with the territory of the cactus language \(\mathfrak{C} (\mathfrak{P}),\!\) it specifies \(\mathfrak{Q} = \varnothing,\) in other words, it employs no intermediate symbols, and it embodies the covering set \(\mathfrak{K}\) as listed in the following display.


\(\mathfrak{C} (\mathfrak{P}) : \text{Grammar 1}\!\)

\(\mathfrak{Q} = \varnothing\)

\(\begin{array}{rcll} 1. & S & :> & m_1 \ = \ ^{\backprime\backprime} \operatorname{~} ^{\prime\prime} \\ 2. & S & :> & p_j, \, \text{for each} \, j \in J \\ 3. & S & :> & \operatorname{Conc}^0 \ = \ ^{\backprime\backprime\prime\prime} \\ 4. & S & :> & \operatorname{Surc}^0 \ = \ ^{\backprime\backprime} \, \operatorname{()} \, ^{\prime\prime} \\ 5. & S & :> & S^* \\ 6. & S & :> & ^{\backprime\backprime} \, \operatorname{(} \, ^{\prime\prime} \, \cdot \, S \, \cdot \, ( \, ^{\backprime\backprime} \operatorname{,} ^{\prime\prime} \, \cdot \, S \, )^* \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime} \\ \end{array}\)


In this formulation, the last two lines specify that:

  1. The concept of a sentence in \(\mathfrak{L}\) covers any concatenation of sentences in \(\mathfrak{L},\) in effect, any number of freely chosen sentences that are available to be concatenated one after another.
  2. The concept of a sentence in \(\mathfrak{L}\) covers any surcatenation of sentences in \(\mathfrak{L},\) in effect, any string that opens with a \(^{\backprime\backprime} \, \operatorname{(} \, ^{\prime\prime},\) continues with a sentence, possibly empty, follows with a finite number of phrases of the form \(^{\backprime\backprime} \operatorname{,} ^{\prime\prime} \, \cdot \, S,\) and closes with a \(^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime}.\)

This appears to be just about the most concise description of the cactus language \(\mathfrak{C} (\mathfrak{P})\) that one can imagine, but there are a couple of problems that are commonly felt to afflict this style of presentation and to make it less than completely acceptable. Briefly stated, these problems turn on the following properties of the presentation:

  1. The invocation of the Kleene star operation is not reduced to a manifestly finitary form.
  2. The type \(S\!\) that indicates a sentence is allowed to cover not only itself but also the empty string.

I will discuss these issues at first in general, and especially in regard to how the two features interact with one another, and then I return to address in further detail the questions that they engender on their individual bases.

In the process of developing a grammar for a language, it is possible to notice a number of organizational, pragmatic, and stylistic questions, whose moment-to-moment answers appear to decide the ongoing direction of the grammar that develops, and whose considerations work in tandem to determine, or at least to influence, the sort of grammar that turns out. The issues that I can see arising at this point I give the following prospective names, putting off the discussion of their natures and the treatment of their details to the points in the development of the present example where they evolve their full import.

  1. The degree of intermediate organization in a grammar.
  2. The distinction between empty and significant strings, and thus the distinction between empty and significant types of strings.
  3. The principle of intermediate significance. This is a constraint on the grammar that arises from considering the interaction of the first two issues.

In responding to these issues, it is advisable at first to proceed in a stepwise fashion, all the better to accommodate the chances of pursuing a series of parallel developments in the grammar, to allow for the possibility of reversing many steps in its development, indeed, to take into account the near certain necessity of having to revisit, to revise, and to reverse many decisions about how to proceed toward an optimal description or a satisfactory grammar for the language. Doing all this means exploring the effects of various alterations and innovations as independently of each other as possible.

The degree of intermediate organization in a grammar is measured by how many intermediate symbols it has and by how they interact with each other by means of its productions. With respect to this issue, Grammar 1 has no intermediate symbols at all, \(\mathfrak{Q} = \varnothing,\) and therefore remains at an ostensibly trivial degree of intermediate organization. Some additions to the list of intermediate symbols are practically obligatory in order to arrive at any reasonable grammar at all, other inclusions appear to have a more optional character, though obviously useful from the standpoints of clarity and ease of comprehension.

One of the troubles that is perceived to affect Grammar 1 is that it wastes so much of the available potential for efficient description in recounting over and over again the simple fact that the empty string is present in the language. This arises in part from the statement that \(S :> S^*,\!\) which implies that:

\(\begin{array}{lcccccccccccc} S & :> & S^* & = & \underline\varepsilon & \cup & S & \cup & S \cdot S & \cup & S \cdot S \cdot S & \cup & \ldots \\ \end{array}\)

There is nothing wrong with the more expansive side of the covered equation, since it follows straightforwardly from the definition of the Kleene star operation, but the covering statement to the effect that \(S :> S^*\!\) is not a very productive piece of information, in the sense of telling very much about the language that falls under the type of a sentence \(S.\!\) In particular, since it implies that \(S :> \underline\varepsilon,\) and since \(\underline\varepsilon \cdot \mathfrak{L} \, = \, \mathfrak{L} \cdot \underline\varepsilon \, = \, \mathfrak{L},\) for any formal language \(\mathfrak{L},\) the empty string \(\varepsilon\!\) is counted over and over in every term of the union, and every non-empty sentence under \(S\!\) appears again and again in every term of the union that follows its initial appearance. As a result, this style of characterization has to be classified as true but not very informative. If at all possible, one prefers to partition the language of interest into a disjoint union of subsets, thereby accounting for each sentence under its proper term, a term whose place in the sum serves as a useful parameter of the sentence's character or complexity. In general, this form of description is not always possible to achieve, but it is usually worth the trouble to actualize it whenever it is.

Suppose that one tries to deal with this problem by eliminating each use of the Kleene star operation, by reducing it to a purely finitary set of steps, or by finding an alternative way to cover the sublanguage that it is used to generate. This amounts, in effect, to recognizing a type, a complex process that involves the following steps:

  1. Noticing a category of strings that is generated by iteration or recursion.
  2. Acknowledging the fact that it needs to be covered by a non-terminal symbol.
  3. Making a note of it by instituting an explicitly-named grammatical category.

In sum, one introduces a non-terminal symbol for each type of sentence and each part of speech or sentential component that is generated by means of iteration or recursion under the ruling constraints of the grammar. In order to do this one needs to analyze the iteration of each grammatical operation in a way that is analogous to a mathematically inductive definition, but further in a way that is not forced explicitly to recognize a distinct and separate type of expression merely to account for and to recount every increment in the parameter of iteration.

Returning to the case of the cactus language, the process of recognizing an iterative type or a recursive type can be illustrated in the following way. The operative phrases in the simplest sort of recursive definition are its initial part and its generic part. For the cactus language \(\mathfrak{C} (\mathfrak{P}),\!\) one has the following definitions of concatenation as iterated precatenation and of surcatenation as iterated subcatenation, respectively:

\(\begin{array}{llll} 1. & \operatorname{Conc}_{j=1}^0 & = & ^{\backprime\backprime\prime\prime} \\ \\ & \operatorname{Conc}_{j=1}^k S_j & = & \operatorname{Prec} (\operatorname{Conc}_{j=1}^{k-1} S_j, S_k) \\ \\ 2. & \operatorname{Surc}_{j=1}^0 & = & ^{\backprime\backprime} \, \operatorname{()} \, ^{\prime\prime} \\ \\ & \operatorname{Surc}_{j=1}^k S_j & = & \operatorname{Subc} (\operatorname{Surc}_{j=1}^{k-1} S_j, S_k) \\ \\ \end{array}\)
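As a concrete check on these recursive definitions, the two iterated operations can be sketched in Python, taking ordinary strings as sentences. This is a minimal sketch, and the function names `conc` and `surc` simply transliterate the operator names above:

```python
def conc(sentences):
    """Conc_{j=1}^k S_j: iterated precatenation, giving S_1 . ... . S_k.
    The base case k = 0 yields the empty string."""
    result = ""
    for s in sentences:
        result = result + s  # one step of Prec(Conc so far, S_k)
    return result

def surc(sentences):
    """Surc_{j=1}^k S_j: "(" . S_1 . "," . ... . "," . S_k . ")".
    The base case k = 0 yields "()"."""
    return "(" + ",".join(sentences) + ")"
```

So, for example, `conc([])` returns the empty string, `surc([])` returns `"()"`, and `surc(["a", "b"])` returns `"(a,b)"`, matching the initial and generic parts of the definitions.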

In order to transform these recursive definitions into grammar rules, one introduces a new pair of intermediate symbols, \(\operatorname{Conc}\) and \(\operatorname{Surc},\) corresponding to the operations that share the same names but ignoring the inflexions of their individual parameters \(j\!\) and \(k.\!\) Recognizing the type of a sentence by means of the initial symbol \(S\!\) and interpreting \(\operatorname{Conc}\) and \(\operatorname{Surc}\) as names for the types of strings that are generated by concatenation and by surcatenation, respectively, one arrives at the following transformation of the ruling operator definitions into the form of covering grammar rules:

\(\begin{array}{llll} 1. & \operatorname{Conc} & :> & ^{\backprime\backprime\prime\prime} \\ \\ & \operatorname{Conc} & :> & \operatorname{Conc} \cdot S \\ \\ 2. & \operatorname{Surc} & :> & ^{\backprime\backprime} \, \operatorname{()} \, ^{\prime\prime} \\ \\ & \operatorname{Surc} & :> & ^{\backprime\backprime} \, \operatorname{(} \, ^{\prime\prime} \, \cdot \, S \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime} \\ \\ & \operatorname{Surc} & :> & \operatorname{Surc} \, \cdot \, ( \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime} \, )^{-1} \, \cdot \, ^{\backprime\backprime} \operatorname{,} ^{\prime\prime} \, \cdot \, S \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime} \end{array}\)

As given, this particular fragment of the intended grammar contains a couple of features that are desirable to amend.

  1. Given the covering \(S :> \operatorname{Conc},\) the covering rule \(\operatorname{Conc} :> \operatorname{Conc} \cdot S\) says no more than the covering rule \(\operatorname{Conc} :> S \cdot S.\) Consequently, all of the information contained in these two covering rules is already covered by the statement that \(S :> S \cdot S.\)
  2. A grammar rule that invokes a notion of decatenation, deletion, erasure, or any other sort of retrograde production is frequently considered to be lacking in elegance, and there is a style of critique for grammars that holds it preferable to avoid these types of operations if it is at all possible to do so. Accordingly, contingent on the prescriptions of the informal rule in question, and pursuing the stylistic dictates that are writ in the realm of its aesthetic regime, it becomes necessary for us to backtrack a little bit, to temporarily withdraw the suggestion of employing these elliptical types of operations, but without, of course, eliding the record of doing so.

====Grammar 2====

One way to analyze the surcatenation of any number of sentences is to introduce an auxiliary type of string, not in general a sentence, but a proper component of any sentence that is formed by surcatenation. Doing this brings one to the following definition:

A tract is a concatenation of a finite sequence of sentences, with a literal comma \(^{\backprime\backprime} \operatorname{,} ^{\prime\prime}\) interpolated between each pair of adjacent sentences. Thus, a typical tract \(T\!\) takes the form:

\(\begin{array}{lllllllllll} T & = & S_1 & \cdot & ^{\backprime\backprime} \operatorname{,} ^{\prime\prime} & \cdot & \ldots & \cdot & ^{\backprime\backprime} \operatorname{,} ^{\prime\prime} & \cdot & S_k \\ \end{array}\)

A tract must be distinguished from the abstract sequence of sentences, \(S_1, \ldots, S_k,\!\) where the commas that appear to come to mind, as if being called up to separate the successive sentences of the sequence, remain as partially abstract conceptions, or as signs that retain a disengaged status on the borderline between the text and the mind. In effect, the types of commas that appear to follow in the abstract sequence continue to exist as conceptual abstractions and fail to be cognized in a wholly explicit fashion, whether as concrete tokens in the object language, or as marks in the strings of signs that are able to engage one's parsing attention.

Returning to the case of the painted cactus language \(\mathfrak{L} = \mathfrak{C} (\mathfrak{P}),\) it is possible to put the currently assembled pieces of a grammar together in the light of the presently adopted canons of style, to arrive at a more refined analysis of the fact that the concept of a sentence covers any concatenation of sentences and any surcatenation of sentences, and so to obtain the following form of a grammar:


\(\mathfrak{C} (\mathfrak{P}) : \text{Grammar 2}\!\)

\(\mathfrak{Q} = \{ \, ^{\backprime\backprime} T \, ^{\prime\prime} \, \}\)

\(\begin{array}{rcll} 1. & S & :> & \varepsilon \\ 2. & S & :> & m_1 \\ 3. & S & :> & p_j, \, \text{for each} \, j \in J \\ 4. & S & :> & S \, \cdot \, S \\ 5. & S & :> & ^{\backprime\backprime} \, \operatorname{(} \, ^{\prime\prime} \, \cdot \, T \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime} \\ 6. & T & :> & S \\ 7. & T & :> & T \, \cdot \, ^{\backprime\backprime} \operatorname{,} ^{\prime\prime} \, \cdot \, S \\ \end{array}\)


In this rendition, a string of type \(T\!\) is not in general a sentence itself but a proper part of speech, that is, a strictly lesser component of a sentence in any suitable ordering of sentences and their components. In order to see how the grammatical category \(T\!\) gets off the ground, that is, to detect its minimal strings and to discover how its ensuing generations get started from these, it is useful to observe that the covering rule \(T :> S\!\) means that \(T\!\) inherits all of the initial conditions of \(S,\!\) namely, \(T \, :> \, \varepsilon, m_1, p_j.\) In accord with these simple beginnings it comes to parse that the rule \(T \, :> \, T \, \cdot \, ^{\backprime\backprime} \operatorname{,} ^{\prime\prime} \, \cdot \, S,\) with the substitutions \(T = \varepsilon\) and \(S = \varepsilon\) on the covered side of the rule, bears the germinal implication that \(T \, :> \, ^{\backprime\backprime} \operatorname{,} ^{\prime\prime}.\)
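The germinal implication just noted can be traced mechanically. The following minimal sketch treats a sentential form as a string and a covering step as a leftmost replacement of a non-terminal; the helper `step` is mine, introduced only to make the trace explicit:

```python
def step(form, lhs, rhs):
    """Apply one covering rule lhs :> rhs at the leftmost occurrence of lhs."""
    return form.replace(lhs, rhs, 1)

# Derive "," under T in Grammar 2:
form = "T"
form = step(form, "T", "T,S")  # rule 7: T :> T . "," . S
form = step(form, "T", "")     # rules 6 and 1: T :> S :> ε
form = step(form, "S", "")     # rule 1: S :> ε
assert form == ","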

Grammar 2 achieves a portion of its success through a higher degree of intermediate organization. Roughly speaking, the level of organization can be seen as reflected in the cardinality of the intermediate alphabet \(\mathfrak{Q} = \{ \, ^{\backprime\backprime} T \, ^{\prime\prime} \, \}\) but it is clearly not explained by this simple circumstance alone, since it is taken for granted that the intermediate symbols serve a purpose, a purpose that is easily recognizable but that may not be so easy to pin down and to specify exactly. Nevertheless, it is worth the trouble of exploring this aspect of organization and this direction of development a little further.

====Grammar 3====

Although it is not strictly necessary to do so, it is possible to organize the materials of our developing grammar in a slightly better fashion by recognizing two recurrent types of strings that appear in the typical cactus expression. In doing this, one arrives at the following two definitions:

A rune is a string of blanks and paints concatenated together. Thus, a typical rune \(R\!\) is a string over \(\{ m_1 \} \cup \mathfrak{P},\) possibly the empty string:

\(R\ \in\ ( \{ m_1 \} \cup \mathfrak{P} )^*\)

When there is no possibility of confusion, the letter \(^{\backprime\backprime} R \, ^{\prime\prime}\) can be used either as a string variable that ranges over the set of runes or else as a type name for the class of runes. The latter reading amounts to the enlistment of a fresh intermediate symbol, \(^{\backprime\backprime} R \, ^{\prime\prime} \in \mathfrak{Q},\) as a part of a new grammar for \(\mathfrak{C} (\mathfrak{P}).\) In effect, \(^{\backprime\backprime} R \, ^{\prime\prime}\) affords a grammatical recognition for any rune that forms a part of a sentence in \(\mathfrak{C} (\mathfrak{P}).\) In situations where these variant usages are likely to be confused, the types of strings can be indicated by means of expressions like \(r <: R\!\) and \(W <: R.\!\)

A foil is a string of the form \({}^{\backprime\backprime} \, \operatorname{(} \, ^{\prime\prime} \, \cdot \, T \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime},\!\) where \(T\!\) is a tract. Thus, a typical foil \(F\!\) has the form:

\(\begin{array}{*{15}{l}} F & = & ^{\backprime\backprime} \, \operatorname{(} \, ^{\prime\prime} & \cdot & S_1 & \cdot & ^{\backprime\backprime} \operatorname{,} ^{\prime\prime} & \cdot & \ldots & \cdot & ^{\backprime\backprime} \operatorname{,} ^{\prime\prime} & \cdot & S_k & \cdot & ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime} \\ \end{array}\)

This is just the surcatenation of the sentences \(S_1, \ldots, S_k.\!\) Given the possibility that this sequence of sentences is empty, and thus that the tract \(T\!\) is the empty string, the minimum foil \(F\!\) is the expression \(^{\backprime\backprime} \, \operatorname{()} \, ^{\prime\prime}.\) Explicitly marking each foil \(F\!\) that is embodied in a cactus expression is tantamount to recognizing another intermediate symbol, \(^{\backprime\backprime} F \, ^{\prime\prime} \in \mathfrak{Q},\) further articulating the structures of sentences and expanding the grammar for the language \(\mathfrak{C} (\mathfrak{P}).\!\) All of the same remarks about the versatile uses of the intermediate symbols, as string variables and as type names, apply again to the letter \(^{\backprime\backprime} F \, ^{\prime\prime}.\)


\(\mathfrak{C} (\mathfrak{P}) : \text{Grammar 3}\!\)

\(\mathfrak{Q} = \{ \, ^{\backprime\backprime} F \, ^{\prime\prime}, \, ^{\backprime\backprime} R \, ^{\prime\prime}, \, ^{\backprime\backprime} T \, ^{\prime\prime} \, \}\)

\(\begin{array}{rcll} 1. & S & :> & R \\ 2. & S & :> & F \\ 3. & S & :> & S \, \cdot \, S \\ 4. & R & :> & \varepsilon \\ 5. & R & :> & m_1 \\ 6. & R & :> & p_j, \, \text{for each} \, j \in J \\ 7. & R & :> & R \, \cdot \, R \\ 8. & F & :> & ^{\backprime\backprime} \, \operatorname{(} \, ^{\prime\prime} \, \cdot \, T \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime} \\ 9. & T & :> & S \\ 10. & T & :> & T \, \cdot \, ^{\backprime\backprime} \operatorname{,} ^{\prime\prime} \, \cdot \, S \\ \end{array}\!\)


In Grammar 3, the first three Rules say that a sentence (a string of type \(S\!\)), is a rune (a string of type \(R\!\)), a foil (a string of type \(F\!\)), or an arbitrary concatenation of strings of these two types. Rules 4 through 7 specify that a rune \(R\!\) is an empty string \(\varepsilon,\) a blank symbol \(m_1,\!\) a paint \(p_j,\!\) or any concatenation of strings of these three types. Rule 8 characterizes a foil \(F\!\) as a string of the form \({}^{\backprime\backprime} \, \operatorname{(} \, ^{\prime\prime} \, \cdot \, T \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime},\!\) where \(T\!\) is a tract. The last two Rules say that a tract \(T\!\) is either a sentence \(S\!\) or else the concatenation of a tract, a comma, and a sentence, in that order.

At this point in the succession of grammars for \(\mathfrak{C} (\mathfrak{P}),\!\) the explicit uses of indefinite iterations, like the Kleene star operator, are now completely reduced to finite forms of concatenation, but the problems that some styles of analysis have with allowing non-terminal symbols to cover both themselves and the empty string are still present.

Any degree of reflection on this difficulty raises the general question: What is a practical strategy for accounting for the empty string in the organization of any formal language that counts it among its sentences? One answer that presents itself is this: If the empty string belongs to a formal language, it suffices to count it once at the beginning of the formal account that enumerates its sentences and then to move on to more interesting materials.

Returning to the case of the cactus language \(\mathfrak{C} (\mathfrak{P}),\!\) in other words, the formal language \(\operatorname{PARCE}\!\) of painted and rooted cactus expressions, it serves the purpose of efficient accounting to partition the language into the following couple of sublanguages:

  1. The emptily painted and rooted cactus expressions make up the language \(\operatorname{EPARCE}\) that consists of a single empty string as its only sentence. In short:

    \(\operatorname{EPARCE} \ = \ \underline\varepsilon \ = \ \{ \varepsilon \}\)

  2. The significantly painted and rooted cactus expressions make up the language \(\operatorname{SPARCE}\) that consists of everything else, namely, all of the non-empty strings in the language \(\operatorname{PARCE}.\) In sum:

    \(\operatorname{SPARCE} \ = \ \operatorname{PARCE} \setminus \underline\varepsilon\)

As a result of marking the distinction between empty and significant sentences, that is, by categorizing each of these three classes of strings as an entity unto itself and by conceptualizing the whole of its membership as falling under a distinctive symbol, one obtains an equation of sets that connects the three languages being marked:

\(\operatorname{SPARCE} \ = \ \operatorname{PARCE} \ - \ \operatorname{EPARCE}\)

In sum, one has the disjoint union:

\(\operatorname{PARCE} \ = \ \operatorname{EPARCE} \ \cup \ \operatorname{SPARCE}\)
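On any finite sample of expressions this bookkeeping is easy to verify. A quick sketch, in which the sample set is illustrative rather than exhaustive:

```python
# A small finite sample of PARCE over one paint "a".
parce_sample = {"", "a", "()", "(a)", "(a,a)", "a()"}

eparce = {s for s in parce_sample if s == ""}   # EPARCE = {ε}
sparce = {s for s in parce_sample if s != ""}   # SPARCE = PARCE \ EPARCE

assert eparce | sparce == parce_sample          # PARCE = EPARCE ∪ SPARCE
assert eparce & sparce == set()                 # and the union is disjoint
```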

For brevity in the present case, and to serve as a generic device in any similar array of situations, let \(S\!\) be the type of an arbitrary sentence, possibly empty, and let \(S'\!\) be the type of a specifically non-empty sentence. In addition, let \(\underline\varepsilon\) be the type of the empty sentence, in effect, the language \(\underline\varepsilon = \{ \varepsilon \}\) that contains a single empty string, and let a plus sign \(^{\backprime\backprime} + ^{\prime\prime}\) signify a disjoint union of types. In the most general type of situation, where the type \(S\!\) is permitted to include the empty string, one notes the following relation among types:

\(S \ = \ \underline\varepsilon \ + \ S'\)

With the distinction between empty and significant expressions in mind, I return to the grasp of the cactus language \(\mathfrak{L} = \mathfrak{C} (\mathfrak{P}) = \operatorname{PARCE} (\mathfrak{P})\) that is afforded by Grammar 2, and, taking that as a point of departure, explore other avenues of possible improvement in the comprehension of these expressions. In order to observe the effects of this alteration as clearly as possible, in isolation from any other potential factors, it is useful to strip away the higher levels of intermediate organization that are present in Grammar 3, and start again with a single intermediate symbol, as used in Grammar 2. One way of carrying out this strategy leads on to a grammar of the variety that will be articulated next.

====Grammar 4====

If one imposes the distinction between empty and significant types on each non-terminal symbol in Grammar 2, then the non-terminal symbols \(^{\backprime\backprime} S \, ^{\prime\prime}\) and \(^{\backprime\backprime} T \, ^{\prime\prime}\) give rise to the expanded set of non-terminal symbols \(^{\backprime\backprime} S \, ^{\prime\prime}, \, ^{\backprime\backprime} S' \, ^{\prime\prime}, \, ^{\backprime\backprime} T \, ^{\prime\prime}, \, ^{\backprime\backprime} T' \, ^{\prime\prime},\) leaving the last three of these to form the new intermediate alphabet. Grammar 4 has the intermediate alphabet \(\mathfrak{Q} \, = \, \{ \, ^{\backprime\backprime} S' \, ^{\prime\prime}, \, ^{\backprime\backprime} T \, ^{\prime\prime}, \, ^{\backprime\backprime} T' \, ^{\prime\prime} \, \},\) with the set \(\mathfrak{K}\) of covering rules as listed in the next display.


\(\mathfrak{C} (\mathfrak{P}) : \text{Grammar 4}\!\)

\(\mathfrak{Q} = \{ \, ^{\backprime\backprime} S' \, ^{\prime\prime}, \, ^{\backprime\backprime} T \, ^{\prime\prime}, \, ^{\backprime\backprime} T' \, ^{\prime\prime} \, \}\)

\(\begin{array}{rcll} 1. & S & :> & \varepsilon \\ 2. & S & :> & S' \\ 3. & S' & :> & m_1 \\ 4. & S' & :> & p_j, \, \text{for each} \, j \in J \\ 5. & S' & :> & ^{\backprime\backprime} \, \operatorname{(} \, ^{\prime\prime} \, \cdot \, T \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime} \\ 6. & S' & :> & S' \, \cdot \, S' \\ 7. & T & :> & \varepsilon \\ 8. & T & :> & T' \\ 9. & T' & :> & T \, \cdot \, ^{\backprime\backprime} \operatorname{,} ^{\prime\prime} \, \cdot \, S \\ \end{array}\)


In this version of a grammar for \(\mathfrak{L} = \mathfrak{C} (\mathfrak{P}),\) the intermediate type \(T\!\) is partitioned as \(T = \underline\varepsilon + T',\) thereby parsing the intermediate symbol \(T\!\) in parallel fashion with the division of its overlying type as \(S = \underline\varepsilon + S'.\) This is an option that I will set aside for now, leaving it open to consider at a later point. Thus, it suffices to give a brief discussion of what it involves, in the process of moving on to its chief alternative.

There does not appear to be anything radically wrong with trying this approach to types. It is reasonable and consistent in its underlying principle, and it provides a rational and a homogeneous strategy toward all parts of speech, but it does require an extra amount of conceptual overhead, in that every non-trivial type has to be split into two parts and comprehended in two stages. Consequently, in view of the largely practical difficulties of making the requisite distinctions for every intermediate symbol, it is a common convention, whenever possible, to restrict intermediate types to covering exclusively non-empty strings.

For the sake of future reference, it is convenient to refer to this restriction on intermediate symbols as the intermediate significance constraint. It can be stated in a compact form as a condition on the relations between non-terminal symbols \(q \in \{ \, ^{\backprime\backprime} S \, ^{\prime\prime} \, \} \cup \mathfrak{Q}\) and sentential forms \(W \in \{ \, ^{\backprime\backprime} S \, ^{\prime\prime} \, \} \cup (\mathfrak{Q} \cup \mathfrak{A})^*.\)


\(\text{Condition On Intermediate Significance}\!\)

\(\begin{array}{lccc} \text{If} & q & :> & W \\ \text{and} & W & = & \varepsilon \\ \text{then} & q & = & ^{\backprime\backprime} S \, ^{\prime\prime} \\ \end{array}\)


If this is beginning to sound like a monotone condition, then it is not absurd to sharpen the resemblance and render the likeness more acute. This is done by declaring a couple of ordering relations, denoting them under variant interpretations by the same sign, \(^{\backprime\backprime}\!< \, ^{\prime\prime}.\)

  1. The ordering \(^{\backprime\backprime}\!< \, ^{\prime\prime}\) on the set of non-terminal symbols, \(q \in \{ \, ^{\backprime\backprime} S \, ^{\prime\prime} \, \} \cup \mathfrak{Q},\) ordains the initial symbol \(^{\backprime\backprime} S \, ^{\prime\prime}\) to be strictly prior to every intermediate symbol. This is tantamount to the axiom that \(^{\backprime\backprime} S \, ^{\prime\prime} < q,\) for all \(q \in \mathfrak{Q}.\)
  2. The ordering \(^{\backprime\backprime}\!< \, ^{\prime\prime}\) on the collection of sentential forms, \(W \in \{ \, ^{\backprime\backprime} S \, ^{\prime\prime} \, \} \cup (\mathfrak{Q} \cup \mathfrak{A})^*,\) ordains the empty string to be strictly minor to every other sentential form. This is stipulated in the axiom that \(\varepsilon < W,\) for every non-empty sentential form \(W.\!\)

Given these two orderings, the constraint in question on intermediate significance can be stated as follows:


\(\text{Condition On Intermediate Significance}\!\)

\(\begin{array}{lccc} \text{If} & q & :> & W \\ \text{and} & q & > & ^{\backprime\backprime} S \, ^{\prime\prime} \\ \text{then} & W & > & \varepsilon \\ \end{array}\)


Achieving a grammar that respects this convention typically requires a more detailed account of the initial setting of a type, both with regard to the type of context that incites its appearance and also with respect to the minimal strings that arise under the type in question. In order to find covering productions that satisfy the intermediate significance condition, one must be prepared to consider a wider variety of calling contexts or inciting situations that can be noted to surround each recognized type, and also to enumerate a larger number of the smallest cases that can be observed to fall under each significant type.

====Grammar 5====

With the foregoing array of considerations in mind, one is gradually led to a grammar for \(\mathfrak{L} = \mathfrak{C} (\mathfrak{P})\) in which all of the covering productions have one of the following two forms:

\(\begin{array}{ccll} S & :> & \varepsilon & \\ q & :> & W, & \text{with} \ q \in \{ \, ^{\backprime\backprime} S \, ^{\prime\prime} \, \} \cup \mathfrak{Q} \ \text{and} \ W \in (\mathfrak{Q} \cup \mathfrak{A})^+ \\ \end{array}\)

A grammar that fits into this mold is called a context-free grammar. The first type of rewrite rule is referred to as a special production, while the second type of rewrite rule is called an ordinary production. An ordinary derivation is one that employs only ordinary productions.

In ordinary productions, those that have the form \(q :> W,\!\) the replacement string \(W\!\) is never the empty string, and so the lengths of the augmented strings or the sentential forms that follow one another in an ordinary derivation never decrease at any stage of the process, up to and including the terminal string that is finally generated by the grammar. This feature is known as the non-contracting property of productions, derivations, and grammars. A grammar is said to have this property if all of its covering productions, with the possible exception of \(S :> \varepsilon,\) are non-contracting. In particular, context-free grammars are special cases of non-contracting grammars.

The presence of the non-contracting property within a formal grammar makes the length of the augmented string available as a parameter that can figure into mathematical inductions and motivate recursive proofs, and this handle on the generative process makes it possible to establish the kinds of results about the generated language that are not easy to achieve in more general cases, nor by any other means even in these special cases.
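The non-contracting property is easy to state as a predicate on productions. A brief sketch in Python, encoding each production as a pair of a left-hand symbol and a right-hand list of symbols (this encoding is mine):

```python
def is_non_contracting(productions, start="S"):
    """True if every production q :> W has a non-empty replacement W,
    with the single special production S :> ε permitted as an exception."""
    return all(rhs or lhs == start for lhs, rhs in productions)

# A tiny abstract grammar in this encoding: S :> ε, S :> "a", S :> S . S.
tiny = [("S", []), ("S", ["a"]), ("S", ["S", "S"])]

assert is_non_contracting(tiny)
assert not is_non_contracting(tiny + [("T", [])])  # an ε-rule on an intermediate symbol violates it
```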

Grammar 5 is a context-free grammar for the painted cactus language that uses \(\mathfrak{Q} = \{ \, ^{\backprime\backprime} S' \, ^{\prime\prime}, \, ^{\backprime\backprime} T \, ^{\prime\prime} \, \},\) with \(\mathfrak{K}\) as listed in the next display.


\(\mathfrak{C} (\mathfrak{P}) : \text{Grammar 5}\!\)

\(\mathfrak{Q} = \{ \, ^{\backprime\backprime} S' \, ^{\prime\prime}, \, ^{\backprime\backprime} T \, ^{\prime\prime} \, \}\)

\(\begin{array}{rcll} 1. & S & :> & \varepsilon \\ 2. & S & :> & S' \\ 3. & S' & :> & m_1 \\ 4. & S' & :> & p_j, \, \text{for each} \, j \in J \\ 5. & S' & :> & S' \, \cdot \, S' \\ 6. & S' & :> & ^{\backprime\backprime} \, \operatorname{()} \, ^{\prime\prime} \\ 7. & S' & :> & ^{\backprime\backprime} \, \operatorname{(} \, ^{\prime\prime} \, \cdot \, T \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime} \\ 8. & T & :> & ^{\backprime\backprime} \, \operatorname{,} \, ^{\prime\prime} \\ 9. & T & :> & S' \\ 10. & T & :> & T \, \cdot \, ^{\backprime\backprime} \, \operatorname{,} \, ^{\prime\prime} \\ 11. & T & :> & T \, \cdot \, ^{\backprime\backprime} \, \operatorname{,} \, ^{\prime\prime} \, \cdot \, S' \\ \end{array}\)
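Since Grammar 5 is context-free, membership in the language it generates can be tested by a short recursive-descent recognizer. The following is a sketch, assuming a one-character blank \(^{\backprime\backprime} \texttt{~} ^{\prime\prime}\) for \(m_1\!\) and the letters "a", "b" as the paints; these concrete choices are mine:

```python
BLANK = " "
PAINTS = set("ab")

def is_sentence(s):
    """Recognize sentences of C(P): runs of blanks and paints, freely
    concatenated with parenthesized, comma-separated tracts of sentences."""
    pos = 0

    def sentence():
        nonlocal pos
        while pos < len(s):
            c = s[pos]
            if c == BLANK or c in PAINTS:
                pos += 1
            elif c == "(":
                pos += 1
                sentence()
                while pos < len(s) and s[pos] == ",":  # tract: S "," ... "," S
                    pos += 1
                    sentence()
                if pos == len(s) or s[pos] != ")":
                    raise SyntaxError("unmatched (")
                pos += 1
            else:
                return  # a "," or ")" belongs to an enclosing tract

    try:
        sentence()
        return pos == len(s)
    except SyntaxError:
        return False

assert is_sentence("") and is_sentence("()") and is_sentence("(a,(b))a")
assert not is_sentence("(a") and not is_sentence("a,b")
```

Note that the empty sentence is accepted both on its own and inside a tract, so strings like "(,)" are recognized, in keeping with the rule \(T :> \, ^{\backprime\backprime} \operatorname{,} ^{\prime\prime}.\)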


Finally, it is worth trying to bring together the advantages of these diverse styles of grammar, to whatever extent they are compatible. To do this, a prospective grammar must be capable of maintaining a high level of intermediate organization, like that arrived at in Grammar 2, while respecting the principle of intermediate significance, and thus accumulating all the benefits of the context-free format in Grammar 5. A plausible synthesis of most of these features is given in Grammar 6.

====Grammar 6====

Grammar 6 has the intermediate alphabet \(\mathfrak{Q} = \{ \, ^{\backprime\backprime} S' \, ^{\prime\prime}, \, ^{\backprime\backprime} F \, ^{\prime\prime}, \, ^{\backprime\backprime} R \, ^{\prime\prime}, \, ^{\backprime\backprime} T \, ^{\prime\prime} \, \},\) with the production set \(\mathfrak{K}\) as listed in the next display.


\({\mathfrak{C} (\mathfrak{P}) : \text{Grammar 6}}\!\)

\(\mathfrak{Q} = \{ \, ^{\backprime\backprime} S' \, ^{\prime\prime}, \, ^{\backprime\backprime} F \, ^{\prime\prime}, \, ^{\backprime\backprime} R \, ^{\prime\prime}, \, ^{\backprime\backprime} T \, ^{\prime\prime} \, \}\!\)

\(\begin{array}{rcll} 1. & S & :> & \varepsilon \\ 2. & S & :> & S' \\ 3. & S' & :> & R \\ 4. & S' & :> & F \\ 5. & S' & :> & S' \, \cdot \, S' \\ 6. & R & :> & m_1 \\ 7. & R & :> & p_j, \, \text{for each} \, j \in J \\ 8. & R & :> & R \, \cdot \, R \\ 9. & F & :> & ^{\backprime\backprime} \, \operatorname{()} \, ^{\prime\prime} \\ 10. & F & :> & ^{\backprime\backprime} \, \operatorname{(} \, ^{\prime\prime} \, \cdot \, T \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime} \\ 11. & T & :> & ^{\backprime\backprime} \, \operatorname{,} \, ^{\prime\prime} \\ 12. & T & :> & S' \\ 13. & T & :> & T \, \cdot \, ^{\backprime\backprime} \, \operatorname{,} \, ^{\prime\prime} \\ 14. & T & :> & T \, \cdot \, ^{\backprime\backprime} \, \operatorname{,} \, ^{\prime\prime} \, \cdot \, S' \\ \end{array}\)


The preceding development provides a typical example of how an initially effective and conceptually succinct description of a formal language, but one that is terse to the point of allowing its prospective interpreter to waste exorbitant amounts of energy in trying to unravel its implications, can be converted into a form that is more efficient from the operational point of view, even if slightly less elegant.

The basic idea behind all of this machinery remains the same: Besides the select body of formulas that are introduced as boundary conditions, it merely institutes the following general rule:

\(\operatorname{If}\) the strings \(S_1, \ldots, S_k\!\) are sentences,
\(\operatorname{Then}\) their concatenation in the form
  \(\operatorname{Conc}_{j=1}^k S_j \ = \ S_1 \, \cdot \, \ldots \, \cdot \, S_k\)
  is a sentence,
\(\operatorname{And}\) their surcatenation in the form
  \(\operatorname{Surc}_{j=1}^k S_j \ = \ ^{\backprime\backprime} \, \operatorname{(} \, ^{\prime\prime} \, \cdot \, S_1 \, \cdot \, ^{\backprime\backprime} \, \operatorname{,} \, ^{\prime\prime} \, \cdot \, \ldots \, \cdot \, ^{\backprime\backprime} \, \operatorname{,} \, ^{\prime\prime} \, \cdot \, S_k \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime}\)
  is a sentence.
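The rule just stated can be used to enumerate sentences mechanically, by closing a seed set under the two operations. A minimal sketch follows, with the construction bounded to at most two arguments per operation and a fixed number of rounds; these bounds are mine, imposed only to keep the result finite:

```python
from itertools import product

def close(seed, rounds=2):
    """Close a seed set of sentences under concatenation and surcatenation,
    restricting both operations to k <= 2 arguments per round."""
    sentences = set(seed) | {"", "()"}          # boundary cases: ε and Surc of nothing
    for _ in range(rounds):
        new = set()
        for s1, s2 in product(sentences, repeat=2):
            new.add(s1 + s2)                    # Conc_{j=1}^2 S_j
            new.add("(" + s1 + ")")             # Surc_{j=1}^1 S_j
            new.add("(" + s1 + "," + s2 + ")")  # Surc_{j=1}^2 S_j
        sentences |= new
    return sentences

generated = close({"a"}, rounds=1)
assert {"a", "aa", "(a)", "(a,a)", "((),a)"} <= generated
```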

Generalities About Formal Grammars

It is fitting to wrap up the foregoing developments by summarizing the notion of a formal grammar that appeared to evolve in the present case. For the sake of future reference and the chance of a wider application, it is also useful to try to extract the scheme of a formalization that potentially holds for any formal language. The following presentation of the notion of a formal grammar is adapted, with minor modifications, from the treatment in (DDQ, 60–61).

A formal grammar \(\mathfrak{G}\) is given by a four-tuple \(\mathfrak{G} = ( \, ^{\backprime\backprime} S \, ^{\prime\prime}, \, \mathfrak{Q}, \, \mathfrak{A}, \, \mathfrak{K} \, )\) that takes the following form of description:

  1. \(^{\backprime\backprime} S \, ^{\prime\prime}\) is the initial, special, start, or sentence symbol. Since the letter \(^{\backprime\backprime} S \, ^{\prime\prime}\) serves this function only in a special setting, its employment in this role need not create any confusion with its other typical uses as a string variable or as a sentence variable.
  2. \(\mathfrak{Q} = \{ q_1, \ldots, q_m \}\) is a finite set of intermediate symbols, all distinct from \(^{\backprime\backprime} S \, ^{\prime\prime}.\)
  3. \(\mathfrak{A} = \{ a_1, \dots, a_n \}\) is a finite set of terminal symbols, also known as the alphabet of \(\mathfrak{G},\) all distinct from \(^{\backprime\backprime} S \, ^{\prime\prime}\) and disjoint from \(\mathfrak{Q}.\) Depending on the particular conception of the language \(\mathfrak{L}\) that is covered, generated, governed, or ruled by the grammar \(\mathfrak{G},\) that is, whether \(\mathfrak{L}\) is conceived to be a set of words, sentences, paragraphs, or more extended structures of discourse, it is usual to describe \(\mathfrak{A}\) as the alphabet, lexicon, vocabulary, liturgy, or phrase book of both the grammar \(\mathfrak{G}\) and the language \(\mathfrak{L}\) that it regulates.
  4. \(\mathfrak{K}\) is a finite set of characterizations. Depending on how they come into play, these are variously described as covering rules, formations, productions, rewrite rules, subsumptions, transformations, or typing rules.

To describe the elements of \(\mathfrak{K}\) it helps to define some additional terms:

  1. The symbols in \(\{ \, ^{\backprime\backprime} S \, ^{\prime\prime} \, \} \cup \mathfrak{Q} \cup \mathfrak{A}\) form the augmented alphabet of \(\mathfrak{G}.\)
  2. The symbols in \(\{ \, ^{\backprime\backprime} S \, ^{\prime\prime} \, \} \cup \mathfrak{Q}\) are the non-terminal symbols of \(\mathfrak{G}.\)
  3. The symbols in \(\mathfrak{Q} \cup \mathfrak{A}\) are the non-initial symbols of \(\mathfrak{G}.\)
  4. The strings in \(( \{ \, ^{\backprime\backprime} S \, ^{\prime\prime} \, \} \cup \mathfrak{Q} \cup \mathfrak{A} )^*\) are the augmented strings for \(\mathfrak{G}.\)
  5. The strings in \(\{ \, ^{\backprime\backprime} S \, ^{\prime\prime} \, \} \cup (\mathfrak{Q} \cup \mathfrak{A})^*\) are the sentential forms for \(\mathfrak{G}.\)
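By way of a concrete gloss, the four-tuple and the derived classes of symbols can be sketched in Python as follows. The attribute and method names are illustrative conveniences of this sketch, not part of the formal definition:

```python
from dataclasses import dataclass

@dataclass
class Grammar:
    """A formal grammar G = ("S", Q, A, K), per the four-part description above."""
    start: str               # the initial (sentence) symbol "S"
    intermediates: set       # Q, the intermediate symbols
    terminals: set           # A, the terminal symbols (the alphabet)
    characterizations: list  # K, pairs (S1, S2) of strings

    def nonterminals(self):
        """The non-terminal symbols: {"S"} union Q."""
        return {self.start} | self.intermediates

    def noninitials(self):
        """The non-initial symbols: Q union A."""
        return self.intermediates | self.terminals

    def augmented_alphabet(self):
        """The augmented alphabet: {"S"} union Q union A."""
        return {self.start} | self.intermediates | self.terminals
```

The augmented strings and sentential forms are then just the strings over `augmented_alphabet()` and the appropriately restricted strings over `noninitials()`, respectively.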

Each characterization in \(\mathfrak{K}\) is an ordered pair of strings \((S_1, S_2)\!\) that takes the following form:

\(S_1 \ = \ Q_1 \cdot q \cdot Q_2,\)
\(S_2 \ = \ Q_1 \cdot W \cdot Q_2.\)

In this scheme, \(S_1\!\) and \(S_2\!\) are members of the augmented strings for \(\mathfrak{G},\) more precisely, \(S_1\!\) is a non-empty string and a sentential form over \(\mathfrak{G},\) while \(S_2\!\) is a possibly empty string and also a sentential form over \(\mathfrak{G}.\)

Here also, \(q\!\) is a non-terminal symbol, that is, \(q \in \{ \, ^{\backprime\backprime} S \, ^{\prime\prime} \, \} \cup \mathfrak{Q},\) while \(Q_1, Q_2,\!\) and \(W\!\) are possibly empty strings of non-initial symbols, a fact that can be expressed in the form, \(Q_1, Q_2, W \in (\mathfrak{Q} \cup \mathfrak{A})^*.\)

In practice, the couplets in \(\mathfrak{K}\) are used to derive, to generate, or to produce sentences of the corresponding language \(\mathfrak{L} = \mathfrak{L} (\mathfrak{G}).\) The language \(\mathfrak{L}\) is then said to be governed, licensed, or regulated by the grammar \(\mathfrak{G},\) a circumstance that is expressed in the form \(\mathfrak{L} = \langle \mathfrak{G} \rangle.\) In order to facilitate this active employment of the grammar, it is conventional to write the abstract characterization \((S_1, S_2)\!\) and the specific characterization \((Q_1 \cdot q \cdot Q_2, \ Q_1 \cdot W \cdot Q_2)\) in the following forms, respectively:

\(\begin{array}{lll} S_1 & :> & S_2 \\ Q_1 \cdot q \cdot Q_2 & :> & Q_1 \cdot W \cdot Q_2 \\ \end{array}\)

In this usage, the characterization \(S_1 :> S_2\!\) is tantamount to a grammatical license to transform a string of the form \(Q_1 \cdot q \cdot Q_2\) into a string of the form \(Q_1 \cdot W \cdot Q_2,\) in effect, replacing the non-terminal symbol \(q\!\) with the non-initial string \(W\!\) in any selected, preserved, and closely adjoining context of the form \(Q_1 \cdot \underline{\quad} \cdot Q_2.\) In this application the notation \(S_1 :> S_2\!\) can be read to say that \(S_1\!\) produces \(S_2\!\) or that \(S_1\!\) transforms into \(S_2.\!\)

An immediate derivation in \(\mathfrak{G}\!\) is an ordered pair \((W, W^\prime)\!\) of sentential forms in \(\mathfrak{G}\!\) such that:

\(\begin{array}{llll} W = Q_1 \cdot X \cdot Q_2, & W' = Q_1 \cdot Y \cdot Q_2, & \text{and} & (X, Y) \in \mathfrak{K}. \end{array}\)

As noted above, it is usual to express the condition \((X, Y) \in \mathfrak{K}\) by writing \(X :> Y \, \text{in} \, \mathfrak{G}.\)

The immediate derivation relation is indicated by saying that \(W\!\) immediately derives \(W',\!\) by saying that \(W'\!\) is immediately derived from \(W\!\) in \(\mathfrak{G},\) and also by writing:

\(W ::> W'.\!\)

A derivation in \(\mathfrak{G}\) is a finite sequence \((W_1, \ldots, W_k)\!\) of sentential forms over \(\mathfrak{G}\) such that each adjacent pair \((W_j, W_{j+1})\!\) of sentential forms in the sequence is an immediate derivation in \(\mathfrak{G},\) in other words, such that:

\(W_j ::> W_{j+1},\ \text{for all}\ j = 1\ \text{to}\ k - 1.\)

If there exists a derivation \((W_1, \ldots, W_k)\!\) in \(\mathfrak{G},\) one says that \(W_1\!\) derives \(W_k\!\) in \(\mathfrak{G}\) or that \(W_k\!\) is derivable from \(W_1\!\) in \(\mathfrak{G},\) and one typically summarizes the derivation by writing:

\(W_1 :\!*\!:> W_k.\!\)

The language \(\mathfrak{L} = \mathfrak{L} (\mathfrak{G}) = \langle \mathfrak{G} \rangle\) that is generated by the formal grammar \(\mathfrak{G} = ( \, ^{\backprime\backprime} S \, ^{\prime\prime}, \, \mathfrak{Q}, \, \mathfrak{A}, \, \mathfrak{K} \, )\) is the set of strings over the terminal alphabet \(\mathfrak{A}\) that are derivable from the initial symbol \(^{\backprime\backprime} S \, ^{\prime\prime}\) by way of the intermediate symbols in \(\mathfrak{Q}\) according to the characterizations in \(\mathfrak{K}.\) In sum:

\(\mathfrak{L} (\mathfrak{G}) \ = \ \langle \mathfrak{G} \rangle \ = \ \{ \, W \in \mathfrak{A}^* \, : \, ^{\backprime\backprime} S \, ^{\prime\prime} \, :\!*\!:> \, W \, \}.\)

Finally, a string \(W\!\) is called a word, a sentence, or so on, of the language generated by \(\mathfrak{G}\) if and only if \(W\!\) is in \(\mathfrak{L} (\mathfrak{G}).\)
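A brute-force sketch of this generative process, in Python, may help to fix the ideas. Strings are represented as tuples of symbols and the characterizations in \(\mathfrak{K}\) as pairs of such tuples; the function names and the length bound are conveniences of the sketch, not part of the formal definition, and the enumeration is in no way an efficient parser:

```python
from collections import deque

def immediate_derivations(w, K):
    """All W' such that W ::> W' under the characterizations in K.
    Strings are tuples of symbols; each (X, Y) in K is a pair of tuples."""
    results = []
    for X, Y in K:
        n = len(X)
        for i in range(len(w) - n + 1):
            if w[i:i+n] == X:
                results.append(w[:i] + Y + w[i+n:])
    return results

def language(start, terminals, K, max_len=4):
    """Enumerate the strings over the terminal alphabet that are derivable
    from the start symbol, up to a length bound."""
    seen = {(start,)}
    words = set()
    queue = deque(seen)
    while queue:
        w = queue.popleft()
        if all(s in terminals for s in w):
            words.add("".join(w))
            continue
        for w2 in immediate_derivations(w, K):
            if len(w2) <= max_len and w2 not in seen:
                seen.add(w2)
                queue.append(w2)
    return words
```

For instance, the grammar with the two characterizations \(S :> aSb\) and \(S :> ab\) generates, within a length bound of six, exactly the strings \(ab, aabb, aaabbb.\)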

The Cactus Language : Stylistics

As a result, we can hardly conceive of how many possibilities there are for what we call objective reality. Our sharp quills of knowledge are so narrow and so concentrated in particular directions that with science there are myriads of totally different real worlds, each one accessible from the next simply by slight alterations — shifts of gaze — of every particular discipline and subspecialty.

— Herbert J. Bernstein, "Idols of Modern Science", [HJB, 38]

This Subsection highlights an issue of style that arises in describing a formal language. In broad terms, I use the word style to refer to a loosely specified class of formal systems, typically ones that have a set of distinctive features in common. For instance, a style of proof system usually dictates one or more rules of inference that are acknowledged as conforming to that style. In the present context, the word style is a natural choice to characterize the varieties of formal grammars, or any other sorts of formal systems that can be contemplated for deriving the sentences of a formal language.

In looking at what seems like an incidental issue, the discussion arrives at a critical point. The question is: What decides the issue of style? Taking a given language as the object of discussion, what factors enter into and determine the choice of a style for its presentation, that is, a particular way of arranging and selecting the materials that come to be involved in a description, a grammar, or a theory of the language? To what degree is the determination accidental, empirical, pragmatic, rhetorical, or stylistic, and to what extent is the choice essential, logical, and necessary? For that matter, what determines the order of signs in a word, a sentence, a text, or a discussion? All of the corresponding parallel questions about the character of this choice can be posed with regard to the constituent part as well as with regard to the main constitution of the formal language.

In order to answer this sort of question, at any level of articulation, one has to inquire into the type of distinction that it invokes, between arrangements and orders that are essential, logical, and necessary and orders and arrangements that are accidental, rhetorical, and stylistic. As a rough guide to its comprehension, a logical order, if it resides in the subject at all, can be approached by considering all of the ways of saying the same things, in all of the languages that are capable of saying roughly the same things about that subject. Of course, the all that appears in this rule of thumb has to be interpreted as a fittingly qualified sort of universal. For all practical purposes, it simply means all of the ways that a person can think of and all of the languages that a person can conceive of, with all things being relative to the particular moment of investigation. For all of these reasons, the rule must stand as little more than a rough idea of how to approach its object.

If it is demonstrated that a given formal language can be presented in any one of several styles of formal grammar, then the choice of a format is accidental, optional, and stylistic to the very extent that it is free. But if it can be shown that a particular language cannot be successfully presented in a particular style of grammar, then the issue of style is no longer free and rhetorical, but becomes to that very degree essential, necessary, and obligatory, in other words, a question of the objective logical order that can be found to reside in the object language.

As a rough illustration of the difference between logical and rhetorical orders, consider the kinds of order that are expressed and exhibited in the following conjunction of implications:

\(X \Rightarrow Y\ \operatorname{and}\ Y \Rightarrow Z.\)

Here, there is a happy conformity between the logical content and the rhetorical form, indeed, to such a degree that one hardly notices the difference between them. The rhetorical form is given by the order of sentences in the two implications and the order of implications in the conjunction. The logical content is given by the order of propositions in the extended implicational sequence:

\(X\ \le\ Y\ \le\ Z.\)

To see the difference between form and content, or manner and matter, it is enough to observe a few of the ways that the expression can be varied without changing its meaning, for example:

\(Z \Leftarrow Y\ \operatorname{and}\ Y \Leftarrow X.\)
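The claimed sameness of logical content under these rhetorical variations can be verified mechanically, say in Python, by checking that the alternative phrasings agree on every assignment of truth values. The function names here are chosen for the occasion, not taken from the text:

```python
from itertools import product

def implies(p, q):
    """Material implication p => q."""
    return (not p) or q

# Three phrasings of the same logical content, as functions of X, Y, Z.
chain_fwd = lambda x, y, z: implies(x, y) and implies(y, z)  # X => Y and Y => Z
chain_ord = lambda x, y, z: x <= y <= z                      # X <= Y <= Z
chain_rev = lambda x, y, z: implies(y, z) and implies(x, y)  # Z <= Y and Y <= X

def agree_everywhere(*fs):
    """True if all the formulas take the same value at every assignment."""
    return all(len({f(x, y, z) for f in fs}) == 1
               for x, y, z in product([False, True], repeat=3))
```

That `agree_everywhere(chain_fwd, chain_ord, chain_rev)` holds is precisely the sense in which the rhetorical form varies while the logical content stays fixed.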

Any style of declarative programming, also called logic programming, depends on a capacity, as embodied in a programming language or other formal system, to describe the relation between problems and solutions in logical terms. A recurring problem in building this capacity is in bridging the gap between ostensibly non-logical orders and the logical orders that are used to describe and to represent them. For instance, to mention just a couple of the most pressing cases, and the ones that are currently proving to be the most resistant to a complete analysis, one has the orders of dynamic evolution and rhetorical transition that manifest themselves in the process of inquiry and in the communication of its results.

This patch of the ongoing discussion is concerned with describing a particular variety of formal languages, whose typical representative is the painted cactus language \(\mathfrak{L} = \mathfrak{C} (\mathfrak{P}).\!\) It is the intention of this work to interpret this language for propositional logic, and thus to use it as a sentential calculus, an order of reasoning that forms an active ingredient and a significant component of all logical reasoning. To describe this language, the standard devices of formal grammars and formal language theory are more than adequate, but this only raises the next question: What sorts of devices are exactly adequate, and fit the task to a "T"? The ultimate desire is to turn the tables on the order of description, and so begins a process of eversion that evolves to the point of asking: To what extent can the language capture the essential features and laws of its own grammar and describe the active principles of its own generation? In other words: How well can the language be described by using the language itself to do so?

In order to speak to these questions, I have to express what a grammar says about a language in terms of what a language can say on its own. In effect, it is necessary to analyze the kinds of meaningful statements that grammars are capable of making about languages in general and to relate them to the kinds of meaningful statements that the syntactic sentences of the cactus language might be interpreted as making about the very same topics. So far in the present discussion, the sentences of the cactus language do not make any meaningful statements at all, much less any meaningful statements about languages and their constitutions. As of yet, these sentences subsist in the form of purely abstract, formal, and uninterpreted combinatorial constructions.

Before the capacity of a language to describe itself can be evaluated, the missing link to meaning has to be supplied for each of its strings. This calls for a dimension of semantics and a notion of interpretation, topics that are taken up for the case of the cactus language \(\mathfrak{C} (\mathfrak{P})\) in Subsection 1.3.10.12. Once a plausible semantics is prescribed for this language it will be possible to return to these questions and to address them in a meaningful way.

The prominent issue at this point is the distinct placements of formal languages and formal grammars with respect to the question of meaning. The sentences of a formal language are merely the abstract strings of abstract signs that happen to belong to a certain set. They do not by themselves make any meaningful statements at all, not without mounting a separate effort of interpretation, but the rules of a formal grammar make meaningful statements about a formal language, to the extent that they say what strings belong to it and what strings do not. Thus, the formal grammar, a formalism that appears to be even more skeletal than the formal language, still has bits and pieces of meaning attached to it. In a sense, the question of meaning is factored into two parts, structure and value, leaving the aspect of value reduced in complexity and subtlety to the simple question of belonging. Whether this single bit of meaningful value is enough to encompass all of the dimensions of meaning that we require, and whether it can be compounded to cover the complexity that actually exists in the realm of meaning — these are questions for an extended future inquiry.

Perhaps I ought to comment on the differences between the present and the standard definition of a formal grammar, since I am attempting to strike a compromise with several alternative conventions of usage, and thus to leave certain options open for future exploration. All of the changes are minor, in the sense that they are not intended to alter the classes of languages that are able to be generated, but only to clear up various ambiguities and sundry obscurities that affect their conception.

Primarily, the conventional scope of non-terminal symbols was expanded to encompass the sentence symbol, mainly on account of all the contexts where the initial and the intermediate symbols are naturally invoked in the same breath. By way of compensating for the usual exclusion of the sentence symbol from the non-terminal class, an equivalent distinction was introduced in the fashion of a distinction between the initial and the intermediate symbols, and this serves its purpose in all of those contexts where the two kinds of symbols need to be treated separately.

At the present point, I remain a bit worried about the motivations and the justifications for introducing this distinction, under any name, in the first place. It is purportedly designed to guarantee that the process of derivation at least gets started in a definite direction, while the real questions have to do with how it all ends. The excuses of efficiency and expediency that I offered as plausible and sufficient reasons for distinguishing between empty and significant sentences are likely to be ephemeral, if not entirely illusory, since intermediate symbols are still permitted to characterize or to cover themselves, not to mention being allowed to cover the empty string, and so the very types of traps that one exerts oneself to avoid at the outset are always there to afflict the process at all of the intervening times.

If one reflects on the form of grammar that is being prescribed here, it looks as if one sought, rather futilely, to avoid the problems of recursion by proscribing the main program from calling itself, while allowing any subprogram to do so. But any trouble that is avoidable in the part is also avoidable in the main, while any trouble that is inevitable in the part is also inevitable in the main. Consequently, I am reserving the right to change my mind at a later stage, perhaps to permit the initial symbol to characterize, to cover, to regenerate, or to produce itself, if that turns out to be the best way in the end.

Before I leave this Subsection, I need to say a few things about the manner in which the abstract theory of formal languages and the pragmatic theory of sign relations interact with each other.

Formal language theory can seem like an awfully picky subject at times, treating every symbol as a thing in itself the way it does, sorting out the nominal types of symbols as objects in themselves, and singling out the passing tokens of symbols as distinct entities in their own rights. It has to continue doing this, if for no better reason than to aid in clarifying the kinds of languages that people are accustomed to use, to assist in writing computer programs that are capable of parsing real sentences, and to serve in designing programming languages that people would like to become accustomed to use. As a matter of fact, the only time that formal language theory becomes too picky, or a bit too myopic in its focus, is when it leads one to think that one is dealing with the thing itself and not just with the sign of it, in other words, when the people who use the tools of formal language theory forget that they are dealing with the mere signs of more interesting objects and not with the objects of ultimate interest in and of themselves.

As a result, there are a number of deleterious effects that can arise from the extreme pickiness of formal language theory, arising, as is often the case, when formal theorists forget the practical context of theorization. It frequently happens that the exacting task of defining the membership of a formal language leads one to think that this object and this object alone is the justifiable end of the whole exercise. The distractions of this mediate objective render one liable to forget that one's penultimate interest lies always with various kinds of equivalence classes of signs, not entirely or exclusively with their more meticulous representatives.

When this happens, one typically goes on working oblivious to the fact that many details about what transpires in the meantime do not matter at all in the end, and one is likely to remain in blissful ignorance of the circumstance that many special details of language membership are bound, destined, and pre-determined to be glossed over with some measure of indifference, especially when it comes down to the final constitution of those equivalence classes of signs that are able to answer for the genuine objects of the whole enterprise of language. When any form of theory, against its initial and its best intentions, leads to this kind of absence of mind that is no longer beneficial in all of its main effects, the situation calls for an antidotal form of theory, one that can restore the presence of mind that all forms of theory are meant to augment.

The pragmatic theory of sign relations is called for in settings where everything that can be named has many other names, that is to say, in the usual case. Of course, one would like to replace this superfluous multiplicity of signs with an organized system of canonical signs, one for each object that needs to be denoted, but reducing the redundancy too far, beyond what is necessary to eliminate the factor of "noise" in the language, that is, to clear up its effectively useless distractions, can destroy the very utility of a typical language, which is intended to provide a ready means to express a present situation, clear or not, and to describe an ongoing condition of experience in just the way that it seems to present itself. Within this fleshed out framework of language, moreover, the process of transforming the manifestations of a sign from its ordinary appearance to its canonical aspect is the whole problem of computation in a nutshell.

It is a well-known truth, but an often forgotten fact, that nobody computes with numbers, but solely with numerals in respect of numbers, and numerals themselves are symbols. Among other things, this renders all discussion of numeric versus symbolic computation a bit beside the point, since it is only a question of what kinds of symbols are best for one's immediate application or for one's selection of ongoing objectives. The numerals that everybody knows best are just the canonical symbols, the standard signs or the normal terms for numbers, and the process of computation is a matter of getting from the arbitrarily obscure signs that the data of a situation are capable of throwing one's way to the indications of its character that are clear enough to motivate action.

Having broached the distinction between propositions and sentences, one can see its similarity to the distinction between numbers and numerals. What are the implications of the foregoing considerations for reasoning about propositions and for the realm of reckonings in sentential logic? If the purpose of a sentence is just to denote a proposition, then the proposition is just the object of whatever sign is taken for a sentence. This means that the computational manifestation of a piece of reasoning about propositions amounts to a process that takes place entirely within a language of sentences, a procedure that can rationalize its account by referring to the denominations of these sentences among propositions.

The application of these considerations in the immediate setting is this: Do not worry too much about what roles the empty string \(\varepsilon \, = \, ^{\backprime\backprime\prime\prime}\) and the blank symbol \(m_1 \, = \, ^{\backprime\backprime} \operatorname{~} ^{\prime\prime}\) are supposed to play in a given species of formal languages. As it happens, it is far less important to wonder whether these types of formal tokens actually constitute genuine sentences than it is to decide what equivalence classes it makes sense to form over all of the sentences in the resulting language, and only then to bother about what equivalence classes these limiting cases of sentences are most conveniently taken to represent.

These concerns about boundary conditions betray a more general issue. Already by this point in discussion the limits of the purely syntactic approach to a language are beginning to be visible. It is not that one cannot go a whole lot further by this road in the analysis of a particular language and in the study of languages in general, but when it comes to the questions of understanding the purpose of a language, of extending its usage in a chosen direction, or of designing a language for a particular set of uses, what matters above all else are the pragmatic equivalence classes of signs that are demanded by the application and intended by the designer, and not so much the peculiar characters of the signs that represent these classes of practical meaning.

Any description of a language is bound to have alternative descriptions. More precisely, a circumscribed description of a formal language, as any effectively finite description is bound to be, is certain to suggest the equally likely existence and the possible utility of other descriptions. A single formal grammar describes but a single formal language, but any formal language is described by many different formal grammars, not all of which afford the same grasp of its structure, provide an equivalent comprehension of its character, or yield an interchangeable view of its aspects. Consequently, even with respect to the same formal language, different formal grammars are typically better for different purposes.

With the distinctions that evolve among the different styles of grammar, and with the preferences that different observers display toward them, there naturally comes the question: What is the root of this evolution?

One dimension of variation in the styles of formal grammars can be seen by treating the union of languages, and especially the disjoint union of languages, as a sum, by treating the concatenation of languages as a product, and then by distinguishing the styles of analysis that favor sums of products from those that favor products of sums as their canonical forms of description. If one examines the relation between languages and grammars carefully enough to see the presence and the influence of these different styles, and when one comes to appreciate the ways that different styles of grammars can be used with different degrees of success for different purposes, then one begins to see the possibility that alternative styles of description can be based on altogether different linguistic and logical operations.

It is possible to trace this divergence of styles to an even more primitive division, one that distinguishes the additive or the parallel styles from the multiplicative or the serial styles. The issue is somewhat confused by the fact that an additive analysis is typically expressed in the form of a series, in other words, a disjoint union of sets or a linear sum of their independent effects. But it is easy enough to sort this out if one observes the more telling connection between parallel and independent. Another way to keep the right associations straight is to employ the term sequential in preference to the more misleading term serial. Whatever one calls this broad division of styles, the scope and sweep of their dimensions of variation can be delineated in the following way:

  1. The additive or parallel styles favor sums of products \((\textstyle\sum\prod)\) as canonical forms of expression, pulling sums, unions, co-products, and logical disjunctions to the outermost layers of analysis and synthesis, while pushing products, intersections, concatenations, and logical conjunctions to the innermost levels of articulation and generation. In propositional logic, this style leads to the disjunctive normal form (DNF).
  2. The multiplicative or serial styles favor products of sums \((\textstyle\prod\sum)\) as canonical forms of expression, pulling products, intersections, concatenations, and logical conjunctions to the outermost layers of analysis and synthesis, while pushing sums, unions, co-products, and logical disjunctions to the innermost levels of articulation and generation. In propositional logic, this style leads to the conjunctive normal form (CNF).
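The two canonical forms can be extracted mechanically from a truth table, as the following Python sketch illustrates: the DNF collects one product per satisfying row, while the CNF collects one sum per falsifying row. The textual rendering of formulas here is an ad hoc convenience of the sketch:

```python
from itertools import product

def dnf(f, names):
    """Sum of products: one conjunct per satisfying row of f's truth table."""
    terms = []
    for vals in product([False, True], repeat=len(names)):
        if f(*vals):
            terms.append(" & ".join(n if v else "~" + n
                                    for n, v in zip(names, vals)))
    return " | ".join("(" + t + ")" for t in terms)

def cnf(f, names):
    """Product of sums: one disjunct per falsifying row of f's truth table."""
    clauses = []
    for vals in product([False, True], repeat=len(names)):
        if not f(*vals):
            clauses.append(" | ".join("~" + n if v else n
                                      for n, v in zip(names, vals)))
    return " & ".join("(" + c + ")" for c in clauses)
```

Applied to the implication \(x \Rightarrow y,\) the multiplicative style yields the single clause \((\lnot x \lor y),\) while the additive style spells out the three satisfying rows one by one, a small instance of the trade-offs between the two canons.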

There is a curious sort of diagnostic clue that often serves to reveal the dominance of one mode or the other within an individual thinker's cognitive style. Examined on the question of what constitutes the natural numbers, an additive thinker tends to start the sequence at 0, while a multiplicative thinker tends to regard it as beginning at 1.

In any style of description, grammar, or theory of a language, it is usually possible to tease out the influence of these contrasting traits, namely, the additive attitude versus the multiplicative tendency that go to make up the particular style in question, and even to determine the dominant inclination or point of view that establishes its perspective on the target domain.

In each style of formal grammar, the multiplicative aspect is present in the sequential concatenation of signs, both in the augmented strings and in the terminal strings. In settings where the non-terminal symbols classify types of strings, the concatenation of the non-terminal symbols signifies the cartesian product over the corresponding sets of strings.

In the context-free style of formal grammar, the additive aspect is easy enough to spot. It is signaled by the parallel covering of many augmented strings or sentential forms by the same non-terminal symbol. Expressed in active terms, this calls for the independent rewriting of that non-terminal symbol by a number of different successors, as in the following scheme:

\(\begin{matrix} q & :> & W_1 \\ \\ \cdots & \cdots & \cdots \\ \\ q & :> & W_k \\ \end{matrix}\)

It is useful to examine the relationship between the grammatical covering or production relation \((:>\!)\) and the logical relation of implication \((\Rightarrow),\) with one eye to what they have in common and one eye to how they differ. The production \(q :> W\!\) says that the appearance of the symbol \(q\!\) in a sentential form implies the possibility of exchanging it for \(W.\!\) Although this sounds like a possible implication, to the extent that \(q\!\) implies a possible \(W\!\) or that \(q\!\) possibly implies \(W,\!\) the qualifiers possible and possibly are the critical elements in these statements, and they are crucial to the meaning of what is actually being implied. In effect, these qualifications reverse the direction of implication, yielding \(^{\backprime\backprime} \, q \Leftarrow W \, ^{\prime\prime}\) as the best analogue for the sense of the production.

One way to sum this up is to say that non-terminal symbols have the significance of hypotheses. The terminal strings form the empirical matter of a language, while the non-terminal symbols mark the patterns or the types of substrings that can be noticed in the profusion of data. If one observes a portion of a terminal string that falls into the pattern of the sentential form \(W,\!\) then it is an admissible hypothesis, according to the theory of the language that is constituted by the formal grammar, that this piece not only fits the type \(q\!\) but even comes to be generated under the auspices of the non-terminal symbol \(^{\backprime\backprime} q ^{\prime\prime}.\)

A moment's reflection on the issue of style, giving due consideration to the received array of stylistic choices, ought to inspire at least the question: "Are these the only choices there are?" In the present setting, there are abundant indications that other options, more differentiated varieties of description and more integrated ways of approaching individual languages, are likely to be conceivable, feasible, and even more ultimately viable. If a suitably generic style, one that incorporates the full scope of logical combinations and operations, is broadly available, then it would no longer be necessary, or even apt, to argue in universal terms about which style is best, but more useful to investigate how we might adapt the local styles to the local requirements. The medium of a generic style would yield a viable compromise between additive and multiplicative canons, and render the choice between parallel and serial a false alternative, at least, when expressed in the globally exclusive terms that are currently most commonly adopted to pose it.

One set of indications comes from the study of machines, languages, and computation, especially the theories of their structures and relations.  The forms of composition and decomposition that are generally known as parallel and serial are merely the extreme special cases, in variant directions of specialization, of a more generic form, usually called the cascade form of combination.  This is a well-known fact in the theories that deal with automata and their associated formal languages, but its implications do not seem to be widely appreciated outside these fields.  In particular, it dispels the need to choose one extreme or the other, since most of the natural cases are likely to exist somewhere in between.

Another set of indications appears in algebra and category theory, where forms of composition and decomposition related to the cascade combination, namely, the semi-direct product and its special case, the wreath product, are encountered at higher levels of generality than the cartesian products of sets or the direct products of spaces.

In these domains of operation, one finds it necessary to consider also the co-product of sets and spaces, a construction that artificially creates a disjoint union of sets, that is, a union of spaces that are being treated as independent. It does this, in effect, by indexing, coloring, or preparing the otherwise possibly overlapping domains that are being combined. What renders this a chimera or a hybrid form of combination is the fact that this indexing is tantamount to a cartesian product of a singleton set, namely, the conventional index, color, or affix in question, with the individual domain that is entering as a factor, a term, or a participant in the final result.

One of the insights that arises out of Peirce's logical work is that the set operations of complementation, intersection, and union, along with the logical operations of negation, conjunction, and disjunction that operate in isomorphic tandem with them, are not as fundamental as they first appear. This is because all of them can be constructed from or derived from a smaller set of operations, in fact, taking the logical side of things, from either one of two sole sufficient operators, called amphecks by Peirce, strokes by those who re-discovered them later, and known in computer science as the NAND and the NNOR operators. For this reason, that is, by virtue of their precedence in the orders of construction and derivation, these operations have to be regarded as the simplest and the most primitive in principle, even if they are scarcely recognized as lying among the more familiar elements of logic.

I am throwing together a wide variety of different operations into each of the bins labeled additive and multiplicative, but it is easy to observe a natural organization and even some relations approaching isomorphisms among and between the members of each class.

The relation between logical disjunction and set-theoretic union and the relation between logical conjunction and set-theoretic intersection ought to be clear enough for the purposes of the immediately present context. In any case, all of these relations are scheduled to receive a thorough examination in a subsequent discussion (Subsection 1.3.10.13). But the relation of a set-theoretic union to a category-theoretic co-product and the relation of a set-theoretic intersection to a syntactic concatenation deserve a closer look at this point.

The effect of a co-product as a disjointed union, in other words, as a construction that creates an object tantamount to a disjoint union of sets even if some of those sets intersect non-trivially and even if some of them are identical in reality, can be achieved in several ways.  The most usual conception is that of making a separate copy, for each part of the intended co-product, of the set that is intended to go there.  Often one thinks of the set that is assigned to a particular part of the co-product as being distinguished by a particular color, in other words, by the attachment of a distinct index, label, or tag, a marker that is inherited by and passed on to every element of the set in that part.  A concrete image of this construction can be achieved by imagining that each set and each element of each set is placed in an ordered pair with the sign of its color, index, label, or tag.  One describes this as the injection of each set into the corresponding part of the co-product.

For example, given the sets \(P\!\) and \(Q,\!\) overlapping or not, one can define the indexed or marked sets \(P_{[1]}\!\) and \(Q_{[2]},\!\) amounting to the copy of \(P\!\) into the first part of the co-product and the copy of \(Q\!\) into the second part of the co-product, in the following manner:

\(\begin{array}{lllll} P_{[1]} & = & (P, 1) & = & \{ (x, 1) : x \in P \}, \\ Q_{[2]} & = & (Q, 2) & = & \{ (x, 2) : x \in Q \}. \\ \end{array}\)

Using the coproduct operator (\(\textstyle\coprod\)) for this construction, the sum, the coproduct, or the disjointed union of \(P\!\) and \(Q\!\) in that order can be represented as the ordinary union of \(P_{[1]}\!\) and \(Q_{[2]}.\!\)

\(\begin{array}{lll} P \coprod Q & = & P_{[1]} \cup Q_{[2]}. \\ \end{array}\)
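The tagging construction above can be sketched in a few lines of code; the function names <code>inject</code> and <code>coproduct</code> and the use of ordered pairs as tags are illustrative choices, not part of the formal apparatus.

```python
def inject(s, tag):
    """Copy a set into one part of a co-product by pairing each element with a tag."""
    return {(x, tag) for x in s}

def coproduct(p, q):
    """Disjointed union of p and q, kept disjoint even where p and q overlap."""
    return inject(p, 1) | inject(q, 2)

P = {"a", "b"}
Q = {"b", "c"}
# The shared element "b" appears twice, once per color: ("b", 1) and ("b", 2).
print(sorted(coproduct(P, Q)))  # → [('a', 1), ('b', 1), ('b', 2), ('c', 2)]
```

Even when the two factors are the very same set, the copies remain distinct, which is exactly the point of the construction.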

The concatenation \(\mathfrak{L}_1 \cdot \mathfrak{L}_2\) of the formal languages \(\mathfrak{L}_1\!\) and \(\mathfrak{L}_2\!\) is just the cartesian product of sets \(\mathfrak{L}_1 \times \mathfrak{L}_2\) without the extra \(\times\!\)'s, but the relation of cartesian products to set-theoretic intersections and thus to logical conjunctions is far from being clear. One way of seeing a type of relation is to focus on the information that is needed to specify each construction, and thus to reflect on the signs that are used to carry this information. As a first approach to the topic of information, according to a strategy that seeks to be as elementary and as informal as possible, I introduce the following set of ideas, intended to be taken in a very provisional way.
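For a concrete sense of how concatenation drops &ldquo;the extra \(\times\!\)'s&rdquo;, consider the following sketch, where the name <code>concatenate</code> is an illustrative choice.  Note that distinct ordered pairs can collapse into the same string, one small sign that the relation between the two constructions is subtler than it first appears.

```python
from itertools import product

def concatenate(l1, l2):
    """Concatenation of formal languages: all strings s + t with s in l1, t in l2."""
    return {s + t for s in l1 for t in l2}

L1 = {"a", "ab"}
L2 = {"", "b"}
print(len(set(product(L1, L2))))  # 4 ordered pairs in the cartesian product
print(len(concatenate(L1, L2)))   # 3 strings: "a" + "b" and "ab" + "" coincide
```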

A stricture is a specification of a certain set in a certain place, relative to a number of other sets, yet to be specified. It is assumed that one knows enough to tell if two strictures are equivalent as pieces of information, but any more determinate indications, like names for the places that are mentioned in the stricture, or bounds on the number of places that are involved, are regarded as being extraneous impositions, outside the proper concern of the definition, no matter how convenient they are found to be for a particular discussion. As a schematic form of illustration, a stricture can be pictured in the following shape:

\(^{\backprime\backprime}\) \(\ldots \times X \times Q \times X \times \ldots\) \(^{\prime\prime}\)

A strait is the object that is specified by a stricture, in effect, a certain set in a certain place of an otherwise yet to be specified relation. Somewhat sketchily, the strait that corresponds to the stricture just given can be pictured in the following shape:

  \(\ldots \times X \times Q \times X \times \ldots\)  

In this picture \(Q\!\) is a certain set and \(X\!\) is the universe of discourse that is relevant to a given discussion.  Since a stricture does not, by itself, contain a sufficient amount of information to specify the number of sets that it intends to set in place, or even to specify the absolute location of the set that it does set in place, it appears to place an unspecified number of unspecified sets in a vague and uncertain strait.  Taken out of its interpretive context, the residual information that a stricture can convey makes all of the following potentially equivalent as strictures:

\(\begin{array}{ccccccc} ^{\backprime\backprime} Q ^{\prime\prime} & , & ^{\backprime\backprime} X \times Q \times X ^{\prime\prime} & , & ^{\backprime\backprime} X \times X \times Q \times X \times X ^{\prime\prime} & , & \ldots \\ \end{array}\)

With respect to what these strictures specify, this leaves all of the following equivalent as straits:

\(\begin{array}{ccccccc} Q & = & X \times Q \times X & = & X \times X \times Q \times X \times X & = & \ldots \\ \end{array}\)

Within the framework of a particular discussion, it is customary to set a bound on the number of places and to limit the variety of sets that are regarded as being under active consideration, and it is also convenient to index the places of the indicated relations, and of their encompassing cartesian products, in some fixed way. But the whole idea of a stricture is to specify a strait that is capable of extending through and beyond any fixed frame of discussion. In other words, a stricture is conceived to constrain a strait at a certain point, and then to leave it literally embedded, if tacitly expressed, in a yet to be fully specified relation, one that involves an unspecified number of unspecified domains.

A quantity of information is a measure of constraint.  In this respect, a set of comparable strictures is ordered on account of the information that each one conveys, and a system of comparable straits is ordered in accord with the amount of information that it takes to pin each one of them down.  Strictures that are more constraining and straits that are more constrained are placed at higher levels of information than those that are less so, and entities that involve more information are said to have a greater complexity in comparison with those that involve less information, which are accordingly said to have a greater simplicity.

In order to create a concrete example, let me now institute a frame of discussion where the number of places in a relation is bounded at two, and where the variety of sets under active consideration is limited to the typical subsets \(P\!\) and \(Q\!\) of a universe \(X.\!\) Under these conditions, one can use the following sorts of expression as schematic strictures:

\(\begin{matrix} ^{\backprime\backprime} X ^{\prime\prime} & ^{\backprime\backprime} P ^{\prime\prime} & ^{\backprime\backprime} Q ^{\prime\prime} \\ \\ ^{\backprime\backprime} X \times X ^{\prime\prime} & ^{\backprime\backprime} X \times P ^{\prime\prime} & ^{\backprime\backprime} X \times Q ^{\prime\prime} \\ \\ ^{\backprime\backprime} P \times X ^{\prime\prime} & ^{\backprime\backprime} P \times P ^{\prime\prime} & ^{\backprime\backprime} P \times Q ^{\prime\prime} \\ \\ ^{\backprime\backprime} Q \times X ^{\prime\prime} & ^{\backprime\backprime} Q \times P ^{\prime\prime} & ^{\backprime\backprime} Q \times Q ^{\prime\prime} \\ \end{matrix}\)

These strictures and their corresponding straits are stratified according to their amounts of information, or their levels of constraint, as follows:

\(\begin{array}{lcccc} \text{High:} & ^{\backprime\backprime} P \times P ^{\prime\prime} & ^{\backprime\backprime} P \times Q ^{\prime\prime} & ^{\backprime\backprime} Q \times P ^{\prime\prime} & ^{\backprime\backprime} Q \times Q ^{\prime\prime} \\ \\ \text{Med:} & ^{\backprime\backprime} P ^{\prime\prime} & ^{\backprime\backprime} X \times P ^{\prime\prime} & ^{\backprime\backprime} P \times X ^{\prime\prime} \\ \\ \text{Med:} & ^{\backprime\backprime} Q ^{\prime\prime} & ^{\backprime\backprime} X \times Q ^{\prime\prime} & ^{\backprime\backprime} Q \times X ^{\prime\prime} \\ \\ \text{Low:} & ^{\backprime\backprime} X ^{\prime\prime} & ^{\backprime\backprime} X \times X ^{\prime\prime} \\ \end{array}\)

Within this framework, the more complex strait \(P \times Q\) can be expressed in terms of the simpler straits, \(P \times X\) and \(X \times Q.\) More specifically, it lends itself to being analyzed as their intersection, in the following way:

\(\begin{array}{lllll} P \times Q & = & P \times X & \cap & X \times Q. \\ \end{array}\)
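The analysis of \(P \times Q\) as an intersection of simpler straits can be checked directly; the particular sets below are arbitrary illustrations.

```python
from itertools import product

X = {1, 2, 3, 4}   # the universe of discourse
P = {1, 2}         # a typical subset P of X
Q = {2, 3}         # a typical subset Q of X

# The high-information strait P x Q coincides with the intersection
# of the two medium-information straits P x X and X x Q.
assert set(product(P, Q)) == set(product(P, X)) & set(product(X, Q))
```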

From here it is easy to see the relation of concatenation, by virtue of these types of intersection, to the logical conjunction of propositions. The cartesian product \(P \times Q\) is described by a conjunction of propositions, namely, \(P_{[1]} \land Q_{[2]},\) subject to the following interpretation:

  1. \(P_{[1]}\!\) asserts that there is an element from the set \(P\!\) in the first place of the product.
  2. \(Q_{[2]}\!\) asserts that there is an element from the set \(Q\!\) in the second place of the product.

The integration of these two pieces of information can be taken in that measure to specify a yet to be fully determined relation.

In a corresponding fashion at the level of the elements, the ordered pair \((p, q)\!\) is described by a conjunction of propositions, namely, \(p_{[1]} \land q_{[2]},\) subject to the following interpretation:

  1. \(p_{[1]}\!\) says that \(p\!\) is in the first place of the product element under construction.
  2. \(q_{[2]}\!\) says that \(q\!\) is in the second place of the product element under construction.

Notice that, in construing the cartesian product of the sets \(P\!\) and \(Q\!\) or the concatenation of the languages \(\mathfrak{L}_1\!\) and \(\mathfrak{L}_2\!\) in this way, one shifts the level of the active construction from the tupling of the elements in \(P\!\) and \(Q\!\) or the concatenation of the strings that are internal to the languages \(\mathfrak{L}_1\!\) and \(\mathfrak{L}_2\!\) to the concatenation of the external signs that it takes to indicate these sets or these languages, in other words, passing to a conjunction of indexed propositions, \(P_{[1]}\!\) and \(Q_{[2]},\!\) or to a conjunction of assertions, \((\mathfrak{L}_1)_{[1]}\) and \((\mathfrak{L}_2)_{[2]},\) that marks the sets or the languages in question for insertion in the indicated places of a product set or a product language, respectively. In effect, the subscripting by the indices \(^{\backprime\backprime} [1] ^{\prime\prime}\) and \(^{\backprime\backprime} [2] ^{\prime\prime}\) can be recognized as a special case of concatenation, albeit through the posting of editorial remarks from an external mark-up language.

In order to systematize the relations that strictures and straits placed at higher levels of complexity, constraint, information, and organization have with those that are placed at the associated lower levels, I introduce the following pair of definitions:

The \(j^\text{th}\!\) excerpt of a stricture of the form \(^{\backprime\backprime} \, S_1 \times \ldots \times S_k \, ^{\prime\prime},\) regarded within a frame of discussion where the number of places is limited to \(k,\!\) is the stricture of the form \(^{\backprime\backprime} \, X \times \ldots \times S_j \times \ldots \times X \, ^{\prime\prime}.\) In the proper context, this can be written more succinctly as the stricture \(^{\backprime\backprime} \, (S_j)_{[j]} \, ^{\prime\prime},\) an assertion that places the \(j^\text{th}\!\) set in the \(j^\text{th}\!\) place of the product.

The \(j^\text{th}\!\) extract of a strait of the form \(S_1 \times \ldots \times S_k,\!\) constrained to a frame of discussion where the number of places is restricted to \(k,\!\) is the strait of the form \(X \times \ldots \times S_j \times \ldots \times X.\) In the appropriate context, this can be denoted more succinctly by the stricture \(^{\backprime\backprime} \, (S_j)_{[j]} \, ^{\prime\prime},\) an assertion that places the \(j^\text{th}\!\) set in the \(j^\text{th}\!\) place of the product.

In these terms, a stricture of the form \(^{\backprime\backprime} \, S_1 \times \ldots \times S_k \, ^{\prime\prime}\) can be expressed in terms of simpler strictures, to wit, as a conjunction of its \(k\!\) excerpts:

\(\begin{array}{lll} ^{\backprime\backprime} \, S_1 \times \ldots \times S_k \, ^{\prime\prime} & = & ^{\backprime\backprime} \, (S_1)_{[1]} \, ^{\prime\prime} \, \land \, \ldots \, \land \, ^{\backprime\backprime} \, (S_k)_{[k]} \, ^{\prime\prime}. \end{array}\)

In a similar vein, a strait of the form \(S_1 \times \ldots \times S_k\!\) can be expressed in terms of simpler straits, namely, as an intersection of its \(k\!\) extracts:

\(\begin{array}{lll} S_1 \times \ldots \times S_k & = & (S_1)_{[1]} \, \cap \, \ldots \, \cap \, (S_k)_{[k]}. \end{array}\)
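The \(k\!\)-place analysis of a strait as the intersection of its extracts can be sketched as follows; the function names <code>extract</code> and <code>strait</code> are illustrative choices.

```python
from functools import reduce
from itertools import product

def extract(sets, j, universe):
    """The j-th extract of S_1 x ... x S_k : the strait X x ... x S_j x ... x X."""
    frame = [universe] * len(sets)
    frame[j] = sets[j]
    return set(product(*frame))

def strait(sets, universe):
    """Recover S_1 x ... x S_k as the intersection of its k extracts."""
    extracts = (extract(sets, j, universe) for j in range(len(sets)))
    return reduce(set.intersection, extracts)

X = {0, 1, 2}
S = [{0, 1}, {1}, {1, 2}]
assert strait(S, X) == set(product(*S))
```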

There is a measure of ambiguity that remains in this formulation, but it is the best that I can do in the present informal context.

===The Cactus Language : Mechanics===

{| align="center" cellpadding="0" cellspacing="0" width="90%"
|
<p>We are only now beginning to see how this works.  Clearly one of the mechanisms for picking a reality is the sociohistorical sense of what is important &mdash; which research program, with all its particularity of knowledge, seems most fundamental, most productive, most penetrating.  The very judgments which make us push narrowly forward simultaneously make us forget how little we know.  And when we look back at history, where the lesson is plain to find, we often fail to imagine ourselves in a parallel situation.  We ascribe the differences in world view to error, rather than to unexamined but consistent and internally justified choice.</p>
|-
| align="right" | &mdash; Herbert J. Bernstein, &ldquo;Idols of Modern Science&rdquo;, [HJB, 38]
|}

In this Subsection, I discuss the mechanics of parsing the cactus language into the corresponding class of computational data structures. This provides each sentence of the language with a translation into a computational form that articulates its syntactic structure and prepares it for automated modes of processing and evaluation. For this purpose, it is necessary to describe the target data structures at a fairly high level of abstraction only, ignoring the details of address pointers and record structures and leaving the more operational aspects of implementation to the imagination of prospective programmers. In this way, I can put off to another stage of elaboration and refinement the description of the program that constructs these pointers and operates on these graph-theoretic data structures.

The structure of a painted cactus, insofar as it presents itself to the visual imagination, can be described as follows. The overall structure, as given by its underlying graph, falls within the species of graph that is commonly known as a rooted cactus, and the only novel feature that it adds to this is that each of its nodes can be painted with a finite sequence of paints, chosen from a palette that is given by the parametric set \(\{ \, ^{\backprime\backprime} \operatorname{~} ^{\prime\prime} \, \} \cup \mathfrak{P} = \{ m_1 \} \cup \{ p_1, \ldots, p_k \}.\)

It is conceivable, from a purely graph-theoretical point of view, to have a class of cacti that are painted but not rooted, and so it is frequently necessary, for the sake of precision, to more exactly pinpoint the target species of graphical structure as a painted and rooted cactus (PARC).

A painted cactus, as a rooted graph, has a distinguished node that is called its root. By starting from the root and working recursively, the rest of its structure can be described in the following fashion.

Each node of a PARC consists of a graphical point or vertex plus a finite sequence of attachments, described in relative terms as the attachments at or to that node. An empty sequence of attachments defines the empty node. Otherwise, each attachment is one of three kinds: a blank, a paint, or a type of PARC that is called a lobe.

Each lobe of a PARC consists of a directed graphical cycle plus a finite sequence of accoutrements, described in relative terms as the accoutrements of or on that lobe. Recalling the circumstance that every lobe that comes under consideration comes already attached to a particular node, exactly one vertex of the corresponding cycle is the vertex that comes from that very node. The remaining vertices of the cycle have their definitions filled out according to the accoutrements of the lobe in question. An empty sequence of accoutrements is taken to be tantamount to a sequence that contains a single empty node as its unique accoutrement, and either one of these ways of approaching it can be regarded as defining a graphical structure that is called a needle or a terminal edge. Otherwise, each accoutrement of a lobe is itself an arbitrary PARC.

Although this definition of a lobe in terms of its intrinsic structural components is logically sufficient, it is also useful to characterize the structure of a lobe in comparative terms, that is, to view the structure that typifies a lobe in relation to the structures of other PARC's and to mark the inclusion of this special type within the general run of PARC's. This approach to the question of types results in a form of description that appears to be a bit more analytic, at least, in mnemonic or prima facie terms, if not ultimately more revealing. Working in this vein, a lobe can be characterized as a special type of PARC that is called an unpainted root plant (UR-plant).

An UR-plant is a PARC of a simpler sort, at least, with respect to the recursive ordering of structures that is being followed here. As a type, it is defined by the presence of two properties, that of being planted and that of having an unpainted root. These are defined as follows:

  1. A PARC is planted if its list of attachments has just one PARC.
  2. A PARC is UR if its list of attachments has no blanks or paints.

In short, an UR-planted PARC has a single PARC as its only attachment, and since this attachment is prevented from being a blank or a paint, the single attachment at its root has to be another sort of structure, that which we call a lobe.

To express the description of a PARC in terms of its nodes, each node can be specified in the fashion of a functional expression, letting a citation of the generic function name "\(\operatorname{Node}\)" be followed by a list of arguments that enumerates the attachments of the node in question, and letting a citation of the generic function name "\(\operatorname{Lobe}\)" be followed by a list of arguments that details the accoutrements of the lobe in question. Thus, one can write expressions of the following forms:

\(\begin{array}{llllll}
1. & \operatorname{Node}^0 & = & \operatorname{Node}() & = & \text{a node with no attachments}. \\
   & \operatorname{Node}_{j=1}^k C_j & = & \operatorname{Node} (C_1, \ldots, C_k) & = & \text{a node with the attachments}~ C_1, \ldots, C_k. \\
2. & \operatorname{Lobe}^0 & = & \operatorname{Lobe}() & = & \text{a lobe with no accoutrements}. \\
   & \operatorname{Lobe}_{j=1}^k C_j & = & \operatorname{Lobe} (C_1, \ldots, C_k) & = & \text{a lobe with the accoutrements}~ C_1, \ldots, C_k. \\
\end{array}\)
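These functional expressions suggest a minimal sketch of the target data structures, assuming Python dataclasses and representing blanks and paints as short strings; the class names follow the generic function names above.

```python
from dataclasses import dataclass, field

BLANK = " "   # the unpainted mark m_1

@dataclass
class Lobe:
    accoutrements: list = field(default_factory=list)   # each one a Node

@dataclass
class Node:
    attachments: list = field(default_factory=list)     # blanks, paints, or Lobes

empty_node = Node()   # Node() : a node with no attachments
cactus = Node([BLANK, "p1", Lobe([Node(), Node(["p2"])])])
```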

Working from a structural description of the cactus language, or any suitable formal grammar for \(\mathfrak{C} (\mathfrak{P}),\!\) it is possible to give a recursive definition of the function called \(\operatorname{Parse}\) that maps each sentence in \(\operatorname{PARCE} (\mathfrak{P})\!\) to the corresponding graph in \(\operatorname{PARC} (\mathfrak{P}).\!\) One way to do this proceeds as follows:

  1. The parse of the concatenation \(\operatorname{Conc}_{j=1}^k\) of the \(k\!\) sentences \((s_j)_{j=1}^k\) is defined recursively as follows:
    1. \(\operatorname{Parse} (\operatorname{Conc}^0) ~=~ \operatorname{Node}^0.\)
    2. For \(k > 0,\!\)

      \(\operatorname{Parse} (\operatorname{Conc}_{j=1}^k s_j) ~=~ \operatorname{Node}_{j=1}^k \operatorname{Parse} (s_j).\)

  2. The parse of the surcatenation \(\operatorname{Surc}_{j=1}^k\) of the \(k\!\) sentences \((s_j)_{j=1}^k\) is defined recursively as follows:
    1. \(\operatorname{Parse} (\operatorname{Surc}^0) ~=~ \operatorname{Lobe}^0.\)
    2. For \(k > 0,\!\)

      \(\operatorname{Parse} (\operatorname{Surc}_{j=1}^k s_j) ~=~ \operatorname{Lobe}_{j=1}^k \operatorname{Parse} (s_j).\)

For ease of reference, Table 13 summarizes the mechanics of these parsing rules.


{| align="center" border="1" cellpadding="8" cellspacing="0" width="70%"
|+ \(\text{Table 13.} ~~ \text{Algorithmic Translation Rules}\!\)
|-
| \(\text{Sentence in PARCE}\!\)
| \(\xrightarrow{\mathrm{Parse}}\!\)
| \(\text{Graph in PARC}\!\)
|-
| \(\mathrm{Conc}^0\!\)
| \(\xrightarrow{\mathrm{Parse}}\!\)
| \(\mathrm{Node}^0\!\)
|-
| \(\mathrm{Conc}_{j=1}^k s_j\!\)
| \(\xrightarrow{\mathrm{Parse}}\!\)
| \(\mathrm{Node}_{j=1}^k \mathrm{Parse} (s_j)\!\)
|-
| \(\mathrm{Surc}^0\!\)
| \(\xrightarrow{\mathrm{Parse}}\!\)
| \(\mathrm{Lobe}^0\!\)
|-
| \(\mathrm{Surc}_{j=1}^k s_j\!\)
| \(\xrightarrow{\mathrm{Parse}}\!\)
| \(\mathrm{Lobe}_{j=1}^k \mathrm{Parse} (s_j)\!\)
|}
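The translation rules above can be realized by a small recursive-descent parser, sketched here under the assumption that sentences arrive as ASCII strings in which <code>(</code>, <code>,</code>, <code>)</code> stand for the marks \(\underline{(} \, , \, \underline{,} \, , \, \underline{)}\) and every other character is a blank or a paint; the tuple encoding of nodes and lobes is an illustrative choice.

```python
def parse(sentence):
    """Map a painted cactus expression to a nested ("Node", ...) / ("Lobe", ...) tuple."""
    graph, _ = parse_conc(sentence, 0, stop=set())
    return graph

def parse_conc(s, i, stop):
    """Conc rule: a concatenation of items parses to a Node over their parses."""
    items = []
    while i < len(s) and s[i] not in stop:
        if s[i] == "(":
            lobe, i = parse_surc(s, i + 1)
            items.append(lobe)
        else:
            items.append(s[i])   # a blank or a paint is kept as a leaf
            i += 1
    return ("Node", items), i

def parse_surc(s, i):
    """Surc rule: a surcatenation (s_1, ..., s_k) parses to a Lobe over their parses."""
    args = []
    while True:
        arg, i = parse_conc(s, i, stop={",", ")"})
        args.append(arg)
        if i >= len(s) or s[i] == ")":
            return ("Lobe", args), i + 1
        i += 1                   # step over the comma
```

For example, <code>parse("")</code> yields \(\operatorname{Node}^0,\) matching the first rule, while an empty closure <code>parse("()")</code> yields a lobe on a single empty node, in accord with the convention adopted above for empty sequences of accoutrements.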


A substructure of a PARC is defined recursively as follows. Starting at the root node of the cactus \(C,\!\) any attachment is a substructure of \(C.\!\) If a substructure is a blank or a paint, then it constitutes a minimal substructure, meaning that no further substructures of \(C\!\) arise from it. If a substructure is a lobe, then each one of its accoutrements is also a substructure of \(C,\!\) and has to be examined for further substructures.

The concept of substructure can be used to define varieties of deletion and erasure operations that respect the structure of the abstract graph. For the purposes of this depiction, a blank symbol \(^{\backprime\backprime} ~ ^{\prime\prime}\) is treated as a primer, in other words, as a clear paint or a neutral tint. In effect, one is letting \(m_1 = p_0.\!\) In this frame of discussion, it is useful to make the following distinction:

  1. To delete a substructure is to replace it with an empty node, in effect, to reduce the whole structure to a trivial point.
  2. To erase a substructure is to replace it with a blank symbol, in effect, to paint it out of the picture or to overwrite it.

A bare PARC, loosely referred to as a bare cactus, is a PARC on the empty palette \(\mathfrak{P} = \varnothing.\) In other veins, a bare cactus can be described in several different ways, depending on how the form arises in practice.

  1. Leaning on the definition of a bare PARCE, a bare PARC can be described as the kind of parse graph that results from parsing a bare cactus expression, in other words, as the kind of graph that issues from the requirements of processing a sentence of the bare cactus language \(\mathfrak{C}^0 = \operatorname{PARCE}^0.\)
  2. To express it more in its own terms, a bare PARC can be defined by tracing the recursive definition of a generic PARC, but then by detaching an independent form of description from the source of that analogy. The method is sufficiently sketched as follows:
    1. A bare PARC is a PARC whose attachments are limited to blanks and bare lobes.
    2. A bare lobe is a lobe whose accoutrements are limited to bare PARC's.
  3. In practice, a bare cactus is usually encountered in the process of analyzing or handling an arbitrary PARC, the circumstances of which frequently call for deleting or erasing all of its paints. In particular, this generally makes it easier to observe the various properties of its underlying graphical structure.
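To illustrate the passage from an arbitrary cactus to a bare one, here is a sketch of an operation that erases every paint, assuming a PARC encoded as nested tuples \((\text{“Node”}, parts)\) or \((\text{“Lobe”}, parts)\) with blanks and paints as one-character strings; the encoding and the name <code>erase_paints</code> are illustrative.

```python
BLANK = " "

def erase_paints(cactus):
    """Overwrite every paint with a blank, leaving a bare cactus of the same shape."""
    kind, parts = cactus
    erased = []
    for part in parts:
        if isinstance(part, tuple):   # a node or lobe: recurse into its parts
            erased.append(erase_paints(part))
        else:                         # a blank stays a blank; a paint becomes one
            erased.append(BLANK)
    return (kind, erased)
```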

===The Cactus Language : Semantics===

{| align="center" cellpadding="0" cellspacing="0" width="90%"
|
<p>Alas, and yet what are you, my written and painted thoughts!  It is not long ago that you were still so many-coloured, young and malicious, so full of thorns and hidden spices you made me sneeze and laugh &mdash; and now?  You have already taken off your novelty and some of you, I fear, are on the point of becoming truths:  they already look so immortal, so pathetically righteous, so boring!</p>
|-
| align="right" | &mdash; Nietzsche, ''Beyond Good and Evil'', [Nie-2, &para; 296]
|}

In this Subsection, I describe a particular semantics for the painted cactus language, telling what meanings I aim to attach to its bare syntactic forms.  This supplies an interpretation for this parametric family of formal languages, but it is good to remember that it forms just one of many such interpretations that are conceivable and even viable.  Indeed, the distinction between the object domain and the sign domain can be observed in the fact that many languages can be deployed to depict the same set of objects and that any language worth its salt is bound to give rise to many different forms of interpretive saliency.

In formal settings, it is common to speak of interpretation as if it created a direct connection between the signs of a formal language and the objects of the intended domain, in other words, as if it determined the denotative component of a sign relation. But a closer attention to what goes on reveals that the process of interpretation is more indirect, that what it does is to provide each sign of a prospectively meaningful source language with a translation into an already established target language, where already established means that its relationship to pragmatic objects is taken for granted at the moment in question.

With this in mind, it is clear that interpretation is an affair of signs that at best respects the objects of all of the signs that enter into it, and so it is the connotative aspect of semiotics that is at stake here. There is nothing wrong with my saying that I interpret a sentence of a formal language as a sign that refers to a function or to a proposition, so long as you understand that this reference is likely to be achieved by way of more familiar and perhaps less formal signs that you already take to denote those objects.

On entering a context where a logical interpretation is intended for the sentences of a formal language there are a few conventions that make it easier to make the translation from abstract syntactic forms to their intended semantic senses. Although these conventions are expressed in unnecessarily colorful terms, from a purely abstract point of view, they do provide a useful array of connotations that help to negotiate what is otherwise a difficult transition. This terminology is introduced as the need for it arises in the process of interpreting the cactus language.

The task of this Subsection is to specify a semantic function for the sentences of the cactus language \(\mathfrak{L} = \mathfrak{C}(\mathfrak{P}),\) in other words, to define a mapping that "interprets" each sentence of \(\mathfrak{C}(\mathfrak{P})\) as a sentence that says something, as a sentence that bears a meaning, in short, as a sentence that denotes a proposition, and thus as a sign of an indicator function. When the syntactic sentences of a formal language are given a referent significance in logical terms, for example, as denoting propositions or indicator functions, then each form of syntactic combination takes on a corresponding form of logical significance.

By way of providing a logical interpretation for the cactus language, I introduce a family of operators on indicator functions that are called propositional connectives, and I distinguish these from the associated family of syntactic combinations that are called sentential connectives, where the relationship between these two realms of connection is exactly that between objects and their signs. A propositional connective, as an entity of a well-defined functional and operational type, can be treated in every way as a logical or a mathematical object, and thus as the type of object that can be denoted by the corresponding form of syntactic entity, namely, the sentential connective that is appropriate to the case in question.

There are two basic types of connectives, called the blank connectives and the bound connectives, respectively, with one connective of each type for each natural number \(k = 0, 1, 2, 3, \ldots.\)

  1. The blank connective of \(k\!\) places is signified by the concatenation of the \(k\!\) sentences that fill those places.

    For the special case of \(k = 0,\!\) the blank connective is taken to be an empty string or a blank symbol — it does not matter which, since both are assigned the same denotation among propositions.

    For the generic case of \(k > 0,\!\) the blank connective takes the form \(s_1 \cdot \ldots \cdot s_k.\) In the type of data that is called a text, the use of the center dot \((\cdot)\) is generally supplanted by whatever number of spaces and line breaks serve to improve the readability of the resulting text.

  2. The bound connective of \(k\!\) places is signified by the surcatenation of the \(k\!\) sentences that fill those places.

    For the special case of \(k = 0,\!\) the bound connective is taken to be an empty closure — an expression enjoying one of the forms \(\underline{(} \underline{)}, \, \underline{(} ~ \underline{)}, \, \underline{(} ~~ \underline{)}, \, \ldots\) with any number of blank symbols between the parentheses — all of which are assigned the same logical denotation among propositions.

    For the generic case of \(k > 0,\!\) the bound connective takes the form \(\underline{(} s_1, \ldots, s_k \underline{)}.\)
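The two connective forms above can be sketched as string builders. This is a minimal illustration, assuming ordinary parentheses and a comma as the bound connective's punctuation; the function names `blank_connective` and `bound_connective` are my own labels, not notation from the text.

```python
def blank_connective(sentences):
    """Concatenation: join k sentences with spaces.
    For k = 0 this yields the empty string, one of the
    allowed signs of the 0-place blank connective."""
    return " ".join(sentences)

def bound_connective(sentences):
    """Surcatenation: wrap k comma-separated sentences in parentheses.
    For k = 0 this yields the empty closure '()'."""
    return "(" + ", ".join(sentences) + ")"

# Generic cases for k = 2:
print(blank_connective(["s1", "s2"]))   # concatenation of two sentences
print(bound_connective(["s1", "s2"]))   # surcatenation of two sentences
```

In a real text, as noted above, the spaces standing in for the center dot could just as well be line breaks chosen for readability.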

At this point, there are actually two different dialects, scripts, or modes of presentation for the cactus language that need to be interpreted, in other words, that need to have a semantic function defined on their domains.

  1. There is the literal formal language of strings in \(\operatorname{PARCE} (\mathfrak{P}),\) the painted and rooted cactus expressions that constitute the language \(\mathfrak{L} = \mathfrak{C} (\mathfrak{P}) \subseteq \mathfrak{A}^* = (\mathfrak{M} \cup \mathfrak{P})^*.\)
  2. There is the figurative formal language of graphs in \(\operatorname{PARC} (\mathfrak{P}),\) the painted and rooted cacti themselves, a parametric family of graphs or a species of computational data structures that is graphically analogous to the language of literal strings.

Of course, these two modalities of formal language, like written and spoken natural languages, are meant to have compatible interpretations, and so it is usually sufficient to give just the meanings of either one. All that remains is to provide a codomain or a target space for the intended semantic function, in other words, to supply a suitable range of logical meanings for the memberships of these languages to map into. Out of the many interpretations that are formally possible to arrange, one way of doing this proceeds by making the following definitions:

  1. The conjunction \(\operatorname{Conj}_j^J q_j\) of a set of propositions, \(\{ q_j : j \in J \},\) is a proposition that is true if and only if every one of the \(q_j\!\) is true.

    \(\operatorname{Conj}_j^J q_j\) is true  \(\Leftrightarrow\)  \(q_j\!\) is true for every \(j \in J.\)

  2. The surjunction \(\operatorname{Surj}_j^J q_j\) of a set of propositions, \(\{ q_j : j \in J \},\) is a proposition that is true if and only if exactly one of the \(q_j\!\) is untrue.

    \(\operatorname{Surj}_j^J q_j\) is true  \(\Leftrightarrow\)  \(q_j\!\) is untrue for exactly one \(j \in J.\)

If the number of propositions that are being joined together is finite, then the conjunction and the surjunction can be represented by means of sentential connectives, incorporating the sentences that represent these propositions into finite strings of symbols.

If \(J\!\) is finite, for instance, if \(J\!\) consists of the integers in the interval \(j = 1 ~\text{to}~ k,\) and if each proposition \(q_j\!\) is represented by a sentence \(s_j,\!\) then the following strategies of expression are open:

  1. The conjunction \(\operatorname{Conj}_j^J q_j\) can be represented by a sentence that is constructed by concatenating the \(s_j\!\) in the following fashion:

    \(\operatorname{Conj}_j^J q_j ~\leftrightsquigarrow~ s_1 s_2 \ldots s_k.\)

  2. The surjunction \(\operatorname{Surj}_j^J q_j\) can be represented by a sentence that is constructed by surcatenating the \(s_j\!\) in the following fashion:

    \(\operatorname{Surj}_j^J q_j ~\leftrightsquigarrow~ \underline{(} s_1, s_2, \ldots, s_k \underline{)}.\)

If one opts for a mode of interpretation that moves more directly from the parse graph of a sentence to the potential logical meaning of both the PARC and the PARCE, then the following specifications are in order:

A cactus rooted at a particular node is taken to represent what that node denotes, its logical denotation or its logical interpretation.

  1. The logical denotation of a node is the logical conjunction of that node's arguments, which are defined as the logical denotations of that node's attachments. The logical denotation of either a blank symbol or an empty node is the boolean value \(\underline{1} = \operatorname{true}.\) The logical denotation of the paint \(\mathfrak{p}_j\!\) is the proposition \(p_j,\!\) a proposition that is regarded as primitive, at least, with respect to the level of analysis that is represented in the current instance of \(\mathfrak{C} (\mathfrak{P}).\)
  2. The logical denotation of a lobe is the logical surjunction of that lobe's arguments, which are defined as the logical denotations of that lobe's accoutrements. As a corollary, the logical denotation of the parse graph of \(\underline{(} \underline{)},\) otherwise called a needle, is the boolean value \(\underline{0} = \operatorname{false}.\)
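The two denotation rules can be realized as a recursive evaluator over parse trees. The nested-tuple encoding `("node", ...)`, `("lobe", ...)`, `("paint", name)` is an illustrative convention of mine, not the author's data structure; only the evaluation rules themselves come from the text.

```python
def denote(cactus, assignment):
    """Evaluate a painted and rooted cactus under a truth assignment
    mapping each paint name to a boolean value."""
    kind, arg = cactus
    if kind == "paint":
        return assignment[arg]          # primitive proposition p_j
    values = [denote(c, assignment) for c in arg]
    if kind == "node":
        return all(values)              # conjunction; empty node -> true
    if kind == "lobe":
        # surjunction: true just when exactly one argument is untrue
        return sum(1 for v in values if not v) == 1
    raise ValueError(f"unknown kind: {kind}")

# The needle, the parse graph of (), is a lobe with no attachments:
# no argument is untrue, so the "exactly one untrue" test fails.
needle = ("lobe", [])
assert denote(needle, {}) is False      # boolean value 0 = false
assert denote(("node", []), {}) is True # empty node: boolean value 1 = true
```

As a check on the corollary stated above, a lobe on two paints is true exactly when the paints disagree, which previews the exclusive-disjunction reading developed later in this section.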

If one takes the point of view that PARCs and PARCEs amount to a pair of intertranslatable languages for the same domain of objects, then denotation brackets of the form \(\downharpoonleft \ldots \downharpoonright\) can be used to indicate the logical denotation \(\downharpoonleft C_j \downharpoonright\) of a cactus \(C_j\!\) or the logical denotation \(\downharpoonleft s_j \downharpoonright\) of a sentence \(s_j.\!\)

Tables 14 and 15 summarize the relations that serve to connect the formal language of sentences with the logical language of propositions. Between these two realms of expression there is a family of graphical data structures that arise in parsing the sentences and that serve to facilitate the performance of computations on the indicator functions. The graphical language supplies an intermediate form of representation between the formal sentences and the indicator functions, and the form of mediation that it provides is very useful in rendering the possible connections between the other two languages conceivable in fact, not to mention in carrying out the necessary translations on a practical basis. These Tables include this intermediate domain in their Central Columns. Between their First and Middle Columns they illustrate the mechanics of parsing the abstract sentences of the cactus language into the graphical data structures of the corresponding species. Between their Middle and Final Columns they summarize the semantics of interpreting the graphical forms of representation for the purposes of reasoning with propositions.


\(\text{Table 14.} ~~ \text{Semantic Translation : Functional Form}\!\)
\(\mathrm{Sentence}\!\) \(\xrightarrow{\mathrm{Parse}}\!\) \(\mathrm{Graph}\!\) \(\xrightarrow{\mathrm{Denotation}}\!\) \(\mathrm{Proposition}\!\)
\(s_j\!\) \(\longrightarrow\!\) \(C_j\!\) \(\longrightarrow\!\) \(q_j\!\)
\(\mathrm{Conc}^0\!\) \(\longrightarrow\!\) \(\mathrm{Node}^0\!\) \(\longrightarrow\!\) \(\underline{1}\!\)
\(\mathrm{Conc}^k_j s_j\!\) \(\longrightarrow\!\) \(\mathrm{Node}^k_j C_j\!\) \(\longrightarrow\!\) \(\mathrm{Conj}^k_j q_j\!\)
\(\mathrm{Surc}^0\!\) \(\longrightarrow\!\) \(\mathrm{Lobe}^0\!\) \(\longrightarrow\!\) \(\underline{0}\!\)
\(\mathrm{Surc}^k_j s_j\!\) \(\longrightarrow\!\) \(\mathrm{Lobe}^k_j C_j\!\) \(\longrightarrow\!\) \(\mathrm{Surj}^k_j q_j\!\)


\(\text{Table 15.} ~~ \text{Semantic Translation : Equational Form}\!\)
\(\downharpoonleft \mathrm{Sentence} \downharpoonright\!\) \(\stackrel{\mathrm{Parse}}{=}\!\) \(\downharpoonleft \mathrm{Graph} \downharpoonright\!\) \(\stackrel{\mathrm{Denotation}}{=}\!\) \(\mathrm{Proposition}\!\)
\(\downharpoonleft s_j \downharpoonright\!\) \(=\!\) \(\downharpoonleft C_j \downharpoonright\!\) \(=\!\) \(q_j\!\)
\(\downharpoonleft \mathrm{Conc}^0 \downharpoonright\!\) \(=\!\) \(\downharpoonleft \mathrm{Node}^0 \downharpoonright\!\) \(=\!\) \(\underline{1}\!\)
\(\downharpoonleft \mathrm{Conc}^k_j s_j \downharpoonright\!\) \(=\!\) \(\downharpoonleft \mathrm{Node}^k_j C_j \downharpoonright\!\) \(=\!\) \(\mathrm{Conj}^k_j q_j\!\)
\(\downharpoonleft \mathrm{Surc}^0 \downharpoonright\!\) \(=\!\) \(\downharpoonleft \mathrm{Lobe}^0 \downharpoonright\!\) \(=\!\) \(\underline{0}\!\)
\(\downharpoonleft \mathrm{Surc}^k_j s_j \downharpoonright\!\) \(=\!\) \(\downharpoonleft \mathrm{Lobe}^k_j C_j \downharpoonright\!\) \(=\!\) \(\mathrm{Surj}^k_j q_j\!\)


Aside from their common topic, the two Tables present slightly different ways of conceptualizing the operations that go to establish their maps. Table 14 records the functional associations that connect each domain with the next, taking the triplings of a sentence \(s_j,\!\) a cactus \(C_j,\!\) and a proposition \(q_j\!\) as basic data, and fixing the rest by recursion on these. Table 15 records these associations in the form of equations, treating sentences and graphs as alternative kinds of signs, and generalizing the denotation bracket operator to indicate the proposition that either denotes. It should be clear at this point that either scheme of translation puts the sentences, the graphs, and the propositions that it associates with each other roughly in the roles of the signs, the interpretants, and the objects, respectively, whose triples define an appropriate sign relation. Indeed, the "roughly" can be made "exactly" as soon as the domains of a suitable sign relation are specified precisely.

A good way to illustrate the action of the conjunction and surjunction operators is to demonstrate how they can be used to construct the boolean functions on any finite number of variables. Let us begin by doing this for the first three cases, \(k = 0, 1, 2.\!\)

A boolean function \(F^{(0)}\!\) on \(0\!\) variables is just an element of the boolean domain \(\underline\mathbb{B} = \{ \underline{0}, \underline{1} \}.\) Table 16 shows several different ways of referring to these elements, just for the sake of consistency using the same format that will be used in subsequent Tables, no matter how degenerate it tends to appear in the initial case.


\(\text{Table 16.} ~~ \text{Boolean Functions on Zero Variables}\!\)
\(F\!\) \(F\!\) \(F()\!\) \(F\!\)
\(\underline{0}\!\) \(F_0^{(0)}\!\) \(\underline{0}\!\) \(\texttt{(~)}\!\)
\(\underline{1}\!\) \(F_1^{(0)}\!\) \(\underline{1}\!\) \(\texttt{((~))}\!\)


Column 1 lists each boolean element or boolean function under its ordinary constant name or under a succinct nickname, respectively.

Column 2 lists each boolean function in a style of function name \(F_j^{(k)}\!\) that is constructed as follows: The superscript \((k)\!\) gives the dimension of the functional domain, that is, the number of its functional variables, and the subscript \(j\!\) is a binary string that recapitulates the functional values, using the obvious translation of boolean values into binary values.

Column 3 lists the functional values for each boolean function, or possibly a boolean element appearing in the guise of a function, for each combination of its domain values.

Column 4 shows the usual expressions of these elements in the cactus language, conforming to the practice of omitting the underlines in display formats. Here I illustrate also the convention of using the expression \(^{\backprime\backprime} ((~)) ^{\prime\prime}\) as a visible stand-in for the expression of the logical value \(\operatorname{true},\) a value that is minimally represented by a blank expression that tends to elude our giving it much notice in the context of more demonstrative texts.

Table 17 presents the boolean functions on one variable, \(F^{(1)} : \underline\mathbb{B} \to \underline\mathbb{B},\) of which there are precisely four.


\(\text{Table 17.} ~~ \text{Boolean Functions on One Variable}\!\)
\(F\!\) \(F\!\) \(F(x)\!\) \(F\!\)
    \(F(\underline{1})\) \(F(\underline{0})\)  
\(F_0^{(1)}\!\) \(F_{00}^{(1)}\!\) \(\underline{0}\!\) \(\underline{0}\!\) \(\texttt{(~)}\!\)
\(F_1^{(1)}\!\) \(F_{01}^{(1)}\!\) \(\underline{0}\!\) \(\underline{1}\!\) \(\texttt{(} x \texttt{)}\!\)
\(F_2^{(1)}\!\) \(F_{10}^{(1)}~\!\) \(\underline{1}\!\) \(\underline{0}\!\) \(x\!\)
\(F_3^{(1)}\!\) \(F_{11}^{(1)}\!\) \(\underline{1}\!\) \(\underline{1}\!\) \(\texttt{((~))}\!\)


Here, Column 1 codes the contents of Column 2 in a more concise form, compressing the lists of boolean values, recorded as bits in the subscript string, into their decimal equivalents. Naturally, the boolean constants reprise themselves in this new setting as constant functions on one variable. Thus, one has the synonymous expressions for constant functions that are expressed in the next two chains of equations:

\(\begin{matrix} F_0^{(1)} & = & F_{00}^{(1)} & = & \underline{0} ~:~ \underline\mathbb{B} \to \underline\mathbb{B} \\ \\ F_3^{(1)} & = & F_{11}^{(1)} & = & \underline{1} ~:~ \underline\mathbb{B} \to \underline\mathbb{B} \end{matrix}\)

As for the rest, the other two functions are easily recognized as corresponding to the one-place logical connectives, or the monadic operators on \(\underline\mathbb{B}.\) Thus, the function \(F_1^{(1)} = F_{01}^{(1)}\) is recognizable as the negation operation, and the function \(F_2^{(1)} = F_{10}^{(1)}\) is obviously the identity operation.
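The subscript-as-truth-table scheme for Table 17 can be spelled out in a few lines. This is a sketch under my own naming conventions: `make_f1` builds \(F_j^{(1)}\) from the bit pair \((F(1), F(0))\) recorded in the binary subscript.

```python
def make_f1(bits):
    """bits = (F(1), F(0)); returns the boolean function F_j^(1)."""
    return lambda x: bits[0] if x else bits[1]

# Index each function by the decimal form of its binary subscript,
# exactly as Column 1 of Table 17 compresses Column 2.
F = {j: make_f1(((j >> 1) & 1, j & 1)) for j in range(4)}

# F_0 and F_3 are the constant functions; F_1 is negation; F_2 is identity.
assert all(F[0](x) == 0 for x in (0, 1))
assert all(F[3](x) == 1 for x in (0, 1))
assert all(F[1](x) == 1 - x for x in (0, 1))
assert all(F[2](x) == x for x in (0, 1))
```

The same construction scales directly to \(k\) variables, reading off \(2^k\) bits from the subscript, which is how the next Table's sixteen functions arise.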

Table 18 presents the boolean functions on two variables, \(F^{(2)} : \underline\mathbb{B}^2 \to \underline\mathbb{B},\) of which there are precisely sixteen.


\(\text{Table 18.} ~~ \text{Boolean Functions on Two Variables}\!\)
\(F\!\) \(F\!\) \(F(x, y)\!\) \(F\!\)
    \(F(\underline{1}, \underline{1})\) \(F(\underline{1}, \underline{0})\) \(F(\underline{0}, \underline{1})\) \(F(\underline{0}, \underline{0})\)  
\(F_{0}^{(2)}\!\) \(F_{0000}^{(2)}~\!\) \(\underline{0}\!\) \(\underline{0}\!\) \(\underline{0}\!\) \(\underline{0}\!\) \(\texttt{(~)}\!\)
\(F_{1}^{(2)}\!\) \(F_{0001}^{(2)}\!\) \(\underline{0}\!\) \(\underline{0}\!\) \(\underline{0}\!\) \(\underline{1}\!\) \(\texttt{(} x \texttt{)(} y \texttt{)}\!\)
\(F_{2}^{(2)}\!\) \(F_{0010}^{(2)}\!\) \(\underline{0}\!\) \(\underline{0}\!\) \(\underline{1}\!\) \(\underline{0}\!\) \(\texttt{(} x \texttt{)} y\!\)
\(F_{3}^{(2)}\!\) \(F_{0011}^{(2)}\!\) \(\underline{0}\!\) \(\underline{0}\!\) \(\underline{1}\!\) \(\underline{1}\!\) \(\texttt{(} x \texttt{)}\!\)
\(F_{4}^{(2)}\!\) \(F_{0100}^{(2)}\!\) \(\underline{0}\!\) \(\underline{1}\!\) \(\underline{0}\!\) \(\underline{0}\!\) \(x \texttt{(} y \texttt{)}\!\)
\(F_{5}^{(2)}\!\) \(F_{0101}^{(2)}\!\) \(\underline{0}\!\) \(\underline{1}\!\) \(\underline{0}\!\) \(\underline{1}\!\) \(\texttt{(} y \texttt{)}\!\)
\(F_{6}^{(2)}\!\) \(F_{0110}^{(2)}\!\) \(\underline{0}\!\) \(\underline{1}\!\) \(\underline{1}\!\) \(\underline{0}\!\) \(\texttt{(} x \texttt{,} y \texttt{)}\!\)
\(F_{7}^{(2)}\!\) \(F_{0111}^{(2)}\!\) \(\underline{0}\!\) \(\underline{1}\!\) \(\underline{1}\!\) \(\underline{1}\!\) \(\texttt{(} x y \texttt{)}\!\)
\(F_{8}^{(2)}\!\) \(F_{1000}^{(2)}\!\) \(\underline{1}\!\) \(\underline{0}\!\) \(\underline{0}\!\) \(\underline{0}\!\) \(x y\!\)
\(F_{9}^{(2)}\!\) \(F_{1001}^{(2)}\!\) \(\underline{1}\!\) \(\underline{0}\!\) \(\underline{0}\!\) \(\underline{1}\!\) \(\texttt{((} x \texttt{,} y \texttt{))}\!\)
\(F_{10}^{(2)}\!\) \(F_{1010}^{(2)}\!\) \(\underline{1}\!\) \(\underline{0}\!\) \(\underline{1}\!\) \(\underline{0}\!\) \(y\!\)
\(F_{11}^{(2)}\!\) \(F_{1011}^{(2)}\!\) \(\underline{1}\!\) \(\underline{0}\!\) \(\underline{1}\!\) \(\underline{1}\!\) \(\texttt{(} x \texttt{(} y \texttt{))}\!\)
\(F_{12}^{(2)}\!\) \(F_{1100}^{(2)}~\!\) \(\underline{1}\!\) \(\underline{1}\!\) \(\underline{0}\!\) \(\underline{0}\!\) \(x\!\)
\(F_{13}^{(2)}\!\) \(F_{1101}^{(2)}\!\) \(\underline{1}\!\) \(\underline{1}\!\) \(\underline{0}\!\) \(\underline{1}\!\) \(\texttt{((} x \texttt{)} y \texttt{)}\!\)
\(F_{14}^{(2)}\!\) \(F_{1110}^{(2)}\!\) \(\underline{1}\!\) \(\underline{1}\!\) \(\underline{1}\!\) \(\underline{0}\!\) \(\texttt{((} x \texttt{)(} y \texttt{))}\!\)
\(F_{15}^{(2)}\!\) \(F_{1111}^{(2)}\!\) \(\underline{1}\!\) \(\underline{1}\!\) \(\underline{1}\!\) \(\underline{1}\!\) \(\texttt{((~))}\!\)


As before, all of the boolean functions of fewer variables are subsumed in this Table, though under a set of alternative names and possibly different interpretations. Just to acknowledge a few of the more notable pseudonyms:

The constant function \(\underline{0} ~:~ \underline\mathbb{B}^2 \to \underline\mathbb{B}\) appears under the name \(F_{0}^{(2)}.\)
The constant function \(\underline{1} ~:~ \underline\mathbb{B}^2 \to \underline\mathbb{B}\) appears under the name \(F_{15}^{(2)}.\)
The negation and identity of the first variable are \(F_{3}^{(2)}\) and \(F_{12}^{(2)},\) respectively.
The negation and identity of the second variable are \(F_{5}^{(2)}\) and \(F_{10}^{(2)},\) respectively.
The logical conjunction is given by the function \(F_{8}^{(2)} (x, y) = x \cdot y.\)
The logical disjunction is given by the function \(F_{14}^{(2)} (x, y) = \underline{((} ~x~ \underline{)(} ~y~ \underline{))}.\)

Functions expressing the conditionals, implications, or if-then statements are given in the following ways:

\[[x \Rightarrow y] = F_{11}^{(2)} (x, y) = \underline{(} ~x~ \underline{(} ~y~ \underline{))} = [\operatorname{not}~ x ~\operatorname{without}~ y].\]

\[[x \Leftarrow y] = F_{13}^{(2)} (x, y) = \underline{((} ~x~ \underline{)} ~y~ \underline{)} = [\operatorname{not}~ y ~\operatorname{without}~ x].\]

The function that corresponds to the biconditional, the equivalence, or the if and only if statement is exhibited in the following fashion:

\[[x \Leftrightarrow y] = [x = y] = F_{9}^{(2)} (x, y) = \underline{((} ~x~,~y~ \underline{))}.\]

Finally, there is a boolean function that is logically associated with the exclusive disjunction, inequivalence, or not equals statement, algebraically associated with the binary sum operation, and geometrically associated with the symmetric difference of sets. This function is given by:

\[[x \neq y] = [x + y] = F_{6}^{(2)} (x, y) = \underline{(} ~x~,~y~ \underline{)}.\]
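The identifications just made can be verified by brute force over \(\underline\mathbb{B}^2.\) The sketch below, under my own naming, decodes each \(F_j^{(2)}\) from the four bits \(F(1,1)\, F(1,0)\, F(0,1)\, F(0,0)\) of its binary subscript, as in Table 18, and checks the named connectives.

```python
from itertools import product

def make_f2(j):
    """Boolean function F_j^(2) from its decimal subscript."""
    table = {(1, 1): (j >> 3) & 1, (1, 0): (j >> 2) & 1,
             (0, 1): (j >> 1) & 1, (0, 0): j & 1}
    return lambda x, y: table[(x, y)]

B2 = list(product((0, 1), repeat=2))

assert all(make_f2(8)(x, y) == (x and y) for x, y in B2)            # conjunction
assert all(make_f2(14)(x, y) == (x or y) for x, y in B2)            # disjunction
assert all(make_f2(11)(x, y) == (1 if not x or y else 0) for x, y in B2)  # x => y
assert all(make_f2(13)(x, y) == (1 if not y or x else 0) for x, y in B2)  # x <= y
assert all(make_f2(9)(x, y) == (1 if x == y else 0) for x, y in B2)  # x <=> y
assert all(make_f2(6)(x, y) == (x + y) % 2 for x, y in B2)           # x != y, x + y
```

The last line also exhibits the algebraic reading of \(F_6^{(2)}\) as addition in \(\operatorname{GF}(2),\) taken up again below.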

Let me now address one last question that may have occurred to some. What has happened, in this suggested scheme of functional reasoning, to the distinction so pointedly made by careful logicians between (1) the connectives called conditionals, symbolized by the signs \((\rightarrow)\) and \((\leftarrow),\) and (2) the assertions called implications, symbolized by the signs \((\Rightarrow)\) and \((\Leftarrow)\)? And, relatedly, what has happened to the distinction equally insistently made between (3) the connective called the biconditional, signified by the sign \((\leftrightarrow),\) and (4) the assertion called an equivalence, signified by the sign \((\Leftrightarrow)\)? My answer is this: I am deliberately avoiding these distinctions at the level of syntax, preferring to treat them instead as distinctions in the use of boolean functions, turning on whether the function is mentioned directly and used to compute values on arguments, or whether its inverse is invoked to indicate the fibers of truth or untruth under the propositional function in question.

===Stretching Exercises===

The arrays of boolean connections described above, namely, the boolean functions \(F^{(k)} : \underline\mathbb{B}^k \to \underline\mathbb{B},\) for \(k\!\) in \(\{ 0, 1, 2 \},\!\) supply enough material to demonstrate the use of the stretch operation in a variety of concrete cases.

For example, suppose that \(F\!\) is a connection of the form \(F : \underline\mathbb{B}^2 \to \underline\mathbb{B},\) that is, any one of the sixteen possibilities in Table 18, while \(p\!\) and \(q\!\) are propositions of the form \(p, q : X \to \underline\mathbb{B},\) that is, propositions about things in the universe \(X,\!\) or else the indicators of sets contained in \(X.\!\)

Then one has the imagination \(\underline{f} = (f_1, f_2) = (p, q) : (X \to \underline\mathbb{B})^2,\) and the stretch of the connection \(F\!\) to \(\underline{f}\!\) on \(X\!\) amounts to a proposition \(F^\$ (p, q) : X \to \underline\mathbb{B}\) that may be read as the stretch of \(F\!\) to \(p\!\) and \(q.\!\) If one is concerned with many different propositions about things in \(X,\!\) or if one is abstractly indifferent to the particular choices for \(p\!\) and \(q,\!\) then one may detach the operator \(F^\$ : (X \to \underline\mathbb{B})^2 \to (X \to \underline\mathbb{B}),\) called the stretch of \(F\!\) over \(X,\!\) and consider it in isolation from any concrete application.

When the cactus notation is used to represent boolean functions, a single \(\$\) sign at the end of the expression is enough to remind the reader that the connections are meant to be stretched to several propositions on a universe \(X.\!\)

For example, take the connection \(F : \underline\mathbb{B}^2 \to \underline\mathbb{B}\) such that:

\[F(x, y) ~=~ F_{6}^{(2)} (x, y) ~=~ \underline{(}~x~,~y~\underline{)}\!\]

The connection in question is a boolean function on the variables \(x, y\!\) that returns a value of \(\underline{1}\) just when just one of the pair \(x, y\!\) is not equal to \(\underline{1},\) or what amounts to the same thing, just when just one of the pair \(x, y\!\) is equal to \(\underline{1}.\) There is clearly an isomorphism between this connection, viewed as an operation on the boolean domain \(\underline\mathbb{B} = \{ \underline{0}, \underline{1} \},\) and the dyadic operation on binary values \(x, y \in \mathbb{B} = \operatorname{GF}(2)\!\) that is otherwise known as \(x + y.\!\)

The same connection \(F : \underline\mathbb{B}^2 \to \underline\mathbb{B}\) can also be read as a proposition about things in the universe \(X = \underline\mathbb{B}^2.\) If \(s\!\) is a sentence that denotes the proposition \(F,\!\) then the corresponding assertion says exactly what one states in uttering the sentence \(^{\backprime\backprime} \, x ~\operatorname{is~not~equal~to}~ y \, ^{\prime\prime}.\) In such a case, one has \(\downharpoonleft s \downharpoonright \, = F,\) and all of the following expressions are ordinarily taken as equivalent descriptions of the same set:

\(\begin{array}{lll} [| \downharpoonleft s \downharpoonright |] & = & [| F |] \\[6pt] & = & F^{-1} (\underline{1}) \\[6pt] & = & \{~ (x, y) \in \underline\mathbb{B}^2 ~:~ s ~\} \\[6pt] & = & \{~ (x, y) \in \underline\mathbb{B}^2 ~:~ F(x, y) = \underline{1} ~\} \\[6pt] & = & \{~ (x, y) \in \underline\mathbb{B}^2 ~:~ F(x, y) ~\} \\[6pt] & = & \{~ (x, y) \in \underline\mathbb{B}^2 ~:~ \underline{(}~x~,~y~\underline{)} = \underline{1} ~\} \\[6pt] & = & \{~ (x, y) \in \underline\mathbb{B}^2 ~:~ \underline{(}~x~,~y~\underline{)} ~\} \\[6pt] & = & \{~ (x, y) \in \underline\mathbb{B}^2 ~:~ x ~\operatorname{exclusive~or}~ y ~\} \\[6pt] & = & \{~ (x, y) \in \underline\mathbb{B}^2 ~:~ \operatorname{just~one~true~of}~ x, y ~\} \\[6pt] & = & \{~ (x, y) \in \underline\mathbb{B}^2 ~:~ x ~\operatorname{not~equal~to}~ y ~\} \\[6pt] & = & \{~ (x, y) \in \underline\mathbb{B}^2 ~:~ x \nLeftrightarrow y ~\} \\[6pt] & = & \{~ (x, y) \in \underline\mathbb{B}^2 ~:~ x \neq y ~\} \\[6pt] & = & \{~ (x, y) \in \underline\mathbb{B}^2 ~:~ x + y ~\}. \end{array}\)

Notice the distinction, that I continue to maintain at this point, between the logical values \(\{ \operatorname{falsehood}, \operatorname{truth} \}\) and the algebraic values \(\{ 0, 1 \}.\!\) This makes it legitimate to write a sentence directly into the righthand side of a set-builder expression, for instance, weaving the sentence \(s\!\) or the sentence \(^{\backprime\backprime} \, x ~\operatorname{is~not~equal~to}~ y \, ^{\prime\prime}\) into the context \(^{\backprime\backprime} \, \{ (x, y) \in \underline\mathbb{B}^2 : \ldots \} \, ^{\prime\prime},\) thereby obtaining the corresponding expressions listed above. It also allows us to assert the proposition \(F(x, y)\!\) in a more direct way, without detouring through the equation \(F(x, y) = \underline{1},\) since it already has a value in \(\{ \operatorname{falsehood}, \operatorname{truth} \},\) and thus can be taken as tantamount to an actual sentence.

If the appropriate safeguards can be kept in mind, avoiding all danger of confusing propositions with sentences and sentences with assertions, then the marks of these distinctions need not be forced to clutter the account of the more substantive indications, that is, the ones that really matter. If this level of understanding can be achieved, then it may be possible to relax these restrictions, along with the absolute dichotomy between algebraic and logical values, which tends to inhibit the flexibility of interpretation.

This covers the properties of the connection \(F(x, y) = \underline{(}~x~,~y~\underline{)},\) treated as a proposition about things in the universe \(X = \underline\mathbb{B}^2.\) Staying with this same connection, it is time to demonstrate how it can be "stretched" to form an operator on arbitrary propositions.

To continue the exercise, let \(p\!\) and \(q\!\) be arbitrary propositions about things in the universe \(X,\!\) that is, maps of the form \(p, q : X \to \underline\mathbb{B},\) and suppose that \(p, q\!\) are indicator functions of the sets \(P, Q \subseteq X,\) respectively. In other words, we have the following data:

\(\begin{matrix} p & = & \upharpoonleft P \upharpoonright & : & X \to \underline\mathbb{B} \\ \\ q & = & \upharpoonleft Q \upharpoonright & : & X \to \underline\mathbb{B} \\ \\ (p, q) & = & (\upharpoonleft P \upharpoonright, \upharpoonleft Q \upharpoonright) & : & (X \to \underline\mathbb{B})^2 \\ \end{matrix}\)

Then one has an operator \(F^\$,\) the stretch of the connection \(F\!\) over \(X,\!\) and a proposition \(F^\$ (p, q),\) the stretch of \(F\!\) to \((p, q)\!\) on \(X,\!\) with the following properties:

\(\begin{array}{ccccl} F^\$ & = & \underline{(} \ldots, \ldots \underline{)}^\$ & : & (X \to \underline\mathbb{B})^2 \to (X \to \underline\mathbb{B}) \\ \\ F^\$ (p, q) & = & \underline{(}~p~,~q~\underline{)}^\$ & : & X \to \underline\mathbb{B} \\ \end{array}\)

As a result, the application of the proposition \(F^\$ (p, q)\) to each \(x \in X\) returns a logical value in \(\underline\mathbb{B},\) all in accord with the following equations:

\(\begin{matrix} F^\$ (p, q)(x) & = & \underline{(}~p~,~q~\underline{)}^\$ (x) & \in & \underline\mathbb{B} \\ \\ \Updownarrow & & \Updownarrow \\ \\ F(p(x), q(x)) & = & \underline{(}~p(x)~,~q(x)~\underline{)} & \in & \underline\mathbb{B} \\ \end{matrix}\)

For each choice of propositions \(p\!\) and \(q\!\) about things in \(X,\!\) the stretch of \(F\!\) to \(p\!\) and \(q\!\) on \(X\!\) is just another proposition about things in \(X,\!\) a simple proposition in its own right, no matter how complex its current expression or its present construction as \(F^\$ (p, q) = \underline{(}~p~,~q~\underline{)}^\$\) makes it appear in relation to \(p\!\) and \(q.\!\) Like any other proposition about things in \(X,\!\) it indicates a subset of \(X,\!\) namely, the fiber that is variously described in the following ways:

\(\begin{array}{lll} [| F^\$ (p, q) |] & = & [| \underline{(}~p~,~q~\underline{)}^\$ |] \\[6pt] & = & (F^\$ (p, q))^{-1} (\underline{1}) \\[6pt] & = & \{~ x \in X ~:~ F^\$ (p, q)(x) ~\} \\[6pt] & = & \{~ x \in X ~:~ \underline{(}~p~,~q~\underline{)}^\$ (x) ~\} \\[6pt] & = & \{~ x \in X ~:~ \underline{(}~p(x)~,~q(x)~\underline{)} ~\} \\[6pt] & = & \{~ x \in X ~:~ p(x) + q(x) ~\} \\[6pt] & = & \{~ x \in X ~:~ p(x) \neq q(x) ~\} \\[6pt] & = & \{~ x \in X ~:~ \upharpoonleft P \upharpoonright (x) ~\neq~ \upharpoonleft Q \upharpoonright (x) ~\} \\[6pt] & = & \{~ x \in X ~:~ x \in P ~\nLeftrightarrow~ x \in Q ~\} \\[6pt] & = & \{~ x \in X ~:~ x \in P\!-\!Q ~\operatorname{or}~ x \in Q\!-\!P ~\} \\[6pt] & = & \{~ x \in X ~:~ x \in P\!-\!Q ~\cup~ Q\!-\!P ~\} \\[6pt] & = & \{~ x \in X ~:~ x \in P + Q ~\} \\[6pt] & = & P + Q ~\subseteq~ X \\[6pt] & = & [|p|] + [|q|] ~\subseteq~ X \end{array}\)
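The stretch operation and the fiber computation above can be sketched as a higher-order function. This is a minimal illustration under my own names: `stretch` builds \(F^\$\) from a connection \(F,\) `p` and `q` are indicator functions of hypothetical sets \(P, Q\) in a small universe \(X,\) and the fiber of truth comes out as the symmetric difference \(P + Q.\)

```python
def stretch(F):
    """Lift a boolean connection F on B^k to an operator F^$ on
    propositions X -> B, so that F^$(p, ...)(x) = F(p(x), ...)."""
    def F_stretched(*props):
        return lambda x: F(*(p(x) for p in props))
    return F_stretched

xor = lambda a, b: (a + b) % 2          # the connection F_6^(2)

X = range(10)                            # a small universe, for illustration
p = lambda x: 1 if x in {1, 2, 3} else 0 # indicator of P = {1, 2, 3}
q = lambda x: 1 if x in {3, 4} else 0    # indicator of Q = {3, 4}

f = stretch(xor)(p, q)                   # the proposition F^$(p, q) : X -> B
fiber = {x for x in X if f(x)}           # [| F^$(p, q) |] = f^(-1)(1)
assert fiber == {1, 2, 4}                # the symmetric difference P + Q
```

Applying `f` pointwise reproduces the equations above: the value at each \(x\) is just \(\underline{(}~p(x)~,~q(x)~\underline{)},\) and collecting the points of truth yields \([|p|] + [|q|] \subseteq X.\)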

==References==

  • Bernstein, Herbert J. (1987), "Idols of Modern Science and The Reconstruction of Knowledge", pp. 37–68 in Marcus G. Raskin and Herbert J. Bernstein (eds.), New Ways of Knowing : The Sciences, Society, and Reconstructive Knowledge, Rowman and Littlefield, Totowa, NJ.
  • Denning, P.J., Dennis, J.B., and Qualitz, J.E. (1978), Machines, Languages, and Computation, Prentice-Hall, Englewood Cliffs, NJ.
  • Nietzsche, Friedrich, Beyond Good and Evil : Prelude to a Philosophy of the Future, R.J. Hollingdale (trans.), Michael Tanner (intro.), Penguin Books, London, UK, 1973, 1990.
  • Raskin, Marcus G., and Bernstein, Herbert J. (1987, eds.), New Ways of Knowing : The Sciences, Society, and Reconstructive Knowledge, Rowman and Littlefield, Totowa, NJ.

==Document History==

The Cactus Patch

| Subject:  Inquiry Driven Systems : An Inquiry Into Inquiry
| Contact:  Jon Awbrey
| Version:  Draft 8.70
| Created:  23 Jun 1996
| Revised:  06 Jan 2002
| Advisor:  M.A. Zohdy
| Setting:  Oakland University, Rochester, Michigan, USA
| Excerpt:  Section 1.3.10 (Recurring Themes)
| Excerpt:  Subsections 1.3.10.8 - 1.3.10.13

Aug 2000 • Extensions Of Logical Graphs

CG List • Lost Links

  1. http://www.virtual-earth.de/CG/cg-list/old/msg03351.html
  2. http://www.virtual-earth.de/CG/cg-list/old/msg03352.html
  3. http://www.virtual-earth.de/CG/cg-list/old/msg03353.html
  4. http://www.virtual-earth.de/CG/cg-list/old/msg03354.html
  5. http://www.virtual-earth.de/CG/cg-list/old/msg03376.html
  6. http://www.virtual-earth.de/CG/cg-list/old/msg03379.html
  7. http://www.virtual-earth.de/CG/cg-list/old/msg03381.html

CG List • New Archive

  1. http://web.archive.org/web/20030723202219/http://mars.virtual-earth.de/pipermail/cg/2000q3/003592.html
  2. http://web.archive.org/web/20030723202341/http://mars.virtual-earth.de/pipermail/cg/2000q3/003593.html
  3. http://web.archive.org/web/20030723202516/http://mars.virtual-earth.de/pipermail/cg/2000q3/003595.html

CG List • Old Archive

  1. http://web.archive.org/web/20020321115639/http://www.virtual-earth.de/CG/cg-list/msg03352.html
  2. http://web.archive.org/web/20020321120331/http://www.virtual-earth.de/CG/cg-list/msg03354.html
  3. http://web.archive.org/web/20020321223131/http://www.virtual-earth.de/CG/cg-list/msg03376.html
  4. http://web.archive.org/web/20020129134132/http://www.virtual-earth.de/CG/cg-list/msg03381.html

Sep 2000 • Zeroth Order Logic

  1. http://web.archive.org/web/20070705085032/http://suo.ieee.org/email/msg01246.html
  2. http://web.archive.org/web/20080905054059/http://suo.ieee.org/email/msg01251.html
  3. http://web.archive.org/web/20070223033521/http://suo.ieee.org/email/msg01380.html
  4. http://web.archive.org/web/20070705085032/http://suo.ieee.org/email/msg01406.html
  5. http://web.archive.org/web/20070705085032/http://suo.ieee.org/email/msg01546.html
  6. http://web.archive.org/web/20070705085032/http://suo.ieee.org/email/msg01561.html
  7. http://web.archive.org/web/20070705085032/http://suo.ieee.org/email/msg01670.html
  8. http://web.archive.org/web/20070705085032/http://suo.ieee.org/email/msg01966.html
  9. http://web.archive.org/web/20070705085032/http://suo.ieee.org/email/msg01985.html
  10. http://web.archive.org/web/20070401102902/http://suo.ieee.org/email/msg01988.html

Oct 2000 • All Liar, No Paradox

  1. http://web.archive.org/web/20070705085032/http://suo.ieee.org/email/msg01739.html

Nov 2000 • Sowa's Top Level Categories

What Language To Use

  1. http://web.archive.org/web/20070320012929/http://suo.ieee.org/email/msg01956.html

Zeroth Order Logic

  1. http://web.archive.org/web/20070320012940/http://suo.ieee.org/email/msg01966.html

TLC In KIF

  1. http://web.archive.org/web/20081204195421/http://suo.ieee.org/ontology/msg00048.html
  2. http://web.archive.org/web/20070320014557/http://suo.ieee.org/ontology/msg00051.html

Dec 2000 • Sequential Interactions Generating Hypotheses

  1. http://web.archive.org/web/20070705085032/http://suo.ieee.org/email/msg02607.html
  2. http://web.archive.org/web/20070705085032/http://suo.ieee.org/email/msg02608.html
  3. http://web.archive.org/web/20070705085032/http://suo.ieee.org/email/msg03183.html

Jan 2001 • Differential Analytic Turing Automata

DATA • Arisbe List

  1. http://web.archive.org/web/20061013224128/http://stderr.org/pipermail/arisbe/2001-January/000182.html
  2. http://web.archive.org/web/20061013224814/http://stderr.org/pipermail/arisbe/2001-January/000200.html

DATA • Ontology List

  1. http://web.archive.org/web/20041021223934/http://suo.ieee.org/ontology/msg00596.html
  2. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg00618.html

Mar 2001 • Propositional Equation Reasoning Systems

PERS • Arisbe List

  1. http://web.archive.org/web/20150107210011/http://stderr.org/pipermail/arisbe/2001-March/000380.html
  2. http://web.archive.org/web/20050920031758/http://stderr.org/pipermail/arisbe/2001-April/000407.html
  3. http://web.archive.org/web/20051202010243/http://stderr.org/pipermail/arisbe/2001-April/000409.html
  4. http://web.archive.org/web/20051202074355/http://stderr.org/pipermail/arisbe/2001-April/000411.html
  5. http://web.archive.org/web/20051202021217/http://stderr.org/pipermail/arisbe/2001-April/000412.html
  6. http://web.archive.org/web/20051201225716/http://stderr.org/pipermail/arisbe/2001-April/000413.html
  7. http://web.archive.org/web/20051202001736/http://stderr.org/pipermail/arisbe/2001-April/000416.html
  8. http://web.archive.org/web/20051202053817/http://stderr.org/pipermail/arisbe/2001-April/000417.html
  9. http://web.archive.org/web/20051202013458/http://stderr.org/pipermail/arisbe/2001-April/000421.html
  10. http://web.archive.org/web/20051202013024/http://stderr.org/pipermail/arisbe/2001-April/000427.html
  11. http://web.archive.org/web/20051202032812/http://stderr.org/pipermail/arisbe/2001-April/000428.html
  12. http://web.archive.org/web/20051201225109/http://stderr.org/pipermail/arisbe/2001-April/000430.html
  13. http://web.archive.org/web/20050908023250/http://stderr.org/pipermail/arisbe/2001-April/000432.html
  14. http://web.archive.org/web/20051202002952/http://stderr.org/pipermail/arisbe/2001-April/000433.html
  15. http://web.archive.org/web/20051201220336/http://stderr.org/pipermail/arisbe/2001-April/000434.html
  16. http://web.archive.org/web/20050906215058/http://stderr.org/pipermail/arisbe/2001-April/000435.html

PERS • Arisbe List • Discussion

  1. http://web.archive.org/web/20150107212003/http://stderr.org/pipermail/arisbe/2001-April/000397.html

PERS • Ontology List

  1. http://web.archive.org/web/20070326233418/http://suo.ieee.org/ontology/msg01779.html
  2. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg01897.html
  3. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg02005.html
  4. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg02011.html
  5. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg02014.html
  6. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg02015.html
  7. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg02024.html
  8. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg02046.html
  9. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg02047.html
  10. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg02068.html
  11. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg02102.html
  12. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg02109.html
  13. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg02117.html
  14. http://web.archive.org/web/20040116001230/http://suo.ieee.org/ontology/msg02125.html
  15. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg02128.html
  16. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg02134.html
  17. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg02138.html

PERS • SUO List

  1. http://web.archive.org/web/20140423181000/http://suo.ieee.org/email/msg04187.html
  2. http://web.archive.org/web/20070922193822/http://suo.ieee.org/email/msg04305.html
  3. http://web.archive.org/web/20071007170752/http://suo.ieee.org/email/msg04413.html
  4. http://web.archive.org/web/20070121063018/http://suo.ieee.org/email/msg04419.html
  5. http://web.archive.org/web/20070705085032/http://suo.ieee.org/email/msg04422.html
  6. http://web.archive.org/web/20070305132316/http://suo.ieee.org/email/msg04423.html
  7. http://web.archive.org/web/20070705085032/http://suo.ieee.org/email/msg04432.html
  8. http://web.archive.org/web/20070705085032/http://suo.ieee.org/email/msg04454.html
  9. http://web.archive.org/web/20070705085032/http://suo.ieee.org/email/msg04455.html
  10. http://web.archive.org/web/20070705085032/http://suo.ieee.org/email/msg04476.html
  11. http://web.archive.org/web/20060718091105/http://suo.ieee.org/email/msg04510.html
  12. http://web.archive.org/web/20070705085032/http://suo.ieee.org/email/msg04517.html
  13. http://web.archive.org/web/20070705085032/http://suo.ieee.org/email/msg04525.html
  14. http://web.archive.org/web/20070705085032/http://suo.ieee.org/email/msg04533.html
  15. http://web.archive.org/web/20070705085032/http://suo.ieee.org/email/msg04536.html
  16. http://web.archive.org/web/20070705085032/http://suo.ieee.org/email/msg04542.html
  17. http://web.archive.org/web/20050824231950/http://suo.ieee.org/email/msg04546.html

Jul 2001 • Reflective Extension Of Logical Graphs

RefLog • Arisbe List

  1. http://web.archive.org/web/20150109141000/http://stderr.org/pipermail/arisbe/2001-July/000711.html

RefLog • SUO List

  1. http://web.archive.org/web/20070705085032/http://suo.ieee.org/email/msg05694.html

Dec 2001 • Functional Conception Of Quantificational Logic

FunLog • Arisbe List

  1. http://web.archive.org/web/20141005034614/http://stderr.org/pipermail/arisbe/2001-December/001212.html
  2. http://web.archive.org/web/20141005034615/http://stderr.org/pipermail/arisbe/2001-December/001213.html
  3. http://web.archive.org/web/20051202034557/http://stderr.org/pipermail/arisbe/2001-December/001216.html
  4. http://web.archive.org/web/20051202074331/http://stderr.org/pipermail/arisbe/2001-December/001221.html
  5. http://web.archive.org/web/20051201235028/http://stderr.org/pipermail/arisbe/2001-December/001222.html
  6. http://web.archive.org/web/20051202052037/http://stderr.org/pipermail/arisbe/2001-December/001223.html
  7. http://web.archive.org/web/20050827214411/http://stderr.org/pipermail/arisbe/2001-December/001224.html
  8. http://web.archive.org/web/20051202092500/http://stderr.org/pipermail/arisbe/2001-December/001225.html
  9. http://web.archive.org/web/20051202051942/http://stderr.org/pipermail/arisbe/2001-December/001226.html
  10. http://web.archive.org/web/20050425140213/http://stderr.org/pipermail/arisbe/2001-December/001227.html

FunLog • Ontology List

  1. http://web.archive.org/web/20110608022546/http://suo.ieee.org/ontology/msg03562.html
  2. http://web.archive.org/web/20110608022712/http://suo.ieee.org/ontology/msg03563.html
  3. http://web.archive.org/web/20110608023312/http://suo.ieee.org/ontology/msg03564.html
  4. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03565.html
  5. http://web.archive.org/web/20070812011325/http://suo.ieee.org/ontology/msg03569.html
  6. http://web.archive.org/web/20110608023228/http://suo.ieee.org/ontology/msg03570.html
  7. http://web.archive.org/web/20110608022616/http://suo.ieee.org/ontology/msg03568.html
  8. http://web.archive.org/web/20110608023557/http://suo.ieee.org/ontology/msg03572.html
  9. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03577.html
  10. http://web.archive.org/web/20070317021141/http://suo.ieee.org/ontology/msg03578.html
  11. http://web.archive.org/web/20110608021549/http://suo.ieee.org/ontology/msg03579.html
  12. http://web.archive.org/web/20110608021332/http://suo.ieee.org/ontology/msg03580.html
  13. http://web.archive.org/web/20110608020250/http://suo.ieee.org/ontology/msg03581.html
  14. http://web.archive.org/web/20110608021344/http://suo.ieee.org/ontology/msg03582.html
  15. http://web.archive.org/web/20110608021557/http://suo.ieee.org/ontology/msg03583.html
  16. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg04247.html

Dec 2001 • Cactus Language

Cactus Town Cartoons • Arisbe List

  1. http://web.archive.org/web/20050825005438/http://stderr.org/pipermail/arisbe/2001-December/001214.html
  2. http://web.archive.org/web/20051202101235/http://stderr.org/pipermail/arisbe/2001-December/001217.html

Cactus Town Cartoons • Ontology List

  1. http://web.archive.org/web/20110608023426/http://suo.ieee.org/ontology/msg03567.html
  2. http://web.archive.org/web/20110608024449/http://suo.ieee.org/ontology/msg03571.html

Jan 2002 • Zeroth Order Theories

ZOT • Arisbe List

  1. http://web.archive.org/web/20150109042401/http://stderr.org/pipermail/arisbe/2002-January/001293.html
  2. http://web.archive.org/web/20150109042402/http://stderr.org/pipermail/arisbe/2002-January/001294.html
  3. http://web.archive.org/web/20050503213326/http://stderr.org/pipermail/arisbe/2002-January/001295.html
  4. http://web.archive.org/web/20050503213330/http://stderr.org/pipermail/arisbe/2002-January/001296.html
  5. http://web.archive.org/web/20050504070444/http://stderr.org/pipermail/arisbe/2002-January/001299.html
  6. http://web.archive.org/web/20050504070430/http://stderr.org/pipermail/arisbe/2002-January/001300.html
  7. http://web.archive.org/web/20050504070700/http://stderr.org/pipermail/arisbe/2002-January/001301.html
  8. http://web.archive.org/web/20050504070704/http://stderr.org/pipermail/arisbe/2002-January/001302.html
  9. http://web.archive.org/web/20050504070712/http://stderr.org/pipermail/arisbe/2002-January/001304.html
  10. http://web.archive.org/web/20050504070717/http://stderr.org/pipermail/arisbe/2002-January/001305.html
  11. http://web.archive.org/web/20050504070722/http://stderr.org/pipermail/arisbe/2002-January/001306.html
  12. http://web.archive.org/web/20050504070726/http://stderr.org/pipermail/arisbe/2002-January/001308.html
  13. http://web.archive.org/web/20050504070730/http://stderr.org/pipermail/arisbe/2002-January/001309.html
  14. http://web.archive.org/web/20050504070434/http://stderr.org/pipermail/arisbe/2002-January/001310.html
  15. http://web.archive.org/web/20050504070742/http://stderr.org/pipermail/arisbe/2002-January/001313.html
  16. http://web.archive.org/web/20050504070746/http://stderr.org/pipermail/arisbe/2002-January/001314.html
  17. http://web.archive.org/web/20050504070438/http://stderr.org/pipermail/arisbe/2002-January/001315.html
  18. http://web.archive.org/web/20050504070540/http://stderr.org/pipermail/arisbe/2002-January/001316.html
  19. http://web.archive.org/web/20050504070750/http://stderr.org/pipermail/arisbe/2002-January/001317.html

ZOT • Arisbe List • Discussion

  1. http://web.archive.org/web/20050503213334/http://stderr.org/pipermail/arisbe/2002-January/001297.html
  2. http://web.archive.org/web/20050504070656/http://stderr.org/pipermail/arisbe/2002-January/001298.html
  3. http://web.archive.org/web/20050504070708/http://stderr.org/pipermail/arisbe/2002-January/001303.html
  4. http://web.archive.org/web/20050504070544/http://stderr.org/pipermail/arisbe/2002-January/001307.html
  5. http://web.archive.org/web/20050504070734/http://stderr.org/pipermail/arisbe/2002-January/001311.html
  6. http://web.archive.org/web/20050504070738/http://stderr.org/pipermail/arisbe/2002-January/001312.html
  7. http://web.archive.org/web/20050504070755/http://stderr.org/pipermail/arisbe/2002-January/001318.html

ZOT • Ontology List

  1. http://web.archive.org/web/20070323210742/http://suo.ieee.org/ontology/msg03680.html
  2. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03681.html
  3. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03682.html
  4. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03683.html
  5. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03691.html
  6. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03693.html
  7. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03695.html
  8. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03696.html
  9. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03701.html
  10. http://web.archive.org/web/20070329211521/http://suo.ieee.org/ontology/msg03702.html
  11. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03703.html
  12. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03706.html
  13. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03707.html
  14. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03708.html
  15. http://web.archive.org/web/20080620074722/http://suo.ieee.org/ontology/msg03712.html
  16. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03715.html
  17. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03716.html
  18. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03717.html
  19. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03718.html
  20. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03721.html
  21. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03722.html
  22. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03723.html
  23. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03724.html

ZOT • Ontology List • Discussion

  1. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03684.html
  2. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03685.html
  3. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03686.html
  4. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03687.html
  5. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03689.html
  6. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03690.html
  7. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03694.html
  8. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03697.html
  9. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03698.html
  10. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03699.html
  11. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03700.html
  12. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03704.html
  13. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03705.html
  14. http://web.archive.org/web/20070330093628/http://suo.ieee.org/ontology/msg03709.html
  15. http://web.archive.org/web/20080705071714/http://suo.ieee.org/ontology/msg03710.html
  16. http://web.archive.org/web/20080620010020/http://suo.ieee.org/ontology/msg03711.html
  17. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03713.html
  18. http://web.archive.org/web/20080620074749/http://suo.ieee.org/ontology/msg03714.html
  19. http://web.archive.org/web/20061005100254/http://suo.ieee.org/ontology/msg03719.html
  20. http://web.archive.org/web/20070705085032/http://suo.ieee.org/ontology/msg03720.html

Mar 2003 • Theme One Program • Logical Cacti

  1. http://web.archive.org/web/20081007043317/http://stderr.org/pipermail/inquiry/2003-March/000114.html
  2. http://web.archive.org/web/20080908075558/http://stderr.org/pipermail/inquiry/2003-March/000115.html
  3. http://web.archive.org/web/20080908080336/http://stderr.org/pipermail/inquiry/2003-March/000116.html

Feb 2005 • Theme One Program • Logical Cacti

  1. http://web.archive.org/web/20150109152359/http://stderr.org/pipermail/inquiry/2005-February/002360.html
  2. http://web.archive.org/web/20150109152401/http://stderr.org/pipermail/inquiry/2005-February/002361.html
  3. http://web.archive.org/web/20061013233259/http://stderr.org/pipermail/inquiry/2005-February/002362.html
  4. http://web.archive.org/web/20081121103109/http://stderr.org/pipermail/inquiry/2005-February/002363.html