
By using the term ecology, I mean the study of the interaction of people with their environment: the environment of human awareness and knowledge.

I think that most people feel that they are aware of their surroundings.  Psychologists say that because you feel as though you are aware, you assume everyone else is more or less as aware as you.  Unfortunately, it does not turn out that way.  A host of ill effects follow, because there seems to be very little in our awareness that even a few people can agree upon.  The lack of shared knowledge and shared intelligence has an effect on the level, type or kind of awareness in societies of people everywhere.  If everyone were (explicitly) conscious of one and the same thing, then we could say that everyone is conscious of such and such.  But we cannot make such a statement or claim in this day and age.  A day and age of modern communications, computers and “open information,” mind you.

Nonetheless, human beings are modelers in this world or environment: we build or construct models of it that suit or satisfy us, either by explaining or by predicting the circumstances in which we find ourselves.  I should say that I take it for granted that there are both good and bad models.  I want to introduce you to a good model of the organism of intelligence (mentioned in my last post) that each of us uses, even though most of us are not very conscious of it.  I expect that anyone can tell a good model from a bad one.  A good model is one that stirs or moves your awareness. It affects you in such a way that you are disposed, obliged even, to pay closer attention and to think more exactly about someone or something; it is one that warrants becoming more aware of it, conscious of it, learning it: ultimately using it for enlightenment and for gain.

A model M is equivalent to a knowledge K. M=K because we employ models in making predictions about certain attributes just as we employ our knowledge. The term “attribute” is used here as a noun in an ordinary way to signify a quality or feature regarded as a characteristic or inherent part of someone or something. Every environment has attributes that are characteristic of it.

For example, the ecosystem is an environment that has the attributes of air, water, earth and fire. The goal is to find just those attributes (and no more) that are enough to quantify the valuable or significant changes that make a substantial difference, that affect our surroundings in some way. That is, to generate or induce knowledge and awareness we must perform a transformation: we must transform (what is recognized to be) an attribute of the environment into a personal or individual affect. That may sound strange, so let me explain it a little further.

In the case of the ecosystem, the attributes air, water, earth and fire can affect us, and one might readily imagine how the presence or absence of water or air can induce different states of mind. In any case, they may be the cause of some serious condition that could affect any one of us; imagine the situation where there is no air to breathe. This quality makes air a good attribute of this environment (the ecosystem) because we can readily imagine and predict how we could be affected given some arbitrary change in the situation. But are these attributes sufficient, and all that is necessary, to predict all possible changes in the environment that might affect us?

Imagine now how difficult it must be for scientists, for anyone, to build a model of the environment of human knowledge, awareness and consciousness. In some circles of research, that is what AI and AGI engineers are trying, and have been trying, to do. It is true the engineers and programmers have not been up to the daunting task. Yet that does not diminish the fact that it is what needs to be done in order to produce an AGI; after all, we need to be able to model our own situational awareness.  By doing so, we may become better equipped to anticipate and reduce the effects of unwanted and harmful eventualities of which many people are all too aware.

For example, economists create models of economies with certain attributes and premises. For better or worse, this is done in order to deduce conclusions about possible eventualities. Economic models are useful as tools for judging which alternative outcomes seem reasonable or likely. In such cases the model is being used for prediction. Thus the model is part of some knowledge about the environment.

The model embodies the knowledge because it is itself a capacity for prediction. Thus, a model M can be considered to be fully equivalent to a knowledge. Therefore we can assume here that a model is synonymous with a knowledge. More specifically, it appears that a qualitatively relative definition of knowledge is warranted: “A Knowledge K is a capacity to predict the value which an attribute of the environment will take under specified conditions of context.”

Now let’s talk about people (sapients) and frame a model of their environment, that is, the environment of their awareness, the environment of which they are aware (sapient). We can assume that everyone’s awareness changes in regular and predictable ways and that each person has some knowledge that allows them to predict the value of attributes in their own awareness. Here, as you see, an awareness is equivalent to the environment in which we abide. We are intuitively surrounded by, or abiding in, the environs of sapience.

Before I begin the example, let me reveal that I have a knowledge of the attributes of a denotative awareness that includes and subsumes all possible connotative environments. I will say there are eleven attributes to this environment of awareness, but I will introduce only two of them, which we call “Self” and “Others,” in this example. Like all the attributes of this rather explicit awareness, these two attributes, Self and Others, correspond with the real entities and their activities, self and others, in the world of ordinary affairs and situations. I am using only these two in order to keep the explanation simple and real, and because that is all that is necessary to demonstrate the meaning of intelligence, which I will now define as: the organism or mechanism of the attributes of the environment to affect awareness.

So, to be clear, I am not going to give the complete specification of that organism or mechanism here, but I will show you how two of the attributes of the environment I have clearly in mind “affect” both my predictions and yours.  Incidentally, let me also define a “mind” as a (psychical) state space (e.g. abstract and mental space).  So we begin with an assertion: Besides my own self, there are others in my environment; the environment in which I exist and of which I am aware.

I embody the organism we call intelligence (as do you), and I have a knowledge K to predict that the value of a single measurement of the attribute Others, equivalent to and connotative of “wife,” will be Gloria, in case I am asked about it. This prediction is observed to be a transformation of the state space of the attribute Others, just like the state space of the attribute Self.  Under the specified conditions, and in the context of my own environment, the state space is transformed by my own knowledge K to be equivalent to my name = Ken. Under the same specified conditions of context, the connotative context “my wife” is connected to the denotative context (observable yet normally left tacit or unemphasized) by taking successive measurements (e.g. making interpretations) of these explicitly shared attributes of the environment of my awareness. I believe that, once taken in, that much ought to become clear and self-evident; that is, I take it as being axiomatic.
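To make the prediction idea concrete, here is a minimal sketch in Python. The dictionary-and-function form is my own illustration, not a formal specification of the organism; it only reuses the names Self, Others, Ken and Gloria from the example above.

    from typing import Dict, Tuple

    # A knowledge maps (attribute, context) -> the value the attribute is predicted to take.
    Knowledge = Dict[Tuple[str, str], str]

    K: Knowledge = {
        ("Self", "my name"): "Ken",       # a measurement of the attribute Self
        ("Others", "my wife"): "Gloria",  # a measurement of the attribute Others
    }

    def predict(k: Knowledge, attribute: str, context: str) -> str:
        """Return the value this knowledge predicts for the attribute under the given context."""
        return k.get((attribute, context), "unknown")

    print(predict(K, "Others", "my wife"))  # Gloria
    print(predict(K, "Self", "my name"))    # Ken

The point of the sketch is only that a knowledge, in this sense, behaves like a capacity: give it an attribute and a condition of context, and it yields a predicted value.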

I can also predict that additional measurements of the attributes Self and Others will yield different values, equivalent to the connotative appearance of several other self-organized entities, things or activities that become salient to my own environment from time to time. In this way (and only in this way) my knowledge K is different from your knowledge T. It is peculiar to my thoughts and perceptions in the context of the environment situated where I live, i.e., to my awareness of that environment. You will have a similar situation: your own “context” (the particular circumstances that form the setting for an event, statement, or idea, and in terms of which it can be fully understood and assessed) of the environment of your own awareness. We don’t know each other’s knowledge or awareness. We (may only implicitly) know and share the explicit attributes of such a (sapient) awareness.

That is to say that I live in the same environment (of general awareness and sapience) as you. And I have a knowledge K of Self and Others, as attributes of this environment, the relevance or significance of which you may only now be becoming aware of. Both Self and Others are clearly attributes in our shared awareness. In fact, they are attributes of a universal environment for Homo sapiens. Remember that a knowledge T, K, …, or M is a capacity to predict the value which an attribute of the environment will take under specified conditions of context. Everyone has their own name, knowledge (whether implicit or explicit) and their own conditions of context. This is the private knowledge held inside them and perhaps also by relatives and friends.

Now we are able to make some observations and see some of the implications that flow from what has been stated above. We can intuit, for instance, that a wholesome knowledge K is evidenced whenever an organism produces information or reduces a priori uncertainty about its environment. I realize this is incomplete, though it demonstrates that (connotative and social) knowledge, text and all computer data are synthesized from (the transformation of) valid attributes A, which cannot be construed as being contained in or patterned by (computer) data or by modern language.

Any invariant or regular and unitary attribute A (whereby individuals are distinguished) ought be seen as a continuity, to be treated as valid and used as a handy and trustworthy rubric for making or producing transformations (in the state space of a mind) applied in a context of the environment.  Each measurement produces a single valuation, which could be the same or different at any moment and from place to place, only appearing to be impossibly chaotic or complex.  For those who understand such things, such an attribute may be considered a correspondence.  This correspondence may be formalized as a functional mapping of the form A: Θ → Θ, where Θ is the (denotative) state space of the environment mapped to the (connotative) state space of the environment.  We found more than a dozen types or configurations of functional mappings that are applied in variant connotative contexts.
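Purely as an illustration of the form A: Θ → Θ, here is a toy rendering in Python. The particular mapping below is a made-up example reusing the Self/Others names from earlier, not one of the dozen-plus configurations of functional mappings mentioned above.

    from typing import Callable, Dict

    State = Dict[str, str]                    # a rough stand-in for a point in the state space Θ
    AttributeMap = Callable[[State], State]   # an attribute as a mapping A: Θ -> Θ

    def others_as_wife(denotative: State) -> State:
        """Map a denotative state (what is observed) to a connotative reading of it."""
        connotative = dict(denotative)
        if denotative.get("Others") == "Gloria":
            connotative["Others"] = "my wife"  # connotative value of the same attribute
        return connotative

    A: AttributeMap = others_as_wife
    print(A({"Self": "Ken", "Others": "Gloria"}))
    # {'Self': 'Ken', 'Others': 'my wife'}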

So, to conclude: an environment of human awareness can be understood simply as the denotative and connotative surroundings and conditions in which the organism of the attributes (and capacity of independent awareness with a knowledge) operates, is asserted and is applied.

The good news is that now that we know that it is the organism of the attributes of the environment of awareness, consciousness, that is both explicit and universal (not connotative belief,  knowledge or perception or conception –which are all relatively defined) we can get down to resolving differences while  accommodating everyone.  To be specific, we can seek better understanding and control over perceptual and conceptual states of awareness in a decidedly invariant environment (awareness) of continuous change, where intelligence is any organism or mechanism of the attributes of an environment that affects such awareness and consciousness.

______

We can also define semantics as the correspondence of both the denotative and connotative states of conception to the set of all possible functions given the attributes of the environment.  Now, if you want to know more details you will need to put me on a retainer and pay me.

The world is lacking an operational definition of intelligence that can lead to more exact thinking and to computer systems that help people to think more clearly and effectively. A good operational definition ought to be:

  1. Specific enough to be implemented as a procedure, one that can be easily and readily followed.
  2. Motivational, manageable and measurable, such that it leads to invention, progress and successful outcomes.
  3. Attainable, such that any baby can use the organism to sense and control entities and activities in its world or environment.
  4. Relevant, in that it is determinate of what is to become significant;
  5. Timely; and
  6. Salient.

This definition (stated below) addresses two questions:

  • Where do we get the intelligence to deal with a growing, changing reality?
  • How does intelligence work to make changes in our favor?

Most researchers agree that human intelligence is observed in behavior, in particular in language and through speech acts. The Sapir-Whorf theory of linguistic relativity was summarized by the semanticist Stuart Chase when he stated:

“First, that all higher levels of thinking are dependent on language. Second, that the structure of the language one habitually uses influences the manner in which one understands his environment. The picture of the universe shifts from tongue to tongue.”

Restating this linguistic theory as a systems theory and in terms of analytic and computational engineering, notational engineer Jeffry Long wrote:

“First, that all abstract thinking is dependent upon the existence or invention of notational systems. Second, that the underlying ontological inventions of the notational system one habitually uses influences the manner in which one understands his environment. Acquiring literacy in a major notation causes us to add a new dimension to our picture of the universe.”

Based on twenty-seven years of intimate experience, I can restate Tammam Adi’s theory of semantics based on Classical Arabic, in this way:

First, living in the world is a growing, expanding experience or (ontogenic) process in which we make things (speech, nouns, names; things, artifacts, etc.). The words of language are made of abstract structures referencing bits or segments of this growing/making reality that we construct and utilize for common edification and understanding.

Second, the growth in common and social sense, along with modern languages, rests on ontogenetic intelligence in the organism of mind and on the success of its notational system: its elementary (ontogenic) processes and semantic rules, and its recognizable symbols (e.g. alphabets) and system of writing. Collectively, we call these “ontological inventions” for making progress.

Thirdly, word structure is composed of clusters or configurations of ontological inventions involving and representing both real and abstract entities and activities, arranged in such a way as to be productive of understanding (of making sense, meaning, things).

With Adi’s theory of (algebraic, axiomatic) semantics, it is possible to specify the ordinary conditions and ontogenic controls of sapience in the following concise and formalized way:

There is a self-organizing mechanism (regulating schemata) comprising:

  1. the polarity of an abstract entity, representing engagement conditions, (G) distinguishing the involvement and participation of oneself and others, (G={Self, Others}) in;
  2. a symmetrical relationship (R) crossing the polarity of an abstract procedure, representing an ontogenic orientation and boundary conditions
    (T={Open, Closed}, and R = T × G) for;
  3. a set of invariant and elementary processes
    (P={assignment, manifestation, containment}) being structured by the abstract entity, using the polar procedure for growing and making (sense, understanding, artifacts etc.) and;
  4. which schematic arrangement of such entities and activities generates symbolic and semantic operations (syntactically) carried out or produced (i.e. interpreted) by enacting them (via speech-acts, etc.).

We call this intelligence and we say: “Intelligence is the organism of a mind uniting (abstract and real) entities and activities in such a way that they are productive of regular changes from the beginning until the end.”
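As a reading aid, here is a small sketch, assuming nothing beyond what the schema above states, that writes the named sets out as Python data so the product structure R = T × G can be inspected. The semantic rules that operate over these sets are not reproduced here.

    from itertools import product

    G = {"Self", "Others"}                               # engagement conditions
    T = {"Open", "Closed"}                               # boundary conditions
    R = set(product(T, G))                               # symmetrical relationship R = T x G
    P = {"assignment", "manifestation", "containment"}   # elementary processes

    print(sorted(R))
    # [('Closed', 'Others'), ('Closed', 'Self'), ('Open', 'Others'), ('Open', 'Self')]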


The Semantics of Qualia

The Wikipedia entry defines qualia thus:

Qualia (/ˈkwɑːliə/ or /ˈkweɪliə/; singular form: quale, Latin pronunciation: [ˈkwaːle]) is a term used in philosophy to refer to individual instances of subjective conscious experience. The term derives from a Latin word meaning “what sort” or “what kind.” Examples of qualia are the pain of a headache, the taste of wine, the experience of taking a recreational drug, or the perceived redness of an evening sky.
One might argue, on this evidence, that the definition applies only to some subjective qualities of a macro and external experience, while the most subjective experience of the organism must be, can only be, the internally generated experience of the individual self.  The quale of inner experience cannot be a “macro” quantity, symbol or component such as the amount of pain, or even the word, or the uncountable shades of red.  I can personally attest that one may know pain without also knowing how to interpersonally express or symbolize it.
 
I am not alone in believing that “qualia,” if it be an identifiable sort or kind of particular salient to awareness and consciousness, must instead be a micro, molecular or morphogenetic quantity representable in an associative network of firmly grounded states (grounded in physical laws and causality).
 
I am aware, for example, that my own inner experience is conditioned by the homeostasis of the structure and function of my central nervous system: not only the brain, but the brain and its sensors along with the metabolism.  The objects of my inner experience are felt and reflected upon because I am emotionally invested in being here and now and in being me (the present particular “I am”).
 
This emotional investment (from which one feels things) forms a feedback loop caused by the modal transformations of exogenous matters of the ecosystem and interpersonal realities into the conscious endogenous energy of self-realized experience.  It ought go without saying that I am also emotionally invested in the modern social world, (I have been raised with an American and interpersonal worldview) and I am socially, professionally and politically engaged in interactions with others.
 
A worldview is more than just a belief, opinion or perspective.  A worldview is a framework of ideas and beliefs through which an individual, group or culture interpret their conditions of existence.  I have more recently been developing the idea that modern ethnographic worldview is not an invention or a construction, rather it is an expression of poiesis: a creation or production of that which is named by the combining roots of organism.  
 
The expression of which must be seen in light of both morphogenic and “ontogenic” properties in that there are a set of semantic rules that govern ontogenesis (i.e., growth of the morphogenic fields of language from the simple to far more complex forms of expression).  The macro field of “human reality” is seen as an expression of this biogenic field of organismic poiesis, rather than as a social, cultural, literary or political construct, or any other ethnographic construction.  
 
Poietic semantics operates (in intelligent people) by unifying and focusing intuitive cognitive processes (onto rudimentary elements and operations of poiesis and organismic function) and by regulating interprocess interactions and individual (endogenous semiotic) rulemaking.  From more than thirty years of personal experience, I can vouch for the idea that the uptake, adoption and retention of a poietic worldview affects associative thinking in intelligent people: it anchors them; it gives them an objective and transformative hand-hold in a sea of assumptions.
 
A poietic worldview engenders (in its learner) an exactness in the immediate conception of the elements and operations of poiesis (i.e. it is a concretion of Daniel Kahneman’s System 1 type of thinking; it is not AI, nor analytic/reductionist). It synthesizes the components, elements and influences of associative thinking, making such thinking that much more concrete and reliable.
 
Here is a short video overview I prepared recently that can be shared and downloaded.  
 

If words are just labels with arbitrary meaning and if everything is relative and all theories are tolerated, what is there left to hold societies, or anything else, together? What is there to keep it all from flying apart: God, love, magnetism? Given security we cannot afford, without any certainty, and with only our assumptions in our pockets: what is the future of mutual understanding?

How can we transcend words, definitions and extensions of words, and more words, definitions and extensions of words, and both real and petty arguments caught in endless loops?  Like the philosopher Edmund Husserl, I believe that beneath the changing flux of human experience and awareness there are certain invariant structures of consciousness.  Such structures appear to be ethnographically grounded and necessary to mutual understanding between members of a society.

We may inquire into what is necessary to the achievement of a mutual understanding.  We can begin by allowing that human “understanding” is a collection of mental and physical (psychophysical) and ethnographically-oriented processes grounded in cognates originating in and concerning human nature.

Why “cognates” instead of “words”; isn’t that just another word? A cognate may be defined as any one of a number of objects or entities allied in origin or nature. Irrespective of their grammatical role in language, for example, the entities referred to as “self” and “others” are functional examples of “psychophysical cognates” that find their origins in thought (i.e. as abstractions) and in all the evolutionary conditions of human nature; essentially, in all of what matters to human beings.

It is nothing other than the salience or relevance of self and others to the domain of human knowledge that remains invariant. Some may scoff at this at first reading, finding it silly to claim that the salience of self’s and others’ knowledge is what makes it significant; but this is more profound than that. It is not a cop-out either, as we have a precise and thoroughly tested and published mathematical model that holds out promise of being a sound basis for an ethnomethodological framework.

Relevant to this essay is the fact that a language includes the apparatus for composing and encoding knowledge, and for recording the extensions of invariant cognates of knowledge and understanding and their salient configurations as they are decoded, adopted, retained, rearranged, reassessed, redeveloped (or changed) over time. An important observation is that knowledge is not reduced to salience or relevance, it becomes so. This is why reductionist methods of AI have not and will not work.

Reductionism is characterized by dissection and separation of parts. Linguists, for example, dissect language into parts of speech and parse or translate the grammar of each sentence in a language into true/false assertions and propositions. When my colleague, computer scientist Tom Adi, and I began his investigation into language in the early 1980s, we came at it from the point of view of what language actually accomplishes.

This approach to investigating language provided findings that support an ethnographic philosophy of language. As such, it is concerned with four central problems: the nature of preserving meaning and transmitting knowledge, the use of language to accomplish such goals, language cognition, and the relationship between language, information and reality. According to Adi’s theory, a language can be defined as a unity: a synthesis of cognates confirmed in experience by the matters at hand:

  1. A language is the image or projection of a synthesis of cognates in a domain of interactions by which the comprehensive recognition of perceptible objects and sensible (and successful) activities becomes possible; leading to awareness and general consciousness.
  2. In human languages, such cognates are represented by phonemes (in spoken languages) and morphemes (in written ones). These are used to compose and order the knowledge necessary to consciousness and mutual understanding.

This ethnographic view of language is my own philosophy synthesized from my understanding, practice and work with Adi’s theories and scientific observations, axioms and propositions, i.e. his new “science of relevancy” (i.e., a way to analyze the relations in a domain of knowledge representations (given language as inputs, e.g. text) to determine something relevant (the output) to matters at hand). As such it is concerned with four central problems: the nature of meaning and making sense, the purpose of language use, language cognition, and the relationship between language, information and reality.

Adi and I did not make use of reductionist techniques or materialistic or linguistic dogma in arriving at Adi’s axioms and propositions or in developing applicable computer models and algorithms, mainly because it was simply not applicable.  This is obviously a quite different philosophy of language than what is generally accepted today. It is not the way language is normally approached and studied by linguists, psychologists and logicians.

We take it as self-evident that human language, which ranges over a domain of human interactions, augments (adds to) human knowledge and expands cognition, awareness and consciousness (mutual understanding). Perhaps you do too? As I was there to learn of Adi’s theory first hand, I can personally attest to this claim.

Using a few propositions and a selected procedure from Adi’s theory, I am able to induce and explain the sensibility and perceptibility of expressions in languages of which I am not a speaker. Because all human language must range over this domain of interactions, any language can be translated into any other language ranging over the same domain. When Adi and I started working together in the early eighties, that was our hypothesis; it is what we set out to investigate, beginning in the area of developing automatic language translation systems.

We noticed that there were large differences in the ways translators and interpreters choose their words and use alternative phrases and idioms. Their choices (which they often kept on closely guarded index cards) appeared to be based on subtle though perceptible differences in translating the reference and sensibility of a text or message along with the words.

Looking to the linguistic literature, we found that there is no procedural or computational theory of such ethnomethodological practices among translators, or in the classical traditions of interpretation. Language translation on computers is largely about word for word or phrase for phrase replacement. It tries to make the output sensible but it is emphatically not about making matters perceptible –that is left to people checking the machine translation. This sense of making matters perceptible also characterizes the difference between what interpreters and translators do. We had as an objective the development of intelligent technology and we wanted our system to be like interpreters –making matters more perceptible and sensible.

We decided to begin our own search for a theory of meaning that could be a foundation for translating not only the language people use but what the people actually mean to say. Having the capacity to speak and converse in a dozen languages helped in understanding the ethnographic and “knowledge representation” problem, and being a computer scientist made Tom Adi uniquely capable of performing this semantic study. I commissioned the completion of the study.

Thus, Adi did not begin his investigation into language from the point of view of the parts of speech or grammar. Having a sound idea of what language actually accomplishes (i.e. it helps us synthesize knowledge in our domain of interactions; it helps us make sense of natural processes, objects and events), Adi sought to determine exactly how that is accomplished. We began looking for a language in which to begin a study, because how this synthesis is accomplished must be determined, with all possible precision, empirically, i.e. by way of observation and experiment.

Neither of us had any preference for which language became the basis of the study, yet, based on discussions with linguists and colleagues, we held out this idea of “a perfect text” as an exemplar to begin with: a perfect text is one in which the language and the meaning it conveys is perfectly clear and completely unambiguous. Adi expected to find natural laws using such a text. He initially focused his efforts onto exactly how language might be synthesized according to natural laws and in light of Einstein’s relativity and the modern standard model of physics.

Adi studied textbooks about the analysis of the nature of the hydrogen atom, speculations over the smooth surface of water, and texts on chemical bonding as well, reading the summaries and overviews written by Russian physicists and German chemists. He reasoned that language must behave in ways similar to atoms, in the way they bond and form new or changed bonds in chemical reactions.  He felt that smoothness at the surface of water and the continuity in language must be related, and he reflected on ways to establish this fact, i.e. he sought processes that are somehow similar to the operations and laws of particle physics and chemistry. This means language is to be seen as a natural rather than a social phenomenon.  The processes of atomic and chemical bonding drive biophysical processes, and the operation or behavior of all of it is well defined according to natural laws.

Adi began looking for natural laws and processes that somehow regulate the ways people use language to interpret something and unify it in their own awareness and intuition, and he found them. His observations and experiments were later published (in Semiotics and Intelligent Systems Development, 2007) as a set of proofs to a semantic theory of ancient Arabic. By the time of that publication, we had already rendered the theory into an axiomatic model of language and we further developed in situ methods and computer algorithms for synthesizing ethnographic “knowledge-types” (Plato’s eide) from text and messages written in any of the English, French or German languages.  (See this peer-reviewed paper for more information.)

The major difference between the work of cognitive scientists and linguists, and Adi, is the frame of inquiry. Adi asks: what task does language have — what problem must it solve — in order to accomplish what it does. It is the formulation and analysis of this task which is the starting point and primary focus of Adi’s investigation.

Language is often cast in terms of modern communications science while the problem language solves is a memory problem not a communications problem. This is regrettable on many levels though there is no need to dwell on that here.  What we found is that the cognates of families of human languages organize, encode and range over a kind of permanent memory space, accommodating the definite domain of human knowledge while being constrained by the more indeterminate domain of human interactions (ethnographic activities).

The advent of the phonetic alphabet gave the world its most efficacious form of interpersonal memory –a solution space. Phonetic alphabets represent a synthesis of elementary processes and conditions (i.e. laws, semantics, poiesis: to make something determinate) ranging (via Adi’s micro-syntax) over the domain of human interactions. The phonetic alphabet was the world’s first recording technology: A world-famous device for more permanently recording the dimensions of mutual understanding or human consciousness. This is done by encoding it within the long term memory or name-space of human language.

In summary, human language appears to be a recording system. It provides the means and methods to encode (and to access) knowledge from the domain of human interactions for all generations. It was because of this realization and formulation that Adi’s semantic study was successful and we immediately derived useful operational knowledge that we could and did turn into state-of-the-art technology.

It is true that the industry is hooked on analytics. Don’t you know that analysis and synthesis go hand-in-hand? Where are the developers, the entrepreneurs, the organizers? Which of these are you? Don’t you believe people need ways to synthesize perceptible knowledge salient to matters at hand?

Solving for the Meaning of x

What is “meaning” in questions such as: what is the meaning of life? It is the same as asking what is the truly real significance of life. Any answer is only theoretical.  Intuitively, any answer must be universal.  The truly real significance must, by definition, be significant for everyone.

That makes the notion appear to be either exaggerated or rather improbable.  The universality of such a theory of meaning would rest on the multitude of “real” things that are perceived by the theory as salient, pertinent properties and relations in “real life” and to humanity in general.  It would have to include everything we can imagine in experience.  How could it be possible?

This would also make it necessary to correspond with every “real” experience, in just enough (and no more) dimensions, necessary to make such experience “really” meaningful.  Intuitively, it must capture or cover any continuous or discrete distributions or extensions of “real” natural structure, elements or processes, in three dimensions of space and one dimension of real presence or immediate existence x.

It is very complex but not impossible.  On the one hand, one cannot help but wonder how to deal with such complexity.  On the other hand, we notice that very young children do it. Four-year-old children seemingly adapt to complexity with very little problem.  It is the sophistication and obfuscation that come later in life with which they have problems.  At four, children are already able to tell the difference between sensible and nonsensical distributions and extensions of reality, irrespective of whether they are of the continuous or discrete variety.

These continuous or discrete distributions and extensions bear some additional explanation, mainly due to their overarching significance in this context. First, they establish a direct correspondence with our most immediate reality. Every time we open our eyes, we see a real distribution of colored shapes.  Such a real distribution is nature’s way of communicating its messages to consciousness, via real patterns.

Second, perceived distribution patterns directly suggest the most fundamental ontological concept in theoretical physics: a field configuration, which in the simplest example of a scalar field can be likened to a field of variable light intensity.  That life is intense and that meaning is intense is not something one ought to have to prove to anyone. I will come back to intensity in another post, as I want to continue commenting on presence or real and immediate existence x. We must, in practice and in effect, solve for the real meaning of x as you see.
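For readers who want the textbook formalism (standard physics notation, nothing specific to this essay), a scalar field configuration is simply a function

    φ : ℝ³ × ℝ → ℝ,    (x, t) ↦ φ(x, t),

assigning one real value, an intensity, to each point of space x at each moment t.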

Meaning in this case, so defined, is literally the significance of truth, or more appropriately, what one interprets as significant or true within the dimensions of intense messages or information pertaining to real life as specified above. So, we must begin, undoubtedly, by defining what true is, then proceeding to the next step, we ought define the elements and structure to one’s interpretation of this truly significant nature of life. I did it a little backwards in this respect and this has always created a bit of a confusion that I did not see until recently.

One begins any such analysis by examining a subject’s real elements and structures. For the subject of truth, one also searches the literature, where it is well represented. Such a search conducted on the subject of truth brings up a broad range of ideas. To try to make a taxonomy of ideas from the varied opinion found there would turn out to be an exercise in incoherence, but it ought be acceptable to reference some theories and practices that have been adopted.

Ibn Al-Haytham, who is credited with the introduction of the Scientific Method in the 10th century A.D., believed, “Finding the truth is difficult and the road to it is rough. For the truths are plunged in obscurity” (Pines, 1986, Ibn al-Haytham’s critique of Ptolemy, in Studies in Arabic Versions of Greek Texts and in Medieval Science, Vol. II, Leiden, The Netherlands: Brill, p. 436). While truths are obscured and obfuscated, there can be no doubt that the truth does exist and that it is there to be found by seekers. I do not accept views or opinions that the average layman is too stupid or otherwise not equipped to figure it out for themselves.

The Modern Correspondence Theory of Truth.

While looking for the truth, it helps to know what shape it takes or what it may look like when one happens upon it or finds it lying around and exposed to the light. According to some, truth looks like correspondence between one thing or element and another. Scientists have long held a correspondence theory of truth. This theory of truth is at its core an ontological thesis.

It means that a belief (a sort of wispy, ephemeral, mostly psychological notion) is called true if, and only if, there exists an appropriate entity—a fact—to which it corresponds. If there is no such entity, the belief is false. So you see, as we fixate on the “truth of a belief,” a psychological notion such as a thought of something, to be sure, but some concrete thing nonetheless, we see that one thing, a belief, corresponds to another thing, another entity called a fact. The point here is that both facts and beliefs are existing, real entities; even though they may also be considered psychological or mental notions, beliefs and ideas are reality.

While beliefs are wholly or entirely psychological notions, facts are taken to be much stronger entities. Facts, as far as neoclassical correspondence theory is concerned, are concrete entities in their own right. Facts are taken to be composed of particulars and properties and relations or universals, at least. But universality has turned out to be elusive and the notion is problematic for those who hold personal or human beliefs to be at the bottom of truth.

Modern theories speak of “propositions,” which may not be any more real, after all. As Russell later says, propositions seem to be at best “curious shadowy things” in addition to facts (Russell, Bertrand, 1956, “The Philosophy of Logical Atomism,” in Logic and Knowledge, R. C. Marsh, ed., London: George Allen and Unwin, pp. 177-281, originally published in The Monist in 1918; p. 223). If only he were around now; one can only wonder how he might feel or rephrase.

In my view, the key features of the “realism” of correspondence theory are:

  1. The world presents itself as “objective fact” or as “a collection of objective facts” independently of the (subjective) ways we think about the world or describe or propose the world to be.
  2. Our (subjective) thoughts are about the objective fact of that world as represented by our claims (facts) which, presumably, ought be objective.

(Wright (1992) quoted at the SEP offers a nice statement of this way of thinking about realism.) This sort of realism together with representationalism is rampant in the high tech industry.  Nonetheless, these theses are seen to imply that our claims (facts) are objectively true or false, depending on the state of affairs actually expressing or unfolding in the world.

Regardless of one’s perspective, metaphysics or ideals, the world that we represent in our thoughts or language is a socially objective world. (This form of realism may be restricted to some social or human subject-matter, or range of discourse, but for simplicity we will talk only about its global form, as related to realism above.)

The coherence theory of truth is not much different than the correspondence theory in respect to this context. Put simply, in the coherence theory of truth: a belief is true when we are able to incorporate it in an orderly and logical manner into a larger and presumably more complex web or system (sic) of beliefs.

In the spirit of American pragmatism, almost every political administration since Reagan has used the coherence theory of truth to guide national strategy, foreign policy and international affairs. The selling of the War in Iraq to the American people is a study in the application of the coherence theory of truth to America’s state of affairs as a hegemonic leader in the world.

Many of the philosophers who argue in defense of the coherence theory of truth have understood “Ultimate Truth” as the whole of reality. To Spinoza, ultimate truth is the ultimate reality of a rationally ordered system that is God. To Hegel, truth is a rationally integrated system in which everything is contained. To the American Bush dynasty, in particular to W., truth is what the leaders of their new world order say that it is.  To Adi, containment is only one of the elementary processes at work creating, enacting (causing) and (re)enacting reality.

Modern scientists break the first rule of their own skepticism by being absolutely certain of information theory.

Let me be more specific.  Modern researchers have settled on a logical definition of truth as a semantic correspondence by adopting Shannon’s communications theory as “information” theory. Object-oriented computer programmers who use logic and mathematics understand truth as a Boolean table, and correspondence as per Alfred Tarski’s theory of semantics.

Modern computer engineers have adopted Shannon’s probabilities as “information theory” even though, on the face of it: the probabilities that form such an important part in Shannon’s theory are very different from messages; which stand for the kinds of things we most normally associate with objects. However, to his credit, the probabilities on which Shannon based his theory were all based on objective counting of relative frequencies of definite outcomes.
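To make that last point concrete, here is a minimal Python sketch of the standard calculation: the probabilities are relative frequencies obtained by counting outcomes, and Shannon’s “information” (entropy) is computed from those proportions rather than from the messages themselves. This is generic textbook code, not anything drawn from Adi’s work or our own.

    from collections import Counter
    from math import log2

    def entropy_bits(observations) -> float:
        """Shannon entropy computed from relative frequencies of observed outcomes."""
        counts = Counter(observations)
        total = sum(counts.values())
        probs = [c / total for c in counts.values()]   # objective relative frequencies
        return -sum(p * log2(p) for p in probs)        # H = -sum(p * log2 p), in bits

    print(round(entropy_bits("hello world"), 3))       # entropy of the letter distribution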

Shannon’s predecessor, Samuel Morse, based his communication theory, which enhanced the speed and efficiency with which messages could be transmitted, on studying frequently used letters. It is the communications theory I learned while serving in the United States Army. It was established by counting things — objects in the world — the numbers of different letter-type in the printer’s box.

When I entered the computer industry in 1978, I was somewhat astonished that Shannon’s theory of communications was already established in the field of information science, before word processors and “word” processing were common. I confirmed that belief by joining with information scientists for a while, as a member of the American Society for Information Science (ASIS).

While at ASIS, I found out that Shannon’s probabilities also have an origin in things much like Morse code: although they in no way ought be considered to be symbols that stand for things. Instead, Shannon’s probabilities stand for proportions of things in a given environment.

This is just as true of observationally determined quantum probabilities (from which Shannon borrowed on the advice of the polymath John Von Neumann) as it is for the frequencies of words in typical English, or the numbers of different trees in a forest, or; the countable blades of grass on my southern lawn.

Neither Morse Code, nor Shannon’s Communications theory, nor any “information” theory, directly addresses the “truth” of things in or out of an environment –save Adi’s. The closest any computer theory or program gets to “interpretation” is by interpreting the logical correspondence of statements in respect to other statements — both with respect to an undefined or unknown “meaning” — the truth or significance or unfolding of the thing in the world. It takes two uncertainties to make up one certainty according to Shannon and Von Neumann– who had two bits of uncertainty, 1 and 0, searching for, or construing, a unity.

That is not us. That is not our scientific program. Our program was not to construe a unity, or “it” from “bit.”  That is the program of the industry, because, almost like clocks, everyone in industry marches in lock step by step, tick by tock, take-stock.

Adi began with the assumption that there is an overarching unity to “it.” He then studied how a distribution of signs of “it” (i.e., symbols that make up human languages describing “it”) manages to remain true to the unity of “it,” despite constant change. Such change, it can be argued, arrives in the guise or form of uneven or unequal adoption, selection, and retention factors, as seen in the overwhelming evidence of a continuous “morphogenesis” in the formation, change and meaning of words, facts and other things over eons.

To determine how people interpret the intensity and sensibility or “information” projected with language by means of speech acts (with messages, composed of words), Adi investigated the sounds of symbols used to compose a real human language at a time when most people were inventing artificial, specialized, logical and less general languages.  Adi chose to study the unambiguous sounds of Classical Arabic that have remained unchanged for 1400 years to the present day.  That sound affects what we see is in no way incidental trivia or minutiae.

At the least, it helps truth break free of being bound to mere correspondence, a relegation reminiscent of mime or mimicry. Adi’s findings set truth free, liberate it to soar to heights more amenable, such as high fidelity, than those that burn out in the heated brilliance of spectacular failure.  In fact, in early implementations of our software we had an overt relevance measure called “fidelity” that users could set and adjust.  It speaks to the core of equilibrium that permeates this approach to conceptual modelling, analysis, searching for relevance and significance, subject and topic classification, and practical forms of text analytics in general.

Tom Adi’s semantic theory interprets the intensity, gradient trajectory and causal sensibility of an idea presumably communicated as information in the speech acts of people. This “measure” of Adi’s (or we may call it “Adi’s Measure”) can be understood as a measure of the increase in the magnitude (intensity) of a property of psychological intension (e.g., like a temperature or pressure change, or a change in concentration) observable in passing from one point or moment to another. Thus, while invisible, it can be perceived as the rate of such a change.

In my view, it is in the action of amplitude, signifying a change from conceptual, cognitive or imaginative will or possibility to implementation or actualization in terminal reality. Computationally, it is and can be used as a vector formed by the operator ∇ acting on a scalar function at a given point in a scalar field. It has been implemented in an algorithm as an operating principle, resonating, acting/reacting (revolving, evolving) as a rule, i.e. being an operator: conditioning, i.e. coordinating/re-coordinating, a larger metric system or modelling mechanism (e.g., Readware; text analytics in general).
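For readers unfamiliar with the notation, here is a generic numerical illustration of a gradient, the vector formed by ∇ acting on a scalar function at a point. This is ordinary finite-difference arithmetic, not Adi’s algorithm or the Readware implementation.

    def gradient(f, x, y, h=1e-6):
        """Approximate the gradient ∇f of a scalar field f at the point (x, y)."""
        dfdx = (f(x + h, y) - f(x - h, y)) / (2 * h)
        dfdy = (f(x, y + h) - f(x, y - h)) / (2 * h)
        return (dfdx, dfdy)

    # Example scalar field: an "intensity" that falls off with distance from the origin.
    intensity = lambda x, y: 1.0 / (1.0 + x * x + y * y)
    print(gradient(intensity, 1.0, 2.0))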

I mention this to contrast Adi’s work with that of Shannon, who, in order to frame information according to his theory of communications, did a thorough statistical analysis of ONLY the English language. After that analysis, Shannon defined information as entropy or uncertainty, on the advice of Von Neumann.  The communication of information (an outcome) involves things which Shannon called messages, and probabilities for those things. Both elements were represented abstractly by Shannon: the things as symbols (binary numbers) and probability simply as a decimal number.

So you see, Shannon’s information represents abstract values based on a statistical study of English. Adi’s information, on the other hand, represents sensible and elementary natural processes that are selected, adopted and retained for particular use within conventional language, as a mediating agency, in an interpersonal or social act of communications. Adi’s information is based upon a diachronic study of the Arabic language and the confirming study in fourteen additional languages, including modern English, German, French, Japanese and Russian, all having suffered the indisputable and undeniable effects of language change, both different from and independent of the evolution of language, or the non-evolution, as it were, of Classical Arabic.

Adi’s theory is a wholly different treatment of language, meaning and information than either Shannon or Morse attempted or carried out on their own merits. It is also a different treatment of language than information statistics gives, as it represents the generation of salient and indispensable rules in something said or projected using language. It is different from NLP or Natural Language Processing which depend (heavily) on the ideas of uncertainty and probability.

A “concept search” in Adi’s calculation and my estimation, is not a search in the traditional sense of matching keys in a long tail of key information.  A “concept search” seeks mathematical fidelity, resonance or equilibrium and symmetry (e.g., invariance under transformation) between a problem (query for information) and possible solutions (i.e., “responses” to the request for information) in a stated frame or window (context) on a given information space (document stack, database).  A search is conducted by moving the window (e.g., the periscope) over the entirety of the information space in a scanning or probing motion.  While it ought be obvious, we had to “prove” that this approach works, which we did in outstanding form, in NIST and DARPA reviewed performances.
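The following toy sketch shows only the “moving window” mechanics described above; plain word overlap stands in for the fidelity/resonance measure, which belongs to Readware and is not reproduced here.

    def window_search(document_words, query_words, window=20, step=5):
        """Slide a fixed-size window over the document and score each position."""
        query = set(query_words)
        best_score, best_offset = 0.0, 0
        for start in range(0, max(1, len(document_words) - window + 1), step):
            span = set(document_words[start:start + window])
            score = len(span & query) / max(1, len(query))   # placeholder similarity
            if score > best_score:
                best_score, best_offset = score, start
        return best_score, best_offset

    doc = "the quick brown fox jumps over the lazy dog".split()
    print(window_search(doc, ["brown", "fox"], window=4, step=1))
    # (1.0, 0) -- the first window already covers both query words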

Adi’s theory is not entirely free of uncertainty as it is, after all, only theoretical. But it brings a new functionality, a doctrinal functionality, to the pursuit of certainty by way of a corresponding reduction of doubt. That is really good news. In any case, this is a theory that deserves and warrants consideration as a modern information theory that stands in stark contrast to the accepted norm or status-quo.

Time To Reconsider and Resolve.

The political, religious and aristocratic coloring characterizing the present state of humanity subverts the validity of human cultures by monopolizing and maintaining “trust in the system”. One needn’t disrupt or subvert the present politico-economic systems in order to illuminate the space of language and correct the course and currency of the relevant orders.

What compels me and motivates the subject is the admonition: — the public interest requires doing today those things that men of intelligence and goodwill would wish, 5 or 10 years hence, had been done.  My concern is with humanity and a future filled with depression fueled by the imminent collapse of public institutions founded on misplaced “trust in the system.”

Modern politico-economic systems, or orders –as they are called, are the means by which the state and its enterprising exponents are given to the misplaced trust by maintaining the “ignorant bliss” of its culture. The way societies are controlled is with an insidious method of thought control inherited by modern heads of state and economy.  This method is so insidious that almost all of the present dilettante, public and professional members of society have been beguiled, falling victim to its perplexing effects.

One of the more serious side-effects of this mortal condition is a repressive and subversive mentality that exists at all levels in nearly all public societies, even in democracies. The foundation of this condition, now afflicting almost all of humanity, originates in the monopolization of the trust and validity of the word by the political systems and orders of history whose succession has been written into the permanent space of language.  Modern leaders are found to be conniving a “new world order” to strengthen the monopoly.

Political, academic and economic leaders along with the exponents of each of the political states and systems support each other; often while neglecting and hindering the progress, intellectual development and well-being of the people they pledge to govern, educate and support. This is done in order to increase the power and wealth of the members of the order or “insiders.”

Monopolization of the word has led to rampant skepticism and fracturing use and dependence on metaphor and politically-charged dialog. This serves only the desire for the accumulation of wealth in the hands of the few leaders of the political state and economy. Proponents of the currency of the system have been beguiled by its charms as exponents of their own wealth and reputation.

The insidious effect, hardly unintentional, is to curb, retard or otherwise control the progress of humanity which is the cause of considerable suffering and undefinable harm — disarming the people by preoccupying, burdening and binding them with their own concerns for their welfare and well-being and the tiresome rule of mediocrity. That leaders are skilled negotiators of their politico-economic domain comes sharply into focus in the manifestation of unbridled greed in their financing and banking exponents.

Monopolization of the word by institutions and the political establishment is perpetuated in the following ways:

The most vocal critics of political repressions are outsiders that are easily silenced by charges of not having the proper training or credentials. The method is to show evidence of being quoted out of context or having misgivings that spring from radical purpose, ignorance or simple misunderstanding.

The reasoning of insiders and experts has a character different than that of the critic. This reasoning is far more obtuse, often employing as evidence the thing it needs to prove; as was done in the justification to make war in Iraq. In most literary and social sciences, such as psychology, linguistics and communications, information and computer science, important aspects of language and literature — sensibility, generous tastes, wide experience — are subverted by outright speculation or speculative model-building.

Poststructuralism is self-defeating, offers no explanation as to what language really is, and its proponents seem purposefully willing to act as if they are ignorant of the fact that the relevant institutions are in need of reform.  While theories of “language” and “semantics” are widely quoted by critics or writers in the new media and tenaciously entrenched in universities, it is largely a mediocrity of unexplained standards of political devising: a local currency of wealth and reputation.

All this constitutes a tiresome and convoluted prose that exists in academic disciplines, law and literature, and in the use of technical vocabulary in private and public communities.  This makes evaluation of the discipline or community difficult, making course correction nigh impossible and rendering it unfathomable to find and establish fundamentally correct processes and criteria for their evaluation, thus:

  • civil understanding of what is happening is diverted by political aims or greed.
  • civil enforcement has become indifferent, perhaps intentionally so.
  • the very notion of a safe and secure future, without terrorism or inhumanity, is blocked and shut out by ignorance, immediate skepticism and outright disbelief.

The vernacular of the exploding political administrations, along with that of the public and popular press, quantitatively increases the numbers of words.  They press the words into temporal media, aiming to monopolize them, while apparently qualitatively emptying the words of substance and content through excessive dispersion.  The cost of collecting, storing, processing and publishing the indifferent, often empty or senseless, words is borne by society. While enterprising insiders increase their own lot in the currency of wealth and reputation, they add far too little to the qualitative and permanent space of language.

Appreciation of a culture’s art is coincidentally assigned a temporal index of its value in the local currency of the day, often subverting the permanent qualities such work ought to project.

Enough? Is it time to think about permanence and the future of humanity instead of the currency of wealth and reputation? Ought not a writer taking up the pen aim to enliven the permanent space of language? If there exist ways to liberate society from the bonds of indifference, needn’t there already be criteria and processes for resolving how to do it?

The problem is that whole populations have lost their humanity through indoctrination into a (false) sense of security, where people have been lured into misplacing their trust. Modern generations seem to have lost the capacity for sustained attention along with the ability to process exact thought for themselves.

The permanent space exclusive to humanity, and to no other living creature on earth, has been successfully hidden by covering it in a “modern mind”, something many people seem to have lost in the pages of a forgotten history while what they have actually lost is authority over their own thinking. This myth of the modern mind is so insidious that it infects the speech of everyone without exception, even though no one can say what or where that thing we call “mind” resides or what it is, exactly. Its method is perplexing, and its consequences demean humanity.

While I cannot claim to know what it is, I know that the academic disciplines of philosophy and psychiatry invoke states of consciousness that do not exist, and perhaps that is where the mind was hatched, for to “have a mind” is surely nothing but a dubious figure of speech.

That no one can say where such a thing as mind goes when a person supposed to have one dies, or even manifest or display one for all to see, should be salient to even the dimmest of human intellects.  Yet the popular concept of a thing or space called “a mind” persists. It persists because it serves those who would monopolize trust in the validity of the “word” (logos) to increase the currency of their wealth and reputation.

In order to confiscate the space of language from the people, authorities invented the fiction of the modern and popular mind so that the more permanent space of language could be obfuscated by the temporal politico-economic order of political peers and their banking exponents. If this so-called modern and popular mind exists at all, it exists only as a phantom — controlled (curbed like the dog that it is) by exponents holding fast and firm to a misplaced trust in the temporal usurpers of authority over humanity.

Do people need to consider their own humanity in order for humanity to become more humane?

Why or why not?

A remark often attributed to Albert Einstein reads: “Computers are incredibly fast, accurate, and stupid. Human beings are incredibly slow, inaccurate, and brilliant. Together they are powerful beyond imagination.”

The partnership between human beings and computers is long and enduring and there are so many examples of just how powerful the influence of computers really is. This was especially true after the debut of the personal computer, and again after the debut of the Internet that gets us connected today.

When spreadsheets came out we became better tabulators. When word-processing and spell-checkers arrived we became better writers. The widespread use of relational databases made it easier to collect, store and manage information making us more intelligent about larger collections of data.

Over the decades of computing, the costs of storing data have dropped to nearly nothing. In many cases storing data on the Internet is free. The costs of collecting data have dropped significantly. There was a time, not so long ago, when the 300 baud modem was the most common way to connect or be “on-line” with another computer. The cost of downloading 10 megabytes over long-distance telephone lines was considerable. Now people connect to the Internet over public wireless networks in most cities. It is offered free by many business establishments. People now download a thousand times the amount of data moved in 1985.
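To put the 300 baud figure in perspective, here is a rough back-of-the-envelope calculation, a minimal sketch in Python; the framing overhead (about 10 bits per byte) is my own assumption for illustration, not a figure from this post:

    # Rough illustration: how long a 10 MB download took over a 300 baud modem.
    # Assumption (mine): 8-N-1 serial framing, so roughly 10 bits per byte.
    baud = 300                      # bits per second
    bytes_per_second = baud / 10    # ~30 bytes per second after framing
    total_bytes = 10 * 1024 * 1024  # 10 megabytes
    seconds = total_bytes / bytes_per_second
    print(f"{seconds / 3600:.0f} hours (~{seconds / 86400:.1f} days)")
    # -> roughly 97 hours, or about four days of continuous connection time

At today’s public wireless speeds the same transfer takes seconds, which is the qualitative difference this paragraph points to.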

But something went wrong. The five basic means and capabilities needed for intelligence are collection, storage, retrieval, analysis and dissemination. We have systems for collection, storage, retrieval and dissemination, but the systems we do have for analysis are not generally something anyone can run on a personal computer. Even when we can run them on a desktop PC, they are complex systems that require significant expertise to make them work well in limited areas of specialization.

Analyzing the patterns and ordering the data helps us learn about the world and obtain better and more complete theories. Albert Einstein wrote: “Concepts that have proven useful in ordering things easily achieve such authority over us that we forget their earthy origins and accept them as unalterable givens. Thus they might come to be stamped as “necessities of thought,” “a priori givens,” etc. The path of scientific progress is often made impassable for a long time by such errors. Therefore it is by no means an idle game if we become practiced in analyzing long-held commonplace concepts and showing the circumstances on which their justification and usefulness depend, and how they have grown up, individually, out of the givens of experience. Thus, their excessive authority will be broken. They will be removed if they cannot be properly legitimated, corrected if their correlation with given things be far too superfluous, or replaced if a new system can be established that we prefer for any reason.”

Yet, still, here and now as we are in the twenty-first century we are lacking knowledge of those things that are given in our individual, private, and our public, social experience.  There is no model, no theory by which we can know, count and measure the givens of experience.  Einstein also wrote that: “It can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple as possible without having to surrender the adequate representation of a single datum of experience.”

So, it is a fair question to ask after the adequate representation of the givens of experience. It is reported that in a letter to his son, Einstein wrote: “Life is like riding a bicycle. To keep your balance you must keep moving.”

Isn’t it time to move on to a new way of thinking about intelligence and our means and capability to alter the structure and order of our independent, yet collective, reality? The video below defines simple, basic and abstract elements of thinking that could make it possible for computers to do more intelligent analysis in much simpler ways, and to help us become better thinkers in the process.

The Semiotics of Creativity

This post follows on my last introduction to an objective point of view and it continues exposing Adi’s semantics and the objects of the metalanguage he developed to help explain the relation between language, thought and basic or fundamental existence.

In this post I will characterize, once again, the idea of conception. Instead of using a psychological or psychoanalytic language as I have in the past, I will return to the physical theme that guided early research, after finding support for these ideas in Bohm’s book On Creativity (mentioned previously), to introduce the semiotics of creativity. In this context, semiotics is seen as a system for the interpretation of symbols, and creativity is simply the ability or power to create and to conceive (e.g., to form or devise a concept).

In what follows, I will show how the symbols of language are steeped in the creative forces of Nature so that we may extract the flavor and meaning of life.

As I have reported elsewhere in this blog,  computer scientists and linguists are fond of propositional theories that turn beliefs into statements and assertions that can be aggregated into data.  So, it has been difficult showing computer scientists, logicians and programmers, that there are other ways to process meaning.  What is called ‘semantics’ in the computer industry is the epistemological truth or correspondence between such stated beliefs or assertions.  This is all good, even rational, yet somehow ‘artificial’.  This has been demonstrated in the past and more recently with game-playing computers.

The ‘epistemological methods’  do not account for the ‘natural causes’ of human perception or the production of belief. This may be hard to grasp fully, yet, one intuitively knows that their ability to act or judge (also seen as an action) is subject to physical forces and conditions, arising from within and without, and to the passage of time.  Dr. Tom Adi discovered the essential nature of these physical powers and creative forces while looking for semantics in samples of a historically consistent language.

The semantic logic of the poietic-side (generative) use of language derives from physical processes: upon enduring (more often after appreciating) the forces and powers behind prominent events — take one that evokes a familiar, if not pleasing, sensation X — a Speaker S may find they can fashion physical gestures and symbols and actual procedures (moving towards, away / forward, backward, etc.) into mental tools. Such tools are used for projecting the idea (the configuration or arrangement of objects and procedures) that causes X, where the sophistication and use of such tools increases over time. Children often learn repetitively, by simulating or causing a physical procedure (influencing X) to reoccur.

Such ’physical procedure’ (explained more fully below) may be carried out in the imagination or for real. There is nothing mysterious about sensation X. It is defined according to practice as a palpable feeling or perception resulting from something that happens to or comes into contact with the body. It is physical nature that all living organisms have a proprioceptive sense; one that relates to the stimuli connected with the position and movement of the body. These stimuli, produced within the organism, are sensations that cause further reaction or response. Most people have witnessed a flower turn its petals to the sun.

The sensation that moves the flower is produced from within, from a sense of the extent, direction and force of impinging stimuli; i.e., for the flower, the ‘meaning’ is in the orientation of the flower in respect to the natural forces moving it to take the ‘right’ position. Moral and other distinctions holding mind and body apart are unnecessary to one’s proprioceptive sense of the position, location and relevant extent of objects and forces in one’s immediate presence.

While all living beings have a proprioceptive sense of being at their discretion (to avoid running into things, face in the right direction, or simply satisfy their role, etc.), human beings also have limited dominion over the creative forces of nature to go along with their animal instincts. It is human nature to uncover or discover the physical nature that causes one’s experience. One can use or abuse these powers and act in many ways, though mainly one acts to change the future, and one may act as if the future is irrelevant. The liberty and power to judge plays a major role.

As everyone does or should know very well, we cannot pass physical nature from ourselves to others, we can only project our own sensations as ‘sense-data’ — the idea that something (in the surrounding environment) affects us or causes X. We expect others can “feel” the same way or “see” or “sense” the “controlling presences” (often, even without quite knowing them ourselves).

The meaning in this sense-data is gathered up in the symbols we use to project the idea that causes sensation X. Others have to ‘get’ or apprehend the idea that produces sensation X.  To ‘have meaning’ is to be capable of causing sensation X to arise. Any useful sign must indicate a physical procedure: the forces and conditions that characterize the extent (limits and relevance) of objects in respect to a perceptible position or location and relevant extent of sensation X that a Speaker S desires to be produced in a Listener L.

Plainly, what is called the idea (here) is the position and power — of the particular configuration of being, forces and conditions — that produces sensation X and causes the anticipated reaction in an individual. The problem today is that the meaning of ideas — the bearing of such forces and conditions — can be confusing, tacit, vague or ambiguous; hidden behind a plethora of speculative, metaphorical or subjective references projected using ordinary speech-acts A.

Now let us turn our sights onto that ‘physical procedure’ and characterize the forces and conditions involved in the creation of meaning and the production of significance. A focal interpretation of such forces of production P and conditions of existence R is at-hand.

The formulation that follows derives from Adi’s theory of semantics, where the abstract objects of Adi’s metalanguage objectify natural operations, forces and conditions. These sets of objects, defined below in mathematical terms, construct a conceptual polar coordinate system given folks share a proprioceptive sense of being (a body in motion, oriented in space and time).

While a skeptic might accept a claim that humans are specks on a rock hurtling through space, being a body in motion in space and time is only slightly more abstract, and ‘being human’ claims little more. It claims the need for knowing one’s position or location, power and relevant extent, in respect to other states and objects in the same dimension. Adi’s arrangement interprets the limits to the natural system of objects, forces and states present to interpersonal experience from a proprioceptive point or value.

Computationally, any sequence, function, or sum of a series (such as a series of sounds or phonemes, i.e., signs) can be determined to be progressively approaching or receding from this point or value; i.e., its bearings can be determined. If meaning is determined to be the property of something existing, said or done to impact one’s sensations — as it appears to be — this functionality appears critical to predicting significance or pertinence and relevance.
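As a rough illustration of what it means to determine such bearings computationally, here is a minimal sketch (my own illustration, not the Readware algorithm): given a reference value and a sequence of values, it reports whether the sequence is progressively approaching or receding from that reference.

    # Minimal sketch (illustrative only): is a sequence approaching or receding
    # from a reference point? Compare successive distances to the reference.
    def bearing(sequence, reference):
        distances = [abs(x - reference) for x in sequence]
        trend = sum(b - a for a, b in zip(distances, distances[1:]))
        if trend < 0:
            return "approaching"
        if trend > 0:
            return "receding"
        return "stationary"

    print(bearing([9.0, 6.5, 4.0, 2.5], reference=1.0))  # approaching
    print(bearing([2.0, 3.5, 5.0, 8.0], reference=1.0))  # receding

Summing the successive changes in distance is only one crude way to estimate a trend; the point is merely that a bearing toward or away from a fixed reference is something a program can compute.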

It has been difficult for most people to understand how the positions of arbitrary objects and vague forces and conditions can be characterized or calculated from language. Many linguists quickly dismiss the whole idea as radical, incomprehensible or impossible, out of hand. It does not make them ‘right’.

Language is widely considered to be like a map of the territory of reality.  People use maps to get and set their bearings. People use language to navigate the world of other people and their opinions, along with other objects, things and feelings. Now that you have been introduced to this point of view, I urge the reader to think critically about what follows in connection with the examples that are included at the end of this characterization of Adi’s semantic objects.

While these forces and conditions are taken to be axiomatic, the implications can be barely perceptible. So I will first characterize the sets of (real) forces and conditions emanating from or impinging on the senses. Then I will present Adi’s semantic matrix where, essentially, thought and action, theory and practice, meet. The intersections of the matrix are overlaid with examples of legitimate workaday representations. Here first are the objects and sets comprising Adi’s semantic metalanguage, focused on the semantics of creativity (the ability to create):

Based upon semantic findings from a study of Classical Arabic, we assume there exists a changeless and universal content to life, a set of creative forces P, necessary to the body of conception, order and change in life:

P = { p(i) | i = 1, 2, 3 } = {assignment, manifestation, containment}.

Supervening on these forces are a symmetrical set G of psychosomatic states: G={self,others}, symbolizing unity and plurality, and; a symmetrical set T of biophysical states: T={open,closed}, symbolizing propagation and restriction. When the objects of these sets are crossed, they reveal a fixed (and rich) set of conditions R that marshal the forces P into elementary (and evolutionary) processes or procedures:

R = T x G = { r(j) | j = 1 to 4 } = {(closed, self), (open, self), (closed, others), (open, others)}.
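In code, the construction above is just a Cartesian product. The sketch below is my own illustration, using the set names exactly as this post defines them:

    # The abstract sets of Adi's metalanguage, as defined above (illustrative sketch).
    from itertools import product

    P = ("assignment", "manifestation", "containment")   # creative forces
    G = ("self", "others")                                # psychosomatic states: unity, plurality
    T = ("open", "closed")                                # biophysical states: propagation, restriction

    # Conditions of existence R = T x G: four boundary/engagement pairs.
    R = tuple(product(T, G))
    for condition in R:
        print(condition)
    # ('open', 'self'), ('open', 'others'), ('closed', 'self'), ('closed', 'others')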

The objects organized by ‘self’ and ‘others’ are seen as categorical beings objectifying engagement conditions present at all human and social events (wherever these entities are in relevant configurations in the same dimension). The states ‘open’ and ‘closed’ also organize categorical beings. Instantiations of these states objectify boundary conditions. Some may associate these categorical beings with Whitehead’s “controlling presences”. A natural symmetry holds between these objects and conditions R and objects organized by them. Symmetry is found at the root of life itself.

The former conditions objectify natural bonds formed from sensations of attraction and engagement.  This asserts nothing more than that the bare abstractions ‘self’ and ‘others’ stripped of any other associations yet afford a (concrete) sense of attraction and engagement (with unity and plurality) necessary to the formation of bonds.  The latter conditions afford a sense of the scope and constraint of present boundaries (e.g., the scope of space, distance and the constraint of time).

In essence, there are two sides to each state of being influencing the bonds and organizing the bodies in motion or flux and present at any event. The intersection of the conditions R with the set of forces P objectifies the valence of binding, unifying and organizing significant objects, forces and conditions into procedural states of being.

The selection and formulation of physical procedures — composed in respect to R of P — determines the type of polarity in the relationships R that ensue; whether applying or acting on the creative force of nature as implied by words and language. Adi derived four perceptible types of orientations from the crossing of boundary and engagement conditions. The valence of relationships R affords a sense of choice or bias; giving direction to, or unfolding: inward, outward, or being jointly or disjointly engaged.

The elementary processes, ‘Assignment’, ‘Manifestation’, and ‘Containment’, comprising the set of physical forces P within our dominion, are easily recognized as the creative forces of change when transformed into physical procedures and participatory acts of assigning, manifesting and containing; a capability to change the future in accordance with the conditions of existence R, described above.
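Continuing the sketch above, crossing the three forces with the four conditions gives the twelve cells of the semantic matrix referred to below; again this is only an illustration of the structure, not Readware’s internal representation:

    # Twelve intersections of forces P with conditions R: the cells of the
    # semantic matrix of creative praxis (structure only; examples omitted).
    from itertools import product

    P = ("assignment", "manifestation", "containment")
    R = tuple(product(("open", "closed"), ("self", "others")))

    for force, (t, g) in product(P, R):
        print(f"{force:<13} under ({t}, {g})")
    # 3 forces x 4 conditions = 12 procedural states of being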

Each speaker S marshals these forces and conditions in order to educe (to develop or bring out the latency of X, i.e., the potential of) the idea. The syntactic arrangement of consonant sounds encodes symbolic processes that project the physical processes bearing on X. It is here that there is harmonious agreement (semantics) or fidelity (or not).

Consequent to this view, a speaker S should (naturally) choose words and use language (speech-acts) A in such a way as to designate those physical forces P and (identify) the objects, states and relationships R that bear upon (or will have relevance and bearing to) Speaker S or Listener L or both S and L –from an objective point of view that S and L can and do share.  This prediction was tested by constructing a conceptual search engine (commercialized as Readware) that transforms arbitrary sequences of text and inquiries into values according to this theory. The search engine showed outstanding performance in tests that measure relevance, recall and precision in text retrieval programs. It also passed reading aptitude tests.

The results show that we can indeed construct a general point of view that thereafter predicts relevance and significance in matters presented to that objective viewpoint, one that can be readily implemented in computer logic.  A proprioceptive point of view proves to be an objective point of view; a view that is psychologically sensible to both S and L and that includes a sense of the internal unity of self-awareness and the external plurality of others, as well as a sense of the states of propagation and restriction, as categorical beings in and of themselves.  See the table below for examples.

The logic of the esthesic-side (aesthetic) understanding of language is explained as follows: in order to educe sensation X Listener (/reader) L filters the idea from within the projected sense-data –while decoding speech-act A.  If the idea is apprehended, its meaning is represented by the bearing of the forces of P and R to X; in which case we say that the meaning is induced in L, i.e., it causes the intended sensation X to actually or figuratively occur to L (i.e., appear to represent or symbolize a relevant form of physical power or influence). In such a case the idea and its meaning can/will cause sensation X to occur.  See the examples in the table below:

The Semantic Matrix of Creative Praxis

(the idea of conception)

Finding a Shared Point of View

The comments to my last post have prompted this one.  I have often been confronted with disagreement; much more so than others. Being outspoken on the subjects of meaning and relevance accounts for some of the excess.  It seems as if most disagreement is rooted in a confused sense of relevance and meaning in the world.

I believe most people would agree that social and political problems hinge on a fundamental difference in points of view. It is the same as saying that everyone sees or perceives things differently. It is the exemplification of the screen of relativity; that everything in the world is relative. If everything in the world is relative in this sense of ‘being’ relative, it is relative first and foremost to the point of view of the observer. This raises the question: Is it even possible that there is a shared and (relatively) objective point of view about existence?

If one cares to look into the literature of relativity and objectivity, it is fair to say that there is substantial confusion among academics.  The first cause of confusion, in my view and experience, is that people seem to forget they are subject to the physical and biological nature of being here and that confuses their thoughts and actions. Related to that is the way people lose touch with the nature of being in existence. It is pretty common to say that someone has lost their perspective. It could also be that they lose their point of view.  It takes critical thinking to find it again.

An important first step of critical thinking is to establish a point of view, for example.  If we are talking about meaning that people share, we also need a shared point of view. I will call this shared point of view “a proprioceptive insight to being in existence” and remark that it is an objective viewpoint that applies equally to everyone.

A proprioceptive insight relates to the stimuli connected with the position and movement of the body that are produced and perceived within an organism. We are concerned here with individuals, and the production or creation of meaning within human beings.  I will point out that when we include the “social context” in connection with such production, in addition to meaning connected with or reacting upon the position and movement of the human body, one finds the symbols and objects of language, and the leaders, cultures and institutions of human societies, numbering among the stimuli.

Proprioception is seen by some scientists, psychologists mainly, as one of the common senses.  In my view, all people develop their own proprioceptive insight that nonetheless centers on their own existence.  It is due to this fact, that everyone, in essence, everyone in a body, already shares the same point of view towards external objects and sense-data.  Some people are more aware of this than others.

Dancers, for example, exemplify a highly developed, if not exceptional and professional, insight into the proprioceptive sense of their own bodies and the relationships of the movement and positions of their limbs –in a formal sense– according to the design of movement. They possess a keen ability to recognize, or they acquire sensory knowledge of, the position, location, orientation and movement of the body and its parts. In order to create her work of art, using all her physical capability and know-how, the professional dancer strives to interpret the movement designed by the choreographer with the finest technical precision and detail and most obvious fidelity.

Few people will have the proprioception of a dancer and still fewer know or admit to such an objective view of the world –even while it is an essential element in the formulation of one’s knowledge; one that can be uncovered with critical thinking. It may be due to this sort of ‘forgetfulness’ and the adoption of contrary viewpoints that people lose sight of what is relevant, significant and decisive. So this post will examine the unfolding of meaning from an objective or proprioceptive point of view. Along the way, hopefully, we will see how a person can be easily misled. Note that I am composing this from experience that I will mention at the end.

To understand why this forgetfulness affects society, let’s start with what is learned in early childhood.

During the early months and years of their lives, children begin their learning by occupying themselves with apprehending the extent of the sensations to all parts of their bodies. In each child’s life, eventually one’s insight or knowledge of proprioception is extended to sensing external or projected objects and happenings (simple occurrences, events or beings in and of themselves) in relation to the position and location of the body or its parts. “Ultimately all observation, scientific or popular, consists in the determination of the spatial relation of the bodily organs of the observer to the location of ‘projected’ sense-data.” — Alfred North Whitehead in Symbolism Its Meaning and Effect (Barbour-Page Lectures — retrievable here. Note that: All quotations in this post are taken from this source.)

In this age of modernity, many people seem to wallow in immediate sense-data and their own inhibitions and diversions while they act as if the future is irrelevant and ignore salient facts that may determine their own fate. These people will often treat the sheer conditions of existence as accidental, or as something indiscernible, ineffable and unimportant; when exactly the opposite is true. As Alfred North Whitehead tells it: “The bonds of causal efficacy arise from without us. They disclose the character of the world from which we issue, an inescapable condition round which we shape ourselves. The bonds of presentational immediacy arise from within us, and are subject to intensifications and inhibitions and diversions according as we accept their challenge or reject it.”

Whitehead goes on in his lecture to talk about and define causal efficacy and presentational immediacy at some length; I would urge my readers to take it in using the link above, it is not so hard to follow. Here and now, I want to focus on the conflict that arises from these bonds, the effects it has on the individual and society, and the forgetfulness that increases doubt and uncertainty. It is the conflict here that is also at the root of the failure to resolve substantial public and political issues; such as can be seen in the problem of terrorism.

Politicians have created this problem of terrorism that binds us to activities that do too little to eradicate it. The very notion of terrorism spawns its own form of presentational immediacy that causes the senses to be hijacked –in that one’s own attention is steered away from the possibility of resolution– being faced with a vague yet terrifying unknown clouds the senses with emotional anger or fear. This is the case in America, where many Americans gladly accept the erosion of civil liberties, once guaranteed by its Constitution, as necessary to defend against the inevitability of a terrorist attack.

There is as little resolve to defend against the erosion of civil liberties as there is to deprive terrorism of its existence in this world. As Whitehead defines it: “Irresolution in action arises from consciousness of a somewhat distant relevant future, combined with inability to evaluate its precise type. If we were not conscious of relevance, why is there irresolution in a sudden crisis?” For too many people, superstition, uncertainty or doubt is indubitably and simultaneously a part of the definiteness of the present; it affects people: making them unwitting pawns of the would-be “controlling presences” that lurk behind the projected sense-data –the presentation of terrorism in the popular press and in politically-charged rhetoric, for example.  But let’s not get hung up on politics.

Whitehead wrote: “The reason why the projected sense-data are in general used as symbol, is that they are handy, definite, and manageable. We can see, or not see, as we like: we can hear, or not hear. There are limits to this handiness of the sense-data: but they are emphatically the manageable elements in our perceptions of the world.”

Note that much of the projected sense-data are symbols in some deeper sense, e.g., as politicians, religious leaders, experts, can be used as symbols in and of themselves; as well as the propositions, facts and information we get from or about experts, politicians and religious zealots in the news and on the Internet, for example. Most of what we take as symbol is generated from the immediate sense-data –such as one’s own symbolic conception of terrorism– and it takes its place among the manageable elements of one’s experience. We can surely hear and see as we like according to choice and free speech, but as Whitehead warned there are limits to this handiness.

When these symbols come to represent the inevitability of the way things are –to be taken as the controlling presences of now and the future– they have been taken too far. Referring to the manageable character and definiteness to the presentational immediacy of projected-sense data used as symbol, Whitehead tells us that: “The sense of controlling presences has the contrary character: it is unmanageable, vague, and ill-defined. But for all their vagueness, for all their lack of definition, these controlling presences, these sources of power, these things with an inner life, with their own richness of content, these beings, with the destiny of the world hidden in their natures, are what we want to know about.”

Some people tend to take, or rather mistake, trending topics, popular knowledge and celebrity as what they want to know about –it is because of this feeling, perhaps, that celebrity is important to them. The trouble is that, for some, the mistaken person or object of desire joins the controlling presences in their lives.  Rap artists and comedians become role models. Dissidents and zealots command the press and the public attention. Neither politicians nor athletes can escape their celebrity.

Yet: “As we cross a road busy with traffic, we see the colour of the cars, their shapes, the gay colours of their occupants; but at the moment we are absorbed in using this immediate show as a symbol for the forces determining the immediate future.” Neither politicians, artists nor experts get involved in this immediate task. How then can they rise to the occasion of being among the controlling presences in one’s own life? Whitehead tells us by explaining that: “We enjoy the symbol, but we also penetrate to the meaning. The symbols do not create their meaning: the meaning, in the form of actual effective beings reacting upon us, exists for us in its own right. But the symbols discover this meaning for us.”

Confronted with a need to cross a highway, the symbolic definition of each element of the projected sense-data is not as weighty as the relevance of the immediate future and the accord between the immediate goal and the natural forces –those regarded as controlling or regulating the phenomena. The need, the lack of a traffic signal, the sequence of moving vehicles, their speed, and the makes and models of the cars, along with their descriptions and occupants, uncovers or shows much of that meaning in the weight of the relationships symbolizing the present configuration of ‘projected sense-data’.

The projected sense-data co-mingles with the objects of presentational immediacy and one’s own sense of the familiar. Emotional desire moves us to immerse ourselves in determining the relevance of the immediate future to the wholeness of the present and the efficacy of our intention. If it were otherwise, if we were delving into the accurate definitions because the projected sense-data were unfamiliar, as is the norm with computers, the relevance of the immediate future would necessarily be inhibited –perhaps with devastating consequences.

In human beings, unlike machines, all possibilities are potentials as we act from the proprioceptive sense of our own being in relation to this confrontation with reality and the forces determining whether we make the passage safely or not. The emotions that move us, these forces and future possibilities, coalesce into a unified state of relevance at the precise moment of resolution.  This unfolding of meaning –the apprehension or grasp of it, in and of itself– provides all that is essential.

Now I don’t really expect many readers to get my meaning and all of a sudden become capable of perceiving the unfolding of meaning that otherwise, for some people, all happens in a flash. Those people who have experienced such occasions can recall and think about the salient features. Whitehead wrote that: “Certain emotions, such as anger and terror, are apt to inhibit the apprehension of sense-data; but they wholly depend upon a vivid apprehension of the relevance of immediate past to the present, and of the present to the future. Again an inhibition of familiar sense-data provokes the terrifying sense of vague presences, effective for good or evil over our fate.”

In the case of crossing the busy highway: the cars, the road, the state, the occupants, past experience, the present, everything –all the ‘projected sense-data’ — is condensed into points or bodies in a space and time that is (all-at-once) intrinsically connected to our own proprioceptive being and location. What has happened, what is happening and what will happen next are each relevant and each commands its own body of being in the projected sense-data. “Our relationships to these bodies are precisely our reactions to them. The projection of our sensations is nothing else than the illustration of the world in partial accordance with the systematic scheme, in space and in time, to which these reactions conform.”

I hope my readers will bear in mind that the projection of our sensations is both real and imaginary, and they too take refuge in, and stay true to, the systematic scheme of existence, in space and in time, that is the changeless and unbounded wholeness and efficacy to creation.

__________________________________________________________
Together with Tom Adi, I went looking for “meaning” beginning in the early 1980s, or rather, we went looking for what constitutes meaning. I believe we not only found what constitutes relevance and meaning, or semantics, in natural language; Tom also found natural laws to the wholeness and accord that exists between causal efficacy and presentational immediacy. In my view, Adi’s elementary processes are the same entity as Whitehead’s controlling presences.

Beginning with the assumption that all bodies (abstract as well as concrete) are in motion according to physical laws, and using a polar coordinate system for making measurements of orientation, distance and length from a center point, we tested Adi’s semantic theory and procedures thoroughly. First, an algebraic language was created using Adi’s elementary processes and conditions of existence as its abstract/mental objects.
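For readers unfamiliar with the apparatus, here is what a measurement in a polar coordinate system looks like; this is generic geometry for illustration, not the actual Readware code: every point is reduced to a distance and an orientation relative to a chosen center.

    import math

    # Generic polar measurement from a chosen center point (illustration only).
    def polar(point, center=(0.0, 0.0)):
        dx, dy = point[0] - center[0], point[1] - center[1]
        distance = math.hypot(dx, dy)                    # length from the center
        orientation = math.degrees(math.atan2(dy, dx))   # direction from the center
        return distance, orientation

    print(polar((3.0, 4.0)))              # (5.0, 53.13...) measured from the origin
    print(polar((3.0, 4.0), (3.0, 0.0)))  # (4.0, 90.0) measured from a shifted center

The sketch only shows that orientation, distance and length from a center point are ordinary, computable quantities; how Adi’s objects are mapped onto them is described abstractly in the paragraphs above and below.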

These processes, called Assignment, Manifestation and Containment, and their conditions of existence are most recently explained here.  We also developed algorithmic methods for reasoning about this “relation of meaning” between the words or symbols of text expressions. During this exercise we learned more about these elementary processes and the conditions of existence.  It is fair to say we are still learning today as we have only broken the surface.

We transformed the language, mathematical apparatus and methods into computer software (Readware) to test the reasoning and new theories of semantics, language learning and cognition. We submitted the software to repetitive, formal and informal capability testing in text analysis, classification and text retrieval use cases –where relevance, recall and precision are measured. Performance testing was conducted from 1987 until 2007, in which it passed all tests with exceptional margins.
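For readers unfamiliar with these measures, here is the standard way precision and recall are computed in text retrieval; these are the generic definitions, not the specific test protocol used for Readware:

    # Standard text-retrieval measures (generic; not the Readware test harness).
    def precision_recall(retrieved, relevant):
        retrieved, relevant = set(retrieved), set(relevant)
        true_positives = len(retrieved & relevant)
        precision = true_positives / len(retrieved) if retrieved else 0.0
        recall = true_positives / len(relevant) if relevant else 0.0
        return precision, recall

    # Example: a query retrieves documents 1-4, while documents 2, 4 and 5 are relevant.
    print(precision_recall([1, 2, 3, 4], [2, 4, 5]))  # (0.5, 0.666...)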

Some of the work has been peer reviewed and published in scientific journals and books; this report is in the public domain.  Yet, it takes a proprioceptive sensibility to make use of the functions. It also takes critical thinking to understand this work, and to understand the sense of meaning and the conditions of the existence from which we all issue.

Consider the nature of conceptual vs. data processing.

Data are elements of conception. A conceptual element of human insight or imagination is not data. A conceptual element, or concept, is symbolic of human insight and fancy; i.e., it is a function of creative thought –of engaging the imagination, the intellect and the creative force of existence in symbolic and physical processes of creation and its renewal.

A creative process is thereby directive and a concept is no arbitrary symbol. A concept represents the unification of symbolic processes of conception: the interplay and engagement of the intellect and imagination and psychological and physiological processes in the creative processes and conditions of conception; in the activity of perceiving and experiencing creation.

A concept can thus be seen as a part of the larger totality of Creation. Such a totality engages not only the intellect and imagination but also the harmonious order, essence and totalities, or coherent wholeness, of subsequently experienced (and socially distributed) psychological, physiological and creative processes and conditions.

As I showed in my last post: The essence of the order, structure and the coherent wholeness of the creative processes and conditions are condensed and objectified by way of shared conceptual insight. Such objects are often perceived, copied, reflected upon and instituted as the names of things, and used as words and expressions in the language.

Consider these long-lived conceptual institutions: Beauty. Justice. Liberty.

In the foreword to David Bohm’s book On Creativity, editor Lee Nichol writes:

We have found, developed and formally tested that language and the objective terms in which conceptual processes can be computed, understood and measured. While the independence assumption has led AI into torpor, a new interdependence assumption coupled with conceptual processing and critical thinking can lead to a new era of creative computing.

Creativity, not intelligence, is the hallmark of humanity.  However, the prevailing view is that the concepts and insight to creativity cannot be computationally defined and that creative thought is vaporous and empty of any substance. The power of thought or of concepts to engender creative actions in human beings remains shrouded in religious or mystical superstition.

We need assistance and support though, to change that view and help to usher in a new era of intelligent progress and creative achievement.