By the term ecology, I mean the study of the interaction of people with their environment: here, the environment of human awareness and knowledge.

I think most people feel that they are aware of their surroundings. Psychologists say that because you feel aware, you assume everyone else is more or less as aware as you. Unfortunately, it does not turn out that way. There is a host of ill effects, because there seems to be very little in our awareness that even a few people can agree upon. The lack of shared knowledge, the lack of shared intelligence, has an effect on the different levels, types and kinds of awareness in societies of people everywhere. If everyone were (explicitly) conscious of one and the same thing, then we could say that everyone is conscious of such and such. But we cannot make such a claim in this day and age: a day and age of modern communications, computers and “open information,” mind you.

Nonetheless, human beings are modelers: we construct models of our world or environment that suit or satisfy us, either by explaining or by predicting the circumstances in which we find ourselves. I take it for granted that there are both good and bad models. I want to introduce you to a good model of the organism of intelligence (mentioned in my last post) that each of us uses, even though most of us are not very conscious of it. I expect that anyone can tell a good model from a bad one. A good model is one that stirs or moves your awareness. It affects you in such a way that you are disposed, obliged even, to pay closer attention; it obliges one to think more exactly about someone or something; it warrants becoming more aware of it, conscious of it, learning it, and ultimately using it for enlightenment and for gain.

A model M is equivalent to a knowledge K: M = K, because we employ models in making predictions about certain attributes just as we employ our knowledge. The term “attribute” is used here as a noun, in the ordinary way, to signify a quality or feature regarded as a characteristic or inherent part of someone or something. Every environment has attributes that are characteristic of it.

For example, the ecosystem is an environment that has the attributes of air, water, earth and fire. The goal is to find just those attributes (and no more) that are enough to quantify the valuable or significant changes, the ones that make a substantial difference or affect our surroundings in some way. That is, to generate or induce knowledge and awareness we must perform a transformation: we must transform (what is recognized to be) an attribute of the environment into a personal or individual affect. That may sound strange, so let me explain it a little further.

In the case of the ecosystem, the attributes air, water, earth and fire can affect us, and one might readily imagine how the presence or absence of water or air can induce different states of mind. In any case, they may be the cause of some serious condition that could affect any one of us; imagine the situation where there is no air to breathe. This quality makes air a good attribute of this environment (the ecosystem), because we can readily imagine and predict how we could be affected given some arbitrary change in the situation. But are these attributes sufficient, and all that is necessary, to predict all possible changes in the environment that might affect us?

Imagine now how difficult it must be for scientists, for anyone, to build a model of the environment of human knowledge, awareness and consciousness. In some circles of research, that is what AI and AGI engineers have been trying to do. It is true that engineers and programmers have not been up to this daunting task. Yet that does not diminish the fact that it is what needs to be done in order to produce an AGI; after all, we need to be able to model our own situational awareness. By doing so, we may become better equipped to anticipate and reduce the effects of unwanted and harmful eventualities of which many people are all too aware.

For example, economists create models of economies with certain attributes and premises. For better or worse, this is done in order to deduce conclusions about possible eventualities. Economic models are useful as tools for judging which alternative outcomes seem reasonable or likely. In such cases the model is being used for prediction. Thus the model is part of some knowledge about the environment.

The model embodies the knowledge because it is itself a capacity for prediction. Thus a model M can be considered fully equivalent to a knowledge, and we can treat the two terms as synonymous here. More specifically, a qualitatively relative definition of knowledge is warranted: “A knowledge K is a capacity to predict the value which an attribute of the environment will take under specified conditions of context.”
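To make that definition concrete, here is a minimal sketch, entirely my own illustration and no part of the formal theory, of a knowledge K as a capacity to predict attribute values under conditions of context; the attribute and context names are hypothetical:

```python
# A knowledge K sketched as a predictor: given an attribute of the
# environment and specified conditions of context, K yields the value
# the attribute is predicted to take.

def make_knowledge(observations):
    """Build a knowledge K from (attribute, context) -> value observations."""
    def K(attribute, context):
        # Predict the value the attribute will take under this context.
        return observations.get((attribute, context), "unknown")
    return K

# Hypothetical observations about the ecosystem environment.
K = make_knowledge({
    ("air", "sea level"): "breathable",
    ("air", "deep water"): "absent",
    ("water", "desert noon"): "scarce",
})

print(K("air", "sea level"))   # -> breathable
print(K("water", "mountain"))  # -> unknown
```

On this sketch, the model and the knowledge are literally the same object: each is nothing more than the capacity to answer such prediction queries.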

Now let’s talk about people (sapients) and frame a model of their environment, that is, the environment of their awareness, of which they are aware (sapient). We can assume that everyone’s awareness changes in regular and predictable ways, and that each person has some knowledge that allows them to predict the value of attributes in their own awareness. Here, as you see, an awareness is equivalent to the environment in which we abide. We are intuitively surrounded by, or abiding in, the environs of sapience.

Before I begin the example, let me reveal that I have a knowledge of the attributes of a denotative awareness that includes and subsumes all possible connotative environments. There are eleven attributes to this environment of awareness, but I will introduce only two of them, which we call “Self” and “Others,” in this example. Like all the attributes of this rather explicit awareness, these two attributes, Self and Others, correspond with real entities and their activities, self and others, in the world of ordinary affairs and situations. I am using only these two in order to keep the explanation simple and real, and because that is all that is necessary to demonstrate the meaning of intelligence, which I will now define as: the organism or mechanism of the attributes of the environment to affect awareness.

So, to be clear, I am not going to give the complete specification of that organism or mechanism here, but I will show you how two of the attributes of the environment I have clearly in mind “affect” both my predictions and yours. Incidentally, let me also define a “mind” as a (psychical) state space, i.e. an abstract and mental space. So we begin with an assertion: besides my own self, there are others in my environment; the environment in which I exist and of which I am aware.

I embody the organism we call intelligence (as do you), and I have a knowledge K to predict that the value of a single measurement of the attribute Others, equivalent to and connotative of “wife,” will be Gloria, in case I am asked about it. This prediction is observed to be a transformation of the state space of the attribute Others, just like the state space of the attribute Self. Under the specified conditions and in the context of my own environment, the state space is transformed by my own knowledge K to be equivalent to my name = Ken. Under the same specified conditions of context, the connotative context “my wife” is connected to the denotative context (observable yet normally left tacit or unemphasized) by taking successive measurements (e.g. making interpretations) of these explicitly shared attributes of the environment of my awareness. Once taken in, that much ought to become clear and self-evident; that is, I take it as axiomatic.
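The Ken and Gloria example can be rendered the same way. This is only my illustrative sketch: the state space and its keys are taken from the example above, and nothing beyond it is assumed.

```python
# The state space of the environment of my awareness, keyed by attribute.
# A measurement of an attribute under a connotative context transforms
# the state space into a definite value.
state_space = {
    "Self":   {"name": "Ken"},     # my name = Ken
    "Others": {"wife": "Gloria"},  # the connotative context "my wife"
}

def K(attribute, context):
    """My knowledge K: predict the value a single measurement of an
    attribute will take under specified conditions of context."""
    return state_space[attribute].get(context)

print(K("Others", "wife"))  # -> Gloria
print(K("Self", "name"))    # -> Ken
```

Your knowledge T would have the same shape but a different state space, which is exactly the sense in which our knowledges differ while the attributes Self and Others are shared.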

I can also predict that additional measurements of the attributes Self and Others will yield different values, equivalent to the connotative appearance of several other self-organized entities, things or activities that become salient to my own environment from time to time. In this way (and only in this way) my knowledge K is different from your knowledge T. It is peculiar to my thoughts and perceptions in the context of the environment situated where I live, i.e., to my awareness of that environment. You will have a similar situation: your own “context” (the particular circumstances that form the setting for an event, statement, or idea, and in terms of which it can be fully understood and assessed) of the environment of your own awareness. We do not know each other’s knowledge or awareness. We (may only implicitly) know and share the explicit attributes of such a (sapient) awareness.

That is to say, I live in the same environment (of general awareness and sapience) as you. And I have a knowledge K of Self and Others as attributes of this environment, the relevance or significance of which you may only now be becoming aware. Both Self and Others are clearly attributes in our shared awareness. In fact, they are attributes of a universal environment for Homo sapiens. Remember that a knowledge T, K, …, or M is a capacity to predict the value which an attribute of the environment will take under specified conditions of context. Everyone has their own name, knowledge (whether implicit or explicit) and their own conditions of context. This is the private knowledge held inside them, and perhaps also by relatives and friends.

Now we are able to make some observations and see some of the implications that flow from what has been stated above. We can intuit, for instance, that a wholesome knowledge K is evidenced whenever an organism produces information or reduces a priori uncertainty about its environment. I realize this is incomplete, though it demonstrates that (connotative and social) knowledge, text and all computer data are synthesized from (the transformation of) valid attributes A, which cannot be construed as being contained in or patterned by (computer) data or by modern language.

Any invariant or regular and unitary attribute A (whereby individuals are distinguished) ought to be seen as a continuity, to be treated as valid and used as a handy and trustworthy rubric for making or producing transformations (in the state space of a mind) applied in a context of the environment. Each measurement produces a single valuation, which could be the same or different at any moment and from place to place, only appearing to be impossibly chaotic or complex. For those who understand such things, such an attribute may be considered a correspondence. This correspondence may be formalized as a functional mapping of the form A: Θ → Θ, where Θ is the (denotative) state space of the environment mapped to the (connotative) state space of the environment. We found more than a dozen types or configurations of functional mappings that are applied in variant connotative contexts.
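The correspondence can be sketched as such a functional mapping. The denotative and connotative states below are hypothetical placeholders of my own choosing, used only to show the form A: Θ → Θ:

```python
# An attribute A formalized as a functional mapping A: Θ -> Θ, taking a
# denotative state of the environment to a connotative state.

def make_attribute(mapping):
    """Build an attribute as a denotative -> connotative correspondence."""
    def A(denotative_state):
        # States without a connotative valuation are left unchanged.
        return mapping.get(denotative_state, denotative_state)
    return A

# One hypothetical configuration: measurements of the attribute take
# different connotative valuations in different denotative states.
A = make_attribute({
    "water present": "thirst quenched",
    "water absent":  "thirst",
})

print(A("water present"))  # -> thirst quenched
print(A("water absent"))   # -> thirst
```

Each call is a single measurement producing a single valuation; a different configuration of the mapping gives a different type of functional mapping in the sense used above.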

So, to conclude: an environment of human awareness can be understood simply as the denotative and connotative surroundings and conditions in which the organism of the attributes (and the capacity of independent awareness with a knowledge) operates, is asserted and is applied.

The good news is that now that we know it is the organism of the attributes of the environment of awareness, of consciousness, that is both explicit and universal (not connotative belief, knowledge, perception or conception, which are all relatively defined), we can get down to resolving differences while accommodating everyone. To be specific, we can seek better understanding and control over perceptual and conceptual states of awareness in a decidedly invariant environment (awareness) of continuous change, where intelligence is any organism or mechanism of the attributes of an environment that affects such awareness and consciousness.


We can also define semantics as the correspondence of both the denotative and connotative states of conception to the set of all possible functions given the attributes of the environment. Now, if you want to know more details, you will need to put me on a retainer and pay me.

The world is lacking an operational definition of intelligence that can lead to more exact thinking and to computer systems that help people to think more clearly and effectively. A good operational definition ought to be:

  1. Specific enough to be implemented as a procedure, one that can be easily and readily followed.
  2. Motivational, manageable and measurable, such that it leads to invention, progress and successful outcomes.
  3. Attainable, such that any baby can use the organism to sense and control entities and activities in its world or environment.
  4. Relevant, in that it is determinate of what is to become significant;
  5. Timely; and
  6. Salient.

This definition (stated below) addresses two questions:

  • Where do we get the intelligence to deal with a growing, changing reality?
  • How does intelligence work to make changes in our favor?

Most researchers agree that human intelligence is observed in behavior, in particular in language and through speech acts. The Sapir-Whorf theory of linguistic relativity was summarized by the semanticist Stuart Chase, when he stated:

“First, that all higher levels of thinking are dependent on language. Second, that the structure of the language one habitually uses influences the manner in which one understands his environment. The picture of the universe shifts from tongue to tongue.”

Restating this linguistic theory as a systems theory and in terms of analytic and computational engineering, notational engineer Jeffry Long wrote:

“First, that all abstract thinking is dependent upon the existence or invention of notational systems. Second, that the underlying ontological inventions of the notational system one habitually uses influences the manner in which one understands his environment. Acquiring literacy in a major notation causes us to add a new dimension to our picture of the universe.”

Based on twenty-seven years of intimate experience, I can restate Tammam Adi’s theory of semantics based on Classical Arabic, in this way:

First, living in the world is a growing, expanding experience or (ontogenic) process in which we make things (speech, nouns, names; things, artifacts, etc.). The words of language are made of abstract structures referencing bits or segments of this growing/making reality that we construct and utilize for common edification and understanding.

Second, the growth in common and social sense, along with modern languages, rests on ontogenetic intelligence in the organism of mind and on the success of its notational system: its elementary (ontogenic) processes and semantic rules, and its recognizable symbols (e.g. alphabets) and system of writing. Collectively, we call these “ontological inventions” for making progress.

Third, word structure is composed of clusters or configurations of ontological inventions, involving and representing both real and abstract entities and activities, arranged in such a way as to be productive of understanding (of making sense, meaning, things).

With Adi’s theory of (algebraic, axiomatic) semantics, it is possible to specify the ordinary conditions and ontogenic controls of sapience in the following concise and formalized way:

There is a self-organizing mechanism (regulating schemata) comprising:

  1. the polarity of an abstract entity, representing engagement conditions (G), distinguishing the involvement and participation of oneself and others (G = {Self, Others}) in;
  2. a symmetrical relationship (R) crossing the polarity of an abstract procedure, representing an ontogenic orientation and boundary conditions (T = {Open, Closed}, and R = T × G) for;
  3. a set of invariant and elementary processes (P = {assignment, manifestation, containment}) being structured by the abstract entity, using the polar procedure for growing and making (sense, understanding, artifacts, etc.) and;
  4. which schematic arrangement of such entities and activities generates symbolic and semantic operations (syntactically) carried out or produced (i.e. interpreted) by enacting them (via speech-acts, etc.).
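The sets named in the numbered components can be enumerated directly. The sketch below builds nothing beyond what is listed; it simply shows the membership of G, T, R = T × G and P:

```python
from itertools import product

G = {"Self", "Others"}  # engagement conditions (polarity of an abstract entity)
T = {"Open", "Closed"}  # boundary conditions (polarity of an abstract procedure)
R = set(product(T, G))  # symmetrical relationship R = T x G
P = {"assignment", "manifestation", "containment"}  # invariant elementary processes

# R crosses the two polarities, giving |T| * |G| = 4 members.
for pair in sorted(R):
    print(pair)
print(len(R))  # -> 4
```

The cross product makes the symmetry of R visible: every orientation in T applies equally to Self and to Others.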

We call this intelligence and we say: “Intelligence is the organism of a mind uniting (abstract and real) entities and activities in such a way that they are productive of regular changes from the beginning until the end.”

The Semantics of Qualia

The Wikipedia entry defines Qualia thus:

Qualia (/ˈkwɑːliə/ or /ˈkweɪliə/; singular form: quale, Latin pronunciation: [ˈkwaːle]) is a term used in philosophy to refer to individual instances of subjective conscious experience. The term derives from a Latin word meaning “what sort” or “what kind.” Examples of qualia are the pain of a headache, the taste of wine, the experience of taking a recreational drug, or the perceived redness of an evening sky.
One might argue on this evidence that the definition applies only to some subjective qualities of a macro and external experience, while the most subjective experience of the organism must be, can only be, that internally generated experience of the individual self. The quale of inner experience cannot be a “macro” quantity, symbol or component, such as the amount of pain, or even the word, or the uncountable shades of red. I can personally attest that one may know pain without also knowing how to interpersonally express or symbolize it.

I am not alone in believing that “qualia,” if it be an identifiable sort or kind of particular, salient to awareness and consciousness, must instead be a micro, molecular or morphogenetic quantity, representable in an associative network of firmly grounded states (grounded in physical laws and causality).

I am aware, for example, that my own inner experience is conditioned by the homeostasis of the structure and function of my central nervous system: not only the brain, but the brain and its sensors along with the metabolism. The objects of my inner experience are felt and reflected upon because I am emotionally invested in being here and now and in being me (the present particular “I am”).

This emotional investment (from which one feels things) forms a feedback loop, caused by the modal transformations of exogenous matters of the ecosystem and interpersonal realities into the conscious endogenous energy of self-realized experience. It ought to go without saying that I am also emotionally invested in the modern social world (I have been raised with an American and interpersonal worldview), and I am socially, professionally and politically engaged in interactions with others.

A worldview is more than just a belief, opinion or perspective. A worldview is a framework of ideas and beliefs through which an individual, group or culture interprets their conditions of existence. I have more recently been developing the idea that the modern ethnographic worldview is not an invention or a construction; rather, it is an expression of poiesis: a creation or production of that which is named by the combining roots of organism.

This expression must be seen in light of both morphogenic and “ontogenic” properties, in that there is a set of semantic rules that govern ontogenesis (i.e., the growth of the morphogenic fields of language from simple to far more complex forms of expression). The macro field of “human reality” is seen as an expression of this biogenic field of organismic poiesis, rather than as a social, cultural, literary or political construct, or any other ethnographic construction.

Poietic semantics operates (in intelligent people) by unifying and focusing intuitive cognitive processes (onto rudimentary elements and operations of poiesis and organismic function) and by regulating interprocess interactions and individual (endogenous semiotic) rulemaking. From more than thirty years of personal experience, I can vouch for the idea that the uptake, adoption and retention of a poietic worldview affects associative thinking in intelligent people: it anchors them; it gives them an objective and transformative handhold in a sea of assumptions.

A poietic worldview engenders in its learner an exactness in the immediate conception of the elements and operations of poiesis; it is a concretion of Daniel Kahneman’s System 1 type of thinking (not AI, and not analytic or reductionist). It synthesizes the components, elements and influences of associative thinking, making such thinking that much more concrete and reliable.
Here is a short video overview I prepared recently that can be shared and downloaded.  

If words are just labels with arbitrary meaning, and if everything is relative and all theories are tolerated, what is left to hold societies, or anything else, together? What is there to keep it all from flying apart: God, love, magnetism? Lacking a security we cannot afford anyway, without any certainty and with only our assumptions in our pockets, what is the future of mutual understanding?

How can we transcend words, definitions and extensions of words, and more words, definitions and extensions of words, and both real and petty arguments caught in endless loops? Like the philosopher Edmund Husserl, I believe that beneath the changing flux of human experience and awareness there are certain invariant structures of consciousness. Such structures appear to be ethnographically grounded and necessary to mutual understanding between members of a society.

We may inquire into what is necessary to achieve mutual understanding. We can begin by allowing that human “understanding” is a collection of mental and physical (psychophysical) and ethnographically oriented processes, grounded in cognates originating in and concerning human nature.

Why “cognates” instead of “words”? Isn’t that just another word? A cognate may be defined as any one of a number of objects or entities allied in origin or nature. Irrespective of their grammatical role in language, for example, the entities referred to as “self” and “others” are functional examples of “psychophysical cognates” that find their origins in thought (i.e., as abstractions) and in all the evolutionary conditions of human nature; essentially, in all of what matters to human being.

It is nothing other than the salience or relevance of self and others to the domain of human knowledge that remains invariant. Some may scoff at this at first reading, finding it silly to claim that the salience of self’s and others’ knowledge is what makes it significant; but it is more profound than that. Nor is it a cop-out, as we have a precise, thoroughly tested and published mathematical model that holds out promise of being a sound basis for an ethnomethodological framework.

Relevant to this essay is the fact that a language includes the apparatus for composing and encoding knowledge, and for recording the extensions of invariant cognates of knowledge and understanding, and their salient configurations, as they are decoded, adopted, retained, rearranged, reassessed, redeveloped (or changed) over time. An important observation is that knowledge is not reduced to salience or relevance; it becomes so. This is why reductionist methods of AI have not worked and will not work.

Reductionism is characterized by dissection and separation of parts. Linguists, for example, dissect language into parts of speech and parse or translate the grammar of each sentence in a language into true/false assertions and propositions. When my colleague, computer scientist Tom Adi, and I began this investigation into language in the early 1980s, we came instead from the point of view of what language actually accomplishes.

This approach to investigating language provided findings that support an ethnographic philosophy of language. As such, it is concerned with four central problems: the nature of preserving meaning and transmitting knowledge, the use of language to accomplish such goals, language cognition, and the relationship between language, information and reality. According to Adi’s theory, a language can be defined as a unity: a synthesis of cognates confirmed in experience by the matters at hand:

  1. A language is the image or projection of a synthesis of cognates in a domain of interactions by which the comprehensive recognition of perceptible objects and sensible (and successful) activities becomes possible; leading to awareness and general consciousness.
  2. In human languages, such cognates are represented by phonemes (in spoken languages) and morphemes (in written ones). These are used to compose and order the knowledge necessary to consciousness and mutual understanding.

This ethnographic view of language is my own philosophy synthesized from my understanding, practice and work with Adi’s theories and scientific observations, axioms and propositions, i.e. his new “science of relevancy” (i.e., a way to analyze the relations in a domain of knowledge representations (given language as inputs, e.g. text) to determine something relevant (the output) to matters at hand). As such it is concerned with four central problems: the nature of meaning and making sense, the purpose of language use, language cognition, and the relationship between language, information and reality.

Adi and I did not make use of reductionist techniques, or of materialistic or linguistic dogma, in arriving at Adi’s axioms and propositions or in developing the applicable computer models and algorithms, mainly because such techniques were simply not applicable. This is obviously a quite different philosophy of language from what is generally accepted today. It is not the way language is normally approached and studied by linguists, psychologists and logicians.

We take it as self-evident that human language, which ranges over a domain of human interactions, augments (adds to) human knowledge and expands cognition, awareness and consciousness (mutual understanding). Perhaps you do too? As I was there to learn of Adi’s theory first hand, I can personally attest to this claim.

Using a few propositions and a selected procedure from Adi’s theory, I am able to induce and explain the sensibility and perceptibility of expressions in languages I do not speak. Because all human language must range over this domain of interactions, any language can be translated into any other language ranging over the same domain. When Adi and I started working together in the early eighties, that was our hypothesis; it is what we set out to investigate, beginning in the area of developing automatic language translation systems.

We noticed that there were large differences in the ways translators and interpreters choose their words and use alternative phrases and idioms. Their choices (which they often kept on closely guarded index cards) appeared to be based on subtle though perceptible differences in translating the reference and sensibility of a text or message along with the words.

Looking to the linguistic literature, we found that there is no procedural or computational theory of such ethnomethodological practices among translators, or in the classical traditions of interpretation. Language translation on computers is largely about word for word or phrase for phrase replacement. It tries to make the output sensible but it is emphatically not about making matters perceptible –that is left to people checking the machine translation. This sense of making matters perceptible also characterizes the difference between what interpreters and translators do. We had as an objective the development of intelligent technology and we wanted our system to be like interpreters –making matters more perceptible and sensible.

We decided to begin our own search for a theory of meaning that could be a foundation for translating not only the language people use but what people actually mean to say. The capacity to speak and converse in a dozen languages helped Tom Adi understand the ethnographic and “knowledge representation” problem, and being a computer scientist made him uniquely capable of performing this semantic study. I commissioned the completion of the study.

Thus, Adi did not begin his investigation into language from the point of view of the parts of speech or grammar. Having a sound idea of what language actually accomplishes (it helps us synthesize knowledge in our domain of interactions; it helps us make sense of natural processes, objects and events), Adi sought to determine exactly how this is accomplished. We began looking for a language in which to begin a study, because how this synthesis is accomplished must be determined, with all possible precision, empirically, i.e. by way of observation and experiment.

Neither of us had any preference for which language became the basis of the study; yet, based on discussions with linguists and colleagues, we held out this idea of “a perfect text” as an exemplar to begin with: a perfect text is one in which the language, and the meaning it conveys, is perfectly clear and completely unambiguous. Adi expected to find natural laws using such a text. He initially focused his efforts on exactly how language might be synthesized according to natural laws, in light of Einstein’s relativity and the modern standard model of physics.

Adi studied textbooks on the analysis of the nature of the hydrogen atom, speculations over the smooth surface of water, and texts on chemical bonding as well, reading summaries and overviews written by Russian physicists and German chemists. He reasoned that language must behave in ways similar to atoms, in the way they bond and form new or changed bonds in chemical reactions. He felt that smoothness at the surface of water and continuity in language must be related, and he reflected on ways to establish this fact; i.e., he sought processes that are somehow similar to the operations and laws of particle physics and chemistry. This means language is to be seen as a natural rather than a social phenomenon. The processes of atomic and chemical bonding drive biophysical processes, and the operation or behavior of all of it is well defined, according to natural laws.

Adi began looking for natural laws and processes that somehow regulate the ways people use language to interpret something and unify it in their own awareness and intuition, and he found them. His observations and experiments were later published (in Semiotics and Intelligent Systems Development, 2007) as a set of proofs of a semantic theory of ancient Arabic. By the time of that publication, we had already rendered the theory into an axiomatic model of language, and we further developed in situ methods and computer algorithms for synthesizing ethnographic “knowledge-types” (Plato’s eide) from text and messages written in any of the English, French or German languages. (See this peer-reviewed paper for more information.)

The major difference between the work of cognitive scientists and linguists, and Adi’s, is the frame of inquiry. Adi asks: what task does language have, what problem must it solve, in order to accomplish what it does? It is the formulation and analysis of this task which is the starting point and primary focus of Adi’s investigation.

Language is often cast in terms of modern communications science, while the problem language solves is a memory problem, not a communications problem. This is regrettable on many levels, though there is no need to dwell on that here. What we found is that the cognates of families of human languages organize, encode and range over a kind of permanent memory space, accommodating the definite domain of human knowledge while being constrained by the more indeterminate domain of human interactions (ethnographic activities).

The advent of the phonetic alphabet gave the world its most efficacious form of interpersonal memory: a solution space. Phonetic alphabets represent a synthesis of elementary processes and conditions (i.e. laws, semantics, poiesis: to make something determinate) ranging (via Adi’s micro-syntax) over the domain of human interactions. The phonetic alphabet was the world’s first recording technology: a world-famous device for more permanently recording the dimensions of mutual understanding, or human consciousness, by encoding them within the long-term memory or name-space of human language.

In summary, human language appears to be a recording system. It provides the means and methods to encode (and to access) knowledge from the domain of human interactions for all generations. It was because of this realization and formulation that Adi’s semantic study was successful and we immediately derived useful operational knowledge that we could and did turn into state-of-the-art technology.

It is true that the industry is hooked on analytics. But don’t you know that analysis and synthesis go hand in hand? Where are the developers, the entrepreneurs, the organizers? Which of these are you? Don’t you believe people need ways to synthesize perceptible knowledge salient to the matters at hand?

Solving for the Meaning of x

What is “meaning” in questions such as: what is the meaning of life? It is the same as asking what is the truly real significance of life. Any answer is only theoretical.  Intuitively, any answer must be universal.  The truly real significance must, by definition, be significant for everyone.

That makes the notion appear either exaggerated or rather improbable. The universality of such a theory of meaning would rest on the multitude of “real” things that the theory perceives as salient, pertinent properties and relations in “real life” and to humanity in general. It would have to include everything we can imagine in experience. How could that be possible?

It would also be necessary for such a theory to correspond with every “real” experience, in just enough (and no more) dimensions than are necessary to make such experience “really” meaningful. Intuitively, it must capture or cover any continuous or discrete distributions or extensions of “real” natural structure, elements or processes, in three dimensions of space and one dimension of real presence or immediate existence x.

It is very complex, but not impossible. On the one hand, one cannot help but wonder how to deal with such complexity. On the other hand, we notice that very young children do it. Four-year-old children seemingly adapt to complexity with very little problem. It is the sophistication and obfuscation that come later in life with which they have problems. At four, children are already able to tell the difference between sensible and nonsensical distributions and extensions of reality, irrespective of whether they are of the continuous or discrete variety.

These continuous or discrete distributions and extensions bear some additional explanation, mainly due to their overarching significance in this context. First, they establish a direct correspondence with our most immediate reality. Every time we open our eyes, we see a real distribution of colored shapes. Such a real distribution is nature’s way of communicating its messages to consciousness, via real patterns.

Second, perceived distribution patterns directly suggest the most fundamental ontological concept in theoretical physics: a field configuration, which in the simplest example of a scalar field can be likened to a field of variable light intensity. That life is intense and that meaning is intense is not something one ought to have to prove to anyone. I will come back to intensity in another post, as I want to continue commenting on presence, or real and immediate existence x. We must, in practice and in effect, solve for the real meaning of x, as you see.

Meaning in this case, so defined, is literally the significance of truth or, more appropriately, what one interprets as significant or true within the dimensions of intense messages or information pertaining to real life as specified above. So we must begin, undoubtedly, by defining what true is; then, proceeding to the next step, we ought to define the elements and structure of one’s interpretation of this truly significant nature of life. I did it a little backwards in this respect, and this has always created a bit of confusion that I did not see until recently.

One begins any such analysis by examining a subject’s real elements and structures. For the subject of truth, one also searches the literature, where it is well represented. Such a search conducted on the subject of truth brings a broad range of ideas. To try to make a taxonomy of ideas from the varied opinions found there would turn out to be an exercise in incoherence, but it ought to be acceptable to reference some theories and practices that have been adopted.

Ibn al-Haytham, who is credited with introducing the scientific method in the 10th century A.D., believed, “Finding the truth is difficult and the road to it is rough. For the truths are plunged in obscurity” (Pines, 1986, Ibn al-Haytham’s critique of Ptolemy, in Studies in Arabic Versions of Greek Texts and in Medieval Science, Vol. II, Leiden, The Netherlands: Brill, p. 436). While truths are obscured and obfuscated, there can be no doubt that the truth does exist and is there to be found by seekers. I do not accept views or opinions that the average layman is too stupid, or otherwise not equipped, to figure it out for himself.

The Modern Correspondence Theory of Truth.

While looking for the truth, it helps to know what shape it takes or what it may look like when one happens upon it or finds it lying around, exposed to the light. According to some, truth looks like correspondence between one thing or element and another. Scientists have long held a correspondence theory of truth. This theory of truth is, at its core, an ontological thesis.

It means that a belief (a sort of wispy, ephemeral, mostly psychological notion) is called true if, and only if, there exists an appropriate entity, a fact, to which it corresponds. If there is no such entity, the belief is false. So you see, as we fixate on the “truth of a belief” (a psychological notion, a thought of something, to be sure, but some concrete thing nonetheless), we see that one thing, a belief, corresponds to another thing, another entity called a fact. The point here is that both facts and beliefs are existing, real entities; even though they may also be considered psychological or mental notions, beliefs and ideas are reality.

While beliefs are wholly psychological notions, facts are taken to be much stronger entities. Facts, as far as the neoclassical correspondence theory is concerned, are concrete entities in their own right, taken to be composed at least of particulars and properties and relations, or universals. But universality has turned out to be elusive, and the notion is problematic for those who hold personal or human beliefs to be at the bottom of truth.

Modern theories speak of “propositions,” which may not be any more real, after all. As Russell later put it, propositions seem to be at best “curious shadowy things” in addition to facts (Russell, Bertrand, 1956, “The philosophy of logical atomism”, in Logic and Knowledge, R. C. Marsh, ed., London: George Allen and Unwin, pp. 177-281, originally published in The Monist in 1918; p. 223). If only he were around now; one can only wonder how he might feel, or how he might rephrase.

In my view, the key features of the “realism” of correspondence theory are:

  1. The world presents itself as “objective fact” or as “a collection of objective facts” independently of the (subjective) ways we think about the world or describe or propose the world to be.
  2. Our (subjective) thoughts are about the objective facts of that world as represented by our claims (facts) which, presumably, ought to be objective.

(Wright (1992), quoted at the SEP, offers a nice statement of this way of thinking about realism.) This sort of realism, together with representationalism, is rampant in the high-tech industry. Nonetheless, these theses are seen to imply that our claims (facts) are objectively true or false, depending on the state of affairs actually expressed or unfolding in the world.

Regardless of one’s perspective, metaphysics or ideals, the world that we represent in our thoughts or language is a socially objective world. (This form of realism may be restricted to some social or human subject-matter, or range of discourse, but for simplicity we will talk only about its global form as related to the realism above.)

The coherence theory of truth is not much different from the correspondence theory in respect to this context. Put simply, in the coherence theory of truth, a belief is true when we are able to incorporate it in an orderly and logical manner into a larger and presumably more complex web or system of beliefs.

In the spirit of American pragmatism, almost every political administration since Reagan has used the coherence theory of truth to guide national strategy, foreign policy and international affairs. The selling of the War in Iraq to the American people is a study in the application of the coherence theory of truth to America’s state of affairs as a hegemonic leader in the world.

Many of the philosophers who argue in defense of the coherence theory of truth have understood “Ultimate Truth” as the whole of reality. To Spinoza, ultimate truth is the ultimate reality of a rationally ordered system that is God. To Hegel, truth is a rationally integrated system in which everything is contained. To the American Bush dynasty, and to W. in particular, truth is what the leaders of their new world order say that it is. To Adi, containment is only one of the elementary processes at work creating, enacting (causing) and (re)enacting reality.

Modern scientists break the first rule of their own skepticism by being absolutely certain of information theory.

Let me be more specific. Modern researchers have settled on a logical definition of truth as a semantic correspondence by adopting Shannon’s communications theory as “information” theory. Object-oriented computer programmers who use logic and mathematics understand truth as a Boolean table, and correspondence as per Alfred Tarski’s theory of semantics.
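
As a small illustration of the “truth as a Boolean table” view (a sketch of my own, for the reader's benefit; it is not drawn from Tarski’s formal work, and the function names are mine), one can enumerate a logical connective over every assignment of truth values:

```python
# Sketch: the programmer's "truth as a Boolean table."
# truth_table() enumerates a Boolean expression over all assignments.
from itertools import product

def truth_table(expr, variables):
    """Return (assignment, result) pairs for every True/False combination."""
    rows = []
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        rows.append((env, expr(**env)))
    return rows

# Material implication: p -> q is false only when p is true and q is false.
implies = lambda p, q: (not p) or q
for env, result in truth_table(implies, ["p", "q"]):
    print(env, "->", result)
```

Such a table exhausts what this style of programming can say about truth: correspondence between statements, with no reference to the world.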

Modern computer engineers have adopted Shannon’s probabilities as “information theory” even though, on the face of it, the probabilities that form such an important part of Shannon’s theory are very different from messages, which stand for the kinds of things we most normally associate with objects. To his credit, however, the probabilities on which Shannon based his theory were all grounded in objective counting of the relative frequencies of definite outcomes.

Shannon’s predecessor, Samuel Morse, based his communication theory, which enhanced the speed and efficiency with which messages could be transmitted, on studying frequently used letters. It is the communications theory I learned while serving in the United States Army. It was established by counting things, objects in the world: the numbers of the different letter types in the printer’s box.
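
The kind of counting Morse did, and the relative frequencies on which Shannon built, can be sketched in a few lines. The sample sentence and function names below are mine, purely for illustration:

```python
# Sketch: letter counting (Morse) and the entropy such counts imply (Shannon).
import math
from collections import Counter

def letter_frequencies(text):
    """Relative frequency of each letter in the text, ignoring non-letters."""
    letters = [c for c in text.lower() if c.isalpha()]
    counts = Counter(letters)
    total = sum(counts.values())
    return {c: n / total for c, n in counts.items()}

def shannon_entropy(freqs):
    # H = -sum(p * log2 p): average uncertainty in bits per symbol.
    return -sum(p * math.log2(p) for p in freqs.values())

freqs = letter_frequencies("the quick brown fox jumps over the lazy dog")
print(f"{shannon_entropy(freqs):.2f} bits per letter")
```

Note that nothing in these counts refers to what any letter or word means; they are proportions of things in an environment, exactly as described above.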

When I entered the computer industry in 1978, I was somewhat astonished that Shannon’s theory of communications was already established in the field of information science, before word processors and “word” processing were common. I confirmed that belief by joining with information scientists for a while, as a member of the American Society of Information Science (ASIS).

While at ASIS, I found out that Shannon’s probabilities also have an origin in things, much like Morse code, although they ought in no way be considered symbols that stand for things. Instead, Shannon’s probabilities stand for proportions of things in a given environment.

This is just as true of observationally determined quantum probabilities (from which Shannon borrowed on the advice of the polymath John von Neumann) as it is of the frequencies of words in typical English, or the numbers of different trees in a forest, or the countable blades of grass on my southern lawn.

Neither Morse code, nor Shannon’s communications theory, nor any “information” theory directly addresses the “truth” of things in or out of an environment, save Adi’s. The closest any computer theory or program gets to “interpretation” is by interpreting the logical correspondence of statements with respect to other statements, both with respect to an undefined or unknown “meaning”: the truth or significance or unfolding of the thing in the world. It takes two uncertainties to make up one certainty according to Shannon and von Neumann, who had two bits of uncertainty, 1 and 0, searching for, or construing, a unity.

That is not us. That is not our scientific program. Our program was not to construe a unity, or “it” from “bit.” That is the program of the industry because, almost like clocks, everyone in industry marches in lock step: step by step, tick by tock, taking stock.

Adi began with the assumption that there is an overarching unity to “it.” He then studied how a distribution of signs of “it” (i.e., the symbols that make up human languages describing “it”) manages to remain true to the unity of “it,” despite constant change. Such change, it can be argued, arrives in the guise of uneven or unequal adoption, selection and retention factors, as seen in the overwhelming evidence of a continuous “morphogenesis” in the formation, change and meaning of words, facts and other things, over eons.

To determine how people interpret the intensity and sensibility, or “information,” projected with language by means of speech acts (with messages composed of words), Adi investigated the sounds of the symbols used to compose a real human language, at a time when most people were inventing artificial, specialized, logical and less general languages. Adi chose to study the unambiguous sounds of Classical Arabic, which have remained unchanged for 1400 years to the present day. That sound affects what we see is in no way incidental trivia or minutiae.

At the least, it helps truth break free of being bound to mere correspondence, a relegation reminiscent of mime or mimicry. Adi’s findings liberate truth to soar to heights more amenable, such as high fidelity, than those that burn out in the heated brilliance of spectacular failure. In fact, in early implementations of our software we had an overt relevance measure called “fidelity” that users could set and adjust. It speaks to the core of equilibrium that permeates this approach to conceptual modelling, analysis, searching for relevance and significance, subject and topic classification, and practical forms of text analytics in general.

Tom Adi’s semantic theory interprets the intensity, gradient trajectory and causal sensibility of an idea presumably communicated as information in the speech acts of people. This “measure” of Adi’s (we may call it “Adi’s Measure”) can be understood as a measure of the increase in the magnitude (intensity) of a property of psychological intension (e.g., a temperature or pressure change, or a change in concentration) observable in passing from one point or moment to another. Thus, while invisible, it can be perceived as the rate of such a change.

In my view, it is in the action of amplitude, signifying a change from conceptual, cognitive or imaginative will or possibility to implementation or actualization in terminal reality. Computationally, it can be used as the vector formed by the operator ∇ acting on a scalar function at a given point in a scalar field. It has been implemented in an algorithm as an operating principle, resonating, acting/reacting (revolving, evolving) as a rule, i.e., being an operator: conditioning, i.e., coordinating/re-coordinating, a larger metric system or modelling mechanism (e.g., Readware; text analytics in general).
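
For readers who want the mathematics of ∇ made concrete, here is a minimal numerical sketch of a gradient at a point of a scalar field. The intensity field and the central-difference scheme are my own illustrative choices; this is the textbook operator, not Readware’s actual algorithm:

```python
# Sketch: the vector formed by the operator "del" (the gradient) acting on
# a scalar function at a point, approximated by central differences.

def grad(f, x, y, h=1e-6):
    """Numerical gradient of the scalar field f at the point (x, y)."""
    dfdx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    dfdy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return (dfdx, dfdy)

# A simple "intensity" field, in the spirit of variable light intensity.
intensity = lambda x, y: x**2 + y**2
print(grad(intensity, 1.0, 2.0))  # approximately (2.0, 4.0)
```

The returned vector points in the direction of steepest increase of the field, which is what makes it a natural stand-in for a rate of change in intensity.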

I mention this to contrast Adi’s work with that of Shannon who, in order to frame information according to his theory of communications, did a thorough statistical analysis of ONLY the English language. After that analysis, Shannon defined information as entropy, or uncertainty, on the advice of von Neumann. The communication of information (an outcome) involves things, which Shannon called messages, and probabilities for those things. Both elements were represented abstractly by Shannon: the things as symbols (binary numbers) and probability simply as a decimal number.

So you see, Shannon’s information represents abstract values based on a statistical study of English. Adi’s information, on the other hand, represents sensible and elementary natural processes that are selected, adopted and retained for particular use within conventional language (as a mediating agency) in an interpersonal or social act of communication. Adi’s information is based upon a diachronic study of the Arabic language and a confirming study in fourteen additional languages, including modern English, German, French, Japanese and Russian, all having suffered the indisputable and undeniable effects of language change, both different from and independent of the evolution of language, or the non-evolution, as it were, of Classical Arabic.

Adi’s theory is a wholly different treatment of language, meaning and information than either Shannon or Morse attempted or carried out on their own merits. It is also a different treatment of language than information statistics gives, as it represents the generation of salient and indispensable rules in something said or projected using language. It is different from NLP, or Natural Language Processing, which depends heavily on the ideas of uncertainty and probability.

A “concept search,” in Adi’s calculation and my estimation, is not a search in the traditional sense of matching keys in a long tail of key information. A “concept search” seeks mathematical fidelity, resonance or equilibrium and symmetry (e.g., invariance under transformation) between a problem (a query for information) and possible solutions (i.e., “responses” to the request for information) in a stated frame or window (context) on a given information space (document stack, database). A search is conducted by moving the window (like a periscope) over the entirety of the information space in a scanning or probing motion. While it ought to be obvious, we had to “prove” that this approach works, which we did in outstanding form, in NIST- and DARPA-reviewed evaluations.
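
The scanning motion itself can be sketched in a few lines. To be clear about what is hypothetical here: the cosine score below is an ordinary stand-in I have chosen for illustration, not Readware’s actual fidelity/resonance measure, and the document, names and window sizes are invented:

```python
# Sketch: slide a fixed-size window over an information space and score
# each position against a query. Cosine similarity stands in for the
# fidelity measure; it is NOT the actual Readware scoring function.
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bags of words (Counters)."""
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def scan(query, words, window=8, step=4):
    """Move a window over the word stream; return (best score, window start)."""
    q = Counter(query.lower().split())
    best = (0.0, 0)
    for i in range(0, max(1, len(words) - window + 1), step):
        score = cosine(q, Counter(words[i:i + window]))
        best = max(best, (score, i))
    return best

doc = "the theory of meaning ranges over a permanent memory space of language".split()
print(scan("memory space", doc))
```

The point of the sketch is the probing motion: the query is compared against each framed context in turn, rather than matched against isolated keys.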

Adi’s theory is not entirely free of uncertainty as it is, after all, only theoretical. But it brings a new functionality, a doctrinal functionality, to the pursuit of certainty by way of a corresponding reduction of doubt. That is really good news. In any case, this is a theory that deserves and warrants consideration as a modern information theory, one that stands in stark contrast to the accepted norm or status quo.

Time To Reconsider and Resolve.

The political, religious and aristocratic coloring characterizing the present state of humanity subverts the validity of human cultures by monopolizing and maintaining “trust in the system”. One needn’t disrupt or subvert the present politico-economic systems in order to illuminate the space of language and correct the course and currency of the relevant orders.

What compels me and motivates the subject is the admonition that the public interest requires doing today those things that men of intelligence and goodwill would wish, 5 or 10 years hence, had been done. My concern is with humanity and a future filled with depression fueled by the imminent collapse of public institutions founded on misplaced “trust in the system.”

Modern politico-economic systems, or orders, as they are called, are the means by which the state and its enterprising exponents sustain this misplaced trust by maintaining the “ignorant bliss” of their culture. The way societies are controlled is with an insidious method of thought control inherited by modern heads of state and economy. This method is so insidious that almost all of the present dilettante, public and professional members of society have been beguiled, falling victim to its perplexing effects.

One of the more serious side effects of this mortal condition is a repressive and subversive mentality that exists at all levels in nearly all public societies, even in democracies. The foundation of this condition, now afflicting almost all of humanity, originates in the monopolization of the trust and validity of the word by the political systems and orders of history, whose succession has been written into the permanent space of language. Modern leaders are found conniving at a “new world order” to strengthen the monopoly.

Political, academic and economic leaders along with the exponents of each of the political states and systems support each other; often while neglecting and hindering the progress, intellectual development and well-being of the people they pledge to govern, educate and support. This is done in order to increase the power and wealth of the members of the order or “insiders.”

Monopolization of the word has led to rampant skepticism and a fracturing use of, and dependence on, metaphor and politically charged dialog. This serves only the desire for the accumulation of wealth in the hands of the few leaders of the political state and economy. Proponents of the currency of the system have been beguiled by its charms as exponents of their own wealth and reputation.

The insidious effect, hardly unintentional, is to curb, retard or otherwise control the progress of humanity, which causes considerable suffering and undefinable harm: disarming the people by preoccupying, burdening and binding them with their own concerns for their welfare and well-being and with the tiresome rule of mediocrity. That leaders are skilled negotiators of their politico-economic domain comes sharply into focus in the manifestation of unbridled greed among their financing and banking exponents.

Monopolization of the word by institutions and the political establishment is perpetuated in the following ways:

The most vocal critics of political repression are outsiders who are easily silenced by charges of not having the proper training or credentials. The method is to show evidence of their being quoted out of context, or of misgivings that spring from radical purpose, ignorance or simple misunderstanding.

The reasoning of insiders and experts has a character different from that of the critic. This reasoning is far more obtuse, often employing as evidence the very thing it needs to prove, as was done in the justification for making war in Iraq. In most literary and social sciences, such as psychology, linguistics and communications, information and computer science, important aspects of language and literature (sensibility, generous tastes, wide experience) are subverted by outright speculation or speculative model-building.

Poststructuralism is self-defeating, offers no explanation of what language really is, and its proponents seem purposefully willing to act as if they are ignorant of the fact that the relevant institutions are in need of reform. While theories of “language” and “semantics” are widely quoted by critics and writers in the new media, and are tenaciously entrenched in universities, what is quoted is largely a mediocrity of unexplained standards of political devising: a local currency of wealth and reputation.

All this constitutes the tiresome and convoluted prose that exists in academic disciplines, law and literature, and in the use of technical vocabulary in private and public communities. It makes evaluation of a discipline or community difficult, making course correction nigh impossible and rendering it unfathomable to find and establish fundamentally correct processes and criteria for their evaluation. Thus:

  • civil understanding of what is happening is diverted by political aims or greed.
  • civil enforcement has become indifferent, perhaps intentionally so.
  • the very notion of a safe and secure future, without terrorism or inhumanity, is blocked and shut out by ignorance, immediate skepticism and outright disbelief.

The vernacular of the exploding political administrations, along with that of the public and popular press, quantitatively increases the number of words. They press the words into temporal media, aiming to monopolize them, while apparently qualitatively emptying the words of substance and content through excessive dispersion. The cost of collecting, storing, processing and publishing the indifferent, often empty or senseless, words is borne by society. While enterprising insiders increase their own lot in the currency of wealth and reputation, they add far too little to the qualitative and permanent space of language.

Appreciation of a culture’s art is coincidentally assigned a temporal index of its value in the local currency of the day, often subverting the permanent qualities such work ought to project.

Enough? Is it time to think about the permanence of the future of humanity instead of the currency of wealth and reputation? Ought not a writer taking up the pen aim to enliven the permanent space of language? If there exist ways to liberate society from the bonds of indifference, needn’t there already be criteria and processes for resolving how to do it?

The problem is that whole populations have lost their humanity through indoctrination into a (false) sense of security in which people have been lured into misplacing their trust. Modern generations seem to have lost the capacity for sustained attention, along with the ability to process exact thought for themselves.

The permanent space exclusive to humanity, and to no other living creature on earth, has been successfully hidden by covering it in a “modern mind,” something many people seem to have lost in the pages of a forgotten history while they have actually lost authority over their own thinking. This myth of the modern mind is so insidious that it infects the speech of everyone without exception, even though no one can say what or where that thing we call “mind” resides, or what it is, exactly. Its method is perplexing, and its consequences demean humanity.

While I cannot claim to know what it is, I know that the academic disciplines of philosophy and psychiatry invoke states of consciousness that do not exist, and perhaps that is where the mind was hatched, for to “have a mind” is surely nothing but a dubious figure of speech.

That no one can say where such a thing as a mind goes when a person supposed to have one dies, or can even manifest or display one for all to see, should be salient to even the dimmest of human intellects. Yet the popular concept of a thing or space called “a mind” persists. It persists because it serves those who would monopolize trust in the validity of the “word” (logos) to increase the currency of their wealth and reputation.

In order to confiscate the space of language from the people, authorities invented the fiction of the modern and popular mind so that the more permanent space of language could be obfuscated by the temporal politico-economic order of political peers and their banking exponents. If this so-called modern and popular mind exists at all, it is only as a phantom, controlled (curbed like the dog that it is) by exponents holding fast and firm to a misplaced trust in the temporal usurpers of authority over humanity.

Do people need to consider their own humanity in order for humanity to become more humane?

Why or why not?

A quotation commonly attributed to Albert Einstein reads: “Computers are incredibly fast, accurate, and stupid. Human beings are incredibly slow, inaccurate, and brilliant. Together they are powerful beyond imagination.”

The partnership between human beings and computers is long and enduring and there are so many examples of just how powerful the influence of computers really is. This was especially true after the debut of the personal computer, and again after the debut of the Internet that gets us connected today.

When spreadsheets came out we became better tabulators. When word-processing and spell-checkers arrived we became better writers. The widespread use of relational databases made it easier to collect, store and manage information making us more intelligent about larger collections of data.

Over the decades of computing, the costs of storing data have dropped to nearly nothing. In many cases, storing data on the Internet is free. The costs of collecting data have dropped significantly. There was a time, not so long ago, when the 300-baud modem was the most common way to connect, or be “on-line,” with another computer. The cost of downloading 10 megabytes over long-distance telephone lines was not inexpensive. Now people connect to the Internet over public wireless networks in most cities, offered free by many business establishments, and people now download a thousand times the amount of data moved in 1985.

But something went wrong. The five basic means and capabilities needed for intelligence are collection, storage, retrieval, analysis and dissemination. We have systems for collection, storage, retrieval and dissemination, but the systems we do have for analysis are not generally something anyone can run on a personal computer. Even when we can run them on a desktop PC, they are complex systems that require significant expertise to make them work well in limited areas of specialization.

Analyzing the patterns and ordering the data help us learn about the world and obtain better and more complete theories. Albert Einstein wrote: “Concepts that have proven useful in ordering things easily achieve such authority over us that we forget their earthy origins and accept them as unalterable givens. Thus they might come to be stamped as ‘necessities of thought,’ ‘a priori givens,’ etc. The path of scientific progress is often made impassable for a long time by such errors. Therefore it is by no means an idle game if we become practiced in analyzing long-held commonplace concepts and showing the circumstances on which their justification and usefulness depend, and how they have grown up, individually, out of the givens of experience. Thus, their excessive authority will be broken. They will be removed if they cannot be properly legitimated, corrected if their correlation with given things be far too superfluous, or replaced if a new system can be established that we prefer for any reason.”

Yet still, here and now in the twenty-first century, we lack knowledge of those things that are given in our individual, private experience and in our public, social experience. There is no model, no theory, by which we can know, count and measure the givens of experience. Einstein also wrote: “It can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple as possible without having to surrender the adequate representation of a single datum of experience.”

So it is a fair question to ask after the adequate representation of the givens of experience. It is reported that in a letter to his son, Einstein wrote: “Life is like riding a bicycle. To keep your balance you must keep moving.”

Isn’t it time to move on to a new way of thinking about intelligence and our means and capability to alter the structure and order of our independent, yet collective, reality? The video below defines simple, basic and abstract elements of thinking that could make it possible for computers to do more intelligent analysis in much simpler ways, and to help us become better thinkers in the process.