Archive for June, 2012

If words are just labels with arbitrary meanings, and if everything is relative and every theory is tolerated, what is left to hold societies, or anything else, together? What is there to keep it all from flying apart: God, love, magnetism? Granted a security we cannot afford, without any certainty and with only our assumptions in our pockets: what is the future of mutual understanding?

How can we transcend words, definitions and extensions of words, and more words, definitions and extensions of words, and both real and petty arguments caught in endless loops? Like the philosopher Edmund Husserl, I believe that beneath the changing flux of human experience and awareness there are certain invariant structures of consciousness. Such structures appear to be ethnographically grounded and necessary to mutual understanding between members of a society.

We may inquire into what is necessary to achieve mutual understanding. We can begin by allowing that human “understanding” is a collection of mental and physical (psychophysical), ethnographically oriented processes grounded in cognates originating in, and concerning, human nature.

Why “cognates” instead of “words”? Isn’t that just another word? A cognate may be defined as any one of a number of objects or entities allied in origin or nature. Irrespective of their grammatical role in language, for example, the entities referred to as “self” and “others” are functional examples of “psychophysical cognates”: they find their origins in thought (i.e., as abstractions) and in all the evolutionary conditions of human nature; essentially, in all of what matters to human beings.

It is nothing other than the salience, or relevance, of self and others to the domain of human knowledge that remains invariant. Some may scoff at this at first reading, finding it silly to claim that the salience of “self” and “others” to knowledge is what makes it significant; but the claim runs deeper than that. Nor is it a cop-out: we have a precise, thoroughly tested and published mathematical model that holds out promise of being a sound basis for an ethnomethodological framework.

Relevant to this essay is the fact that a language includes the apparatus for composing and encoding knowledge, and for recording the extensions of invariant cognates of knowledge and understanding, and their salient configurations, as they are decoded, adopted, retained, rearranged, reassessed and redeveloped (or changed) over time. An important observation is that knowledge is not reduced to salience or relevance; it becomes salient or relevant. This is why reductionist methods in AI have not worked and will not work.

Reductionism is characterized by dissection and separation into parts. Linguists, for example, dissect language into parts of speech and parse or translate the grammar of each sentence into true/false assertions and propositions. When my colleague, computer scientist Tom Adi, and I began our investigation into language in the early 1980s, we started instead from the point of view of what language actually accomplishes.

This approach to investigating language produced findings that support an ethnographic philosophy of language. As such, it is concerned with four central problems: the nature of preserving meaning and transmitting knowledge, the use of language to accomplish such goals, language cognition, and the relationship between language, information and reality. According to Adi’s theory, a language can be defined as a unity: a synthesis of cognates confirmed in experience by the matters at hand:

  1. A language is the image or projection of a synthesis of cognates in a domain of interactions, by which the comprehensive recognition of perceptible objects and sensible (and successful) activities becomes possible, leading to awareness and general consciousness.
  2. In human languages, such cognates are represented by phonemes (in spoken languages) and morphemes (in written ones). These are used to compose and order the knowledge necessary to consciousness and mutual understanding.

This ethnographic view of language is my own philosophy, synthesized from my understanding of, practice of, and work with Adi’s theories, scientific observations, axioms and propositions: his new “science of relevancy,” a way to analyze the relations in a domain of knowledge representations (taking language, e.g. text, as input) in order to determine something relevant (the output) to the matters at hand. As such it is concerned with four central problems: the nature of meaning and making sense, the purpose of language use, language cognition, and the relationship between language, information and reality.
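To make the shape of a “relevancy” analysis concrete, here is a deliberately naive sketch: text comes in, its sentences are related to a matter at hand, and the most relevant come out. Everything in it — the function name, the word-overlap scoring rule — is my own illustrative assumption, not Adi’s published model.

```python
def relevance(text, matter_at_hand):
    """Score each sentence of `text` by the share of its words that also
    appear in the matter at hand; return sentences ranked most-relevant
    first. (A toy co-occurrence measure, not Adi's published model.)"""
    matter = set(matter_at_hand.lower().split())
    ranked = []
    for sentence in text.split("."):
        words = sentence.lower().split()
        if not words:
            continue
        shared = sum(1 for w in words if w in matter)
        ranked.append((shared / len(words), sentence.strip()))
    ranked.sort(key=lambda pair: pair[0], reverse=True)
    return [s for score, s in ranked if score > 0]

hits = relevance("The river floods in spring. Taxes are due in April. "
                 "Spring rain swells the river", "river in spring")
print(hits[0])  # the sentence most relevant to the matter at hand
```

The point of the sketch is only the input/output contract: knowledge representations in, something relevant to the matters at hand out.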

In arriving at Adi’s axioms and propositions, and in developing the applicable computer models and algorithms, Adi and I made no use of reductionist techniques or of materialistic or linguistic dogma, mainly because they simply were not applicable. This is obviously a quite different philosophy of language from what is generally accepted today. It is not the way language is normally approached and studied by linguists, psychologists and logicians.

We take it as self-evident that human language, which ranges over a domain of human interactions, augments (adds to) human knowledge and expands cognition, awareness and consciousness (mutual understanding). Perhaps you take it so too? Having been there to learn of Adi’s theory first hand, I can personally attest to this claim.

Using a few propositions and a selected procedure from Adi’s theory, I am able to induce and explain the sensibility and perceptibility of expressions in languages I do not speak. Because every human language must range over this domain of interactions, any language can be translated into any other language ranging over the same domain. When Adi and I started working together in the early eighties, that was our hypothesis; it is what we set out to investigate, beginning in the area of automatic language translation systems.
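The hypothesis that languages ranging over the same domain are mutually translatable can be caricatured as pivoting through shared domain concepts. A minimal sketch, with invented two-word lexicons and concept labels of my own; it assumes nothing about Adi’s actual procedure:

```python
# Two tiny lexicons mapped onto one shared domain of interaction concepts.
ENGLISH = {"water": "LIQUID", "fire": "HEAT", "give": "TRANSFER"}
GERMAN  = {"Wasser": "LIQUID", "Feuer": "HEAT", "geben": "TRANSFER"}

def translate(word, source, target):
    """Translate by mapping a word to its domain concept, then to the
    target language's word for that concept (a pivot, or interlingua).
    Works for any pair of lexicons ranging over the same concepts."""
    concept = source[word]
    inverse = {c: w for w, c in target.items()}
    return inverse[concept]

print(translate("water", ENGLISH, GERMAN))  # Wasser
print(translate("Feuer", GERMAN, ENGLISH))  # fire
```

Because both lexicons range over the same concept domain, translation in either direction always succeeds; a word whose concept falls outside the shared domain would be untranslatable, which is exactly the hypothesis in miniature.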

We noticed that there were large differences in the ways translators and interpreters chose their words and used alternative phrases and idioms. Their choices (which they often kept on closely guarded index cards) appeared to be based on subtle though perceptible differences in how the reference and sensibility of a text or message were translated along with its words.

Looking to the linguistic literature, we found no procedural or computational theory of such ethnomethodological practices among translators, nor in the classical traditions of interpretation. Language translation on computers is largely word-for-word or phrase-for-phrase replacement. It tries to make the output sensible, but it is emphatically not about making matters perceptible; that is left to the people checking the machine translation. This sense of making matters perceptible also characterizes the difference between what interpreters and translators do. Our objective was the development of intelligent technology, and we wanted our system to be like interpreters: making matters more perceptible as well as sensible.

We decided to begin our own search for a theory of meaning that could be a foundation for translating not only the language people use but what they actually mean to say. The capacity to speak and converse in a dozen languages helped with the ethnographic and “knowledge representation” problem, and being a computer scientist made Tom Adi uniquely capable of performing this semantic study. I commissioned the completion of the study.

Thus, Adi did not begin his investigation into language from the point of view of the parts of speech or grammar. Having a sound idea of what language actually accomplishes (it helps us synthesize knowledge in our domain of interactions; it helps us make sense of natural processes, objects and events), he sought to determine exactly how this is accomplished. We began looking for a language in which to ground the study, because how this synthesis is accomplished must be determined, with all possible precision, empirically — that is, by way of observation and experiment.

Neither of us had any preference for which language became the basis of the study; yet, based on discussions with linguists and colleagues, we held out the idea of “a perfect text” as an exemplar to begin with: a perfect text is one in which the language, and the meaning it conveys, is perfectly clear and completely unambiguous. Adi expected to find natural laws at work in such a text. He initially focused his efforts on exactly how language might be synthesized according to natural laws, in light of Einstein’s relativity and the modern standard model of physics.

Adi studied textbooks analyzing the nature of the hydrogen atom, speculations about the smooth surface of water, and texts on chemical bonding, reading the summaries and overviews written by Russian physicists and German chemists. He reasoned that language must behave in ways similar to atoms: in the way they bond, and form new or changed bonds, in chemical reactions. He felt that the smoothness at the surface of water and the continuity of language must be related, and he sought processes in language somehow similar to the operations and laws of particle physics and chemistry. This means language is to be seen as a natural rather than a social phenomenon. The processes of atomic and chemical bonding drive biophysical processes, and the operation or behavior of all of it is well defined according to natural laws.

Adi began looking for natural laws and processes that somehow regulate the ways people use language to interpret something and unify it in their own awareness and intuition, and he found them. His observations and experiments were later published (in Semiotics and Intelligent Systems Development, 2007) as a set of proofs of a semantic theory of ancient Arabic. By the time of that publication, we had already rendered the theory into an axiomatic model of language, and we had further developed in situ methods and computer algorithms for synthesizing ethnographic “knowledge-types” (Plato’s eide) from texts and messages written in English, French or German. (See this peer-reviewed paper for more information.)

The major difference between the work of cognitive scientists and linguists and that of Adi is the frame of inquiry. Adi asks: what task does language have, what problem must it solve, in order to accomplish what it does? The formulation and analysis of this task is the starting point and primary focus of Adi’s investigation.

Language is often cast in terms of modern communications science, yet the problem language solves is a memory problem, not a communications problem. This is regrettable on many levels, though there is no need to dwell on that here. What we found is that the cognates of families of human languages organize, encode and range over a kind of permanent memory space, accommodating the definite domain of human knowledge while being constrained by the more indeterminate domain of human interactions (ethnographic activities).

The advent of the phonetic alphabet gave the world its most efficacious form of interpersonal memory: a solution space. Phonetic alphabets represent a synthesis of elementary processes and conditions (i.e. laws, semantics, poiesis: making something determinate) ranging, via Adi’s micro-syntax, over the domain of human interactions. The phonetic alphabet was the world’s first recording technology: a device for more permanently recording the dimensions of mutual understanding, or human consciousness, by encoding them within the long-term memory, or name-space, of human language.

In summary, human language appears to be a recording system. It provides the means and methods to encode (and to access) knowledge from the domain of human interactions for all generations. It was because of this realization and formulation that Adi’s semantic study was successful, and we immediately derived useful operational knowledge that we could, and did, turn into state-of-the-art technology.
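On this reading, a language behaves like a shared, persistent name-space: knowledge is encoded under names, written down, and decoded later by anyone who shares the recording conventions. A deliberately simple sketch of that reading — my own illustration (class and method names invented; JSON standing in for the alphabet), not the authors’ technology:

```python
import json

class NameSpace:
    """A toy 'interpersonal memory': knowledge is encoded under names,
    recorded as text, and decoded later by anyone who shares the
    recording conventions (here, JSON)."""
    def __init__(self):
        self.records = {}

    def encode(self, name, knowledge):
        self.records[name] = knowledge

    def write_down(self):
        return json.dumps(self.records)   # the durable 'recording'

    @classmethod
    def read_back(cls, recording):
        ns = cls()
        ns.records = json.loads(recording)
        return ns

    def decode(self, name):
        return self.records[name]

elders = NameSpace()
elders.encode("fire", "rub dry wood until it smokes")
text = elders.write_down()                # the record outlives its authors
print(NameSpace.read_back(text).decode("fire"))
```

The essay’s claim is the contract this sketch exhibits: encode once, access across generations.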

It is true that the industry is hooked on analytics. But don’t you know that analysis and synthesis go hand in hand? Where are the developers, the entrepreneurs, the organizers? Which of these are you? Don’t you believe people need ways to synthesize perceptible knowledge salient to the matters at hand?
