Posts Tagged ‘semantic theory’

What is “meaning” in questions such as: what is the meaning of life? It is the same as asking: what is the truly real significance of life? Any answer is only theoretical. Intuitively, any answer must be universal, for the truly real significance must, by definition, be significant for everyone.

That makes the notion appear either exaggerated or rather improbable. The universality of such a theory of meaning would rest on the multitude of “real” things that the theory perceives as salient, pertinent properties and relations in “real life” and to humanity in general. It would have to include everything we can imagine in experience. How could that be possible?

It would also have to correspond with every “real” experience, in just enough (and no more) dimensions to make such experience “really” meaningful. Intuitively, it must capture or cover any continuous or discrete distributions or extensions of “real” natural structure, elements or processes, in three dimensions of space and one dimension of real presence or immediate existence x.

It is very complex but not impossible. On the one hand, one cannot help but wonder how to deal with such complexity. On the other hand, we notice that very young children do it. Four-year-old children seemingly adapt to complexity with very little problem. It is the sophistication and obfuscation that come later in life with which they have problems. At four, children are already able to tell the difference between sensible and nonsensical distributions and extensions of reality, irrespective of whether these are of the continuous or discrete variety.

These continuous or discrete distributions and extensions bear some additional explanation, mainly due to their overarching significance in this context. First, they establish a direct correspondence with our most immediate reality: every time we open our eyes, we see a real distribution of colored shapes. Such a real distribution is nature’s way of communicating its messages to consciousness, via real patterns.

Second, perceived distribution patterns directly suggest the most fundamental ontological concept in theoretical physics: a field configuration, which in the simplest example of a scalar field can be likened to a field of variable light intensity. That life is intense, and that meaning is intense, is not something one ought to have to prove to anyone. I will come back to intensity in another post, as I want to continue commenting on presence or real and immediate existence x. We must, in practice and in effect, solve for the real meaning of x, as you will see.

Meaning so defined is literally the significance of truth or, more appropriately, what one interprets as significant or true within the dimensions of intense messages or information pertaining to real life as specified above. So we must begin, undoubtedly, by defining what true is; then, proceeding to the next step, we ought to define the elements and structure of one’s interpretation of this truly significant nature of life. I did it a little backwards in this respect, and this has always created a bit of confusion that I did not see until recently.

One begins any such analysis by examining a subject’s real elements and structures. For the subject of truth, one also searches the literature, where it is well represented. Such a search conducted on the subject of truth brings up a broad range of ideas. To try to make a taxonomy of the varied opinion found there would turn out to be an exercise in incoherence, but it ought to be acceptable to reference some theories and practices that have been adopted.

Ibn al-Haytham, who is credited with introducing the scientific method in the 10th century A.D., believed: “Finding the truth is difficult and the road to it is rough. For the truths are plunged in obscurity” (Pines, 1986, “Ibn al-Haytham’s critique of Ptolemy,” in Studies in Arabic Versions of Greek Texts and in Medieval Science, Vol. II, Leiden, The Netherlands: Brill, p. 436). While truths are obscured and obfuscated, there can be no doubt that the truth exists and is there to be found by seekers. I do not accept views or opinions that the average layman is too stupid or otherwise not equipped to figure it out on their own.

The Modern Correspondence Theory of Truth.

While looking for the truth, it helps to know what shape it takes or what it may look like when one happens upon it or finds it lying around, exposed to the light. According to some, truth looks like correspondence between one thing or element and another. Scientists have long held a correspondence theory of truth. This theory of truth is at its core an ontological thesis.

It means that a belief (a sort of wispy, ephemeral, mostly psychological notion) is called true if, and only if, there exists an appropriate entity, a fact, to which it corresponds. If there is no such entity, the belief is false. So you see, as we fixate on the “truth of a belief” (a psychological notion, a thought of something, to be sure, but a concrete thing nonetheless), we see that one thing, a belief, corresponds to another thing, an entity called a fact. The point here is that both facts and beliefs are existing, real entities. Even though they may also be considered psychological or mental notions, beliefs and ideas are reality.

While beliefs are wholly psychological notions, facts are taken to be much stronger entities. Facts, as far as neoclassical correspondence theory is concerned, are concrete entities in their own right, composed at least of particulars and properties, relations or universals. But universality has turned out to be elusive, and the notion is problematic for those who hold personal or human beliefs to be at the bottom of truth.

Modern theories speak of “propositions,” which may not be any more real, after all. As Russell later put it, propositions seem to be at best “curious shadowy things” in addition to facts (Russell, Bertrand, 1956, “The Philosophy of Logical Atomism,” in Logic and Knowledge, R. C. Marsh, ed., London: George Allen and Unwin, pp. 177-281, at p. 223; originally published in The Monist in 1918). If only he were around now; one can only wonder how he might feel or rephrase.

In my view, the key features of the “realism” of correspondence theory are:

  1. The world presents itself as “objective fact” or as “a collection of objective facts” independently of the (subjective) ways we think about the world or describe or propose the world to be.
  2. Our (subjective) thoughts are about the objective facts of that world as represented by our claims (facts), which, presumably, ought to be objective.

(Wright (1992), quoted at the SEP, offers a nice statement of this way of thinking about realism.) This sort of realism, together with representationalism, is rampant in the high-tech industry. Nonetheless, these theses are seen to imply that our claims (facts) are objectively true or false, depending on the state of affairs actually expressed or unfolding in the world.

Regardless of one’s perspective, metaphysics or ideals, the world that we represent in our thoughts or language is a socially objective world. (This form of realism may be restricted to some social or human subject matter, or range of discourse, but for simplicity we will talk only about its global form as related to the realism above.)

The coherence theory of truth is not much different from the correspondence theory in respect to this context. Put simply, in the coherence theory of truth, a belief is true when we are able to incorporate it in an orderly and logical manner into a larger, and presumably more complex, web or system of beliefs.

In the spirit of American pragmatism, almost every political administration since Reagan has used the coherence theory of truth to guide national strategy, foreign policy and international affairs. The selling of the War in Iraq to the American people is a study in the application of the coherence theory of truth to America’s state of affairs as a hegemonic leader in the world.

Many of the philosophers who argue in defense of the coherence theory of truth have understood “Ultimate Truth” as the whole of reality. To Spinoza, ultimate truth is the ultimate reality of a rationally ordered system that is God. To Hegel, truth is a rationally integrated system in which everything is contained. To the American Bush dynasty, and in particular to W., truth is what the leaders of their new world order say it is. To Adi, containment is only one of the elementary processes at work creating, enacting (causing) and (re)enacting reality.

Modern scientists break the first rule of their own skepticism by being absolutely certain of information theory.

Let me be more specific. Modern researchers have settled on a logical definition of truth as a semantic correspondence by adopting Shannon’s communications theory as “information” theory. Object-oriented computer programmers who use logic and mathematics understand truth as a Boolean table, and correspondence as per Alfred Tarski’s semantic theory of truth.
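As a minimal illustration of that Boolean-table notion of truth (my own sketch; the variable names are invented for the example and are not drawn from Tarski), a truth table can be enumerated mechanically:

```python
from itertools import product

# A Boolean truth table: "truth" reduced to a mechanical enumeration
# of truth values, here for material implication (p implies q).
for p, q in product([True, False], repeat=2):
    implies = (not p) or q
    print(p, q, implies)
```

The table is exhausted in four rows; nothing about the world enters into it, which is precisely the point being made above.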

Modern computer engineers have adopted Shannon’s probabilities as “information theory” even though, on the face of it, the probabilities that form such an important part of Shannon’s theory are very different from messages, which stand for the kinds of things we most normally associate with objects. However, to his credit, the probabilities on which Shannon based his theory were all based on objective counting of relative frequencies of definite outcomes.
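That counting-based notion of probability can be sketched in a few lines. This is the standard entropy calculation, not Shannon’s original derivation, and the function name is my own:

```python
import math
from collections import Counter

def entropy_bits(message: str) -> float:
    """Shannon entropy in bits per symbol, computed from objectively
    counted relative frequencies of the symbols in the message."""
    counts = Counter(message)
    total = len(message)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# A fair two-symbol source carries one full bit of uncertainty per symbol.
print(entropy_bits("0101010101"))  # → 1.0
```

Note that the probabilities here are nothing but proportions obtained by counting, exactly as described above; the symbols themselves carry no “truth” at all.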

Shannon’s predecessor, Samuel Morse, based his communication theory, which enhanced the speed and efficiency with which messages could be transmitted, on studying frequently used letters. It is the communications theory I learned while serving in the United States Army. It was established by counting things — objects in the world — the numbers of each letter type in the printer’s box.

When I entered the computer industry in 1978, I was somewhat astonished that Shannon’s theory of communications was already established in the field of information science — before word processors and “word” processing were common. I confirmed that belief by joining information scientists for a while, as a member of the American Society of Information Science (ASIS).

While at ASIS, I found out that Shannon’s probabilities also have an origin in things, much like Morse code, although they in no way ought to be considered symbols that stand for things. Instead, Shannon’s probabilities stand for proportions of things in a given environment.

This is just as true of observationally determined quantum probabilities (from which Shannon borrowed on the advice of the polymath John Von Neumann) as it is for the frequencies of words in typical English, the numbers of different trees in a forest, or the countable blades of grass on my southern lawn.

Neither Morse code, nor Shannon’s communications theory, nor any “information” theory directly addresses the “truth” of things in or out of an environment — save Adi’s. The closest any computer theory or program gets to “interpretation” is by interpreting the logical correspondence of statements with respect to other statements — both with respect to an undefined or unknown “meaning”: the truth or significance or unfolding of the thing in the world. It takes two uncertainties to make up one certainty according to Shannon and Von Neumann, who had two bits of uncertainty, 1 and 0, searching for, or construing, a unity.

That is not us. That is not our scientific program. Our program was not to construe a unity, or “it” from “bit.”  That is the program of the industry, because, almost like clocks, everyone in industry marches in lock step by step, tick by tock, take-stock.

Adi began with the assumption that there is an overarching unity to “it.” He then studied how a distribution of signs of “it” (i.e., symbols that make up human languages describing “it”) manages to remain true to the unity of “it,” despite constant change. Such change, it can be argued, arrives in the guise or form of uneven or unequal adoption, selection, and retention factors, as seen in the overwhelming evidence of a continuous “morphogenesis” in the formation, change and meaning of words, facts and other things, over eons.

To determine how people interpret the intensity and sensibility or “information” projected with language by means of speech acts (with messages composed of words), Adi investigated the sounds of symbols used to compose a real human language at a time when most people were inventing artificial, specialized, logical and less general languages. Adi chose to study the unambiguous sounds of Classical Arabic, which have remained unchanged for 1400 years to the present day. That sound affects what we see is in no way incidental trivia or minutiae.

At the least, it helps truth break free of being bound to mere correspondence, a relegation reminiscent of mime or mimicry. Adi’s findings liberate truth to soar to heights more amenable — such as high fidelity — than those that burn out in the heated brilliance of spectacular failure. In fact, in early implementations of our software we had an overt relevance measure called “fidelity” that users could set and adjust. It speaks to the core of equilibrium that permeates this approach to conceptual modelling, analysis, searching for relevance and significance, subject and topic classification, and practical forms of text analytics in general.

Tom Adi’s semantic theory interprets the intensity, gradient trajectory and causal sensibility of an idea presumably communicated as information in the speech acts of people. This “measure” of Adi’s (or, we may call it, “Adi’s Measure”) can be understood as a measure of the increase in the magnitude (intensity) of a property of psychological intension (e.g., a change in temperature, pressure or concentration) observable in passing from one point or moment to another. Thus, while invisible, it can be perceived as the rate of such a change.

In my view, it is in the action of amplitude, signifying a change from conceptual, cognitive or imaginative will or possibility to implementation or actualization in terminal reality. Computationally, it is, and can be used as, the vector formed by the operator ∇ acting on a scalar function at a given point in a scalar field. It has been implemented in an algorithm as an operating principle, resonating — acting/reacting (revolving, evolving) as a rule, i.e., being an operator: conditioning, i.e., coordinating/re-coordinating, a larger metric system or modelling mechanism (e.g., Readware; text analytics in general).
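As a concrete illustration of that computational reading (my own sketch, assuming a one-dimensional intensity field; this is not Readware’s actual implementation), the operator ∇ turns a scalar field into a field of rates of change:

```python
import numpy as np

# A scalar field on a 1-D grid: intensity as a function of position.
x = np.linspace(0.0, 1.0, 101)
intensity = x ** 2                  # scalar function f(x) = x^2

# ∇f at every grid point: the magnitude and direction of change.
grad = np.gradient(intensity, x)

# At x = 0.5 the analytic gradient df/dx = 2x gives 1.0, and
# second-order central differences reproduce it for a quadratic.
print(grad[50])
```

The gradient itself is invisible in the field values at any single point; it only appears in passing from one point to another, which matches the description of the measure above.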

I mention this to contrast Adi’s work with that of Shannon, who, in order to frame information according to his theory of communications, did a thorough statistical analysis of ONLY the English language. After that analysis, Shannon defined information as entropy, or uncertainty, on the advice of Von Neumann. The communication of information (an outcome) involves things, which Shannon called messages, and probabilities for those things. Both elements were represented abstractly by Shannon: the things as symbols (binary numbers) and the probabilities simply as decimal numbers.

So you see, Shannon’s information represents abstract values based on a statistical study of English. Adi’s information, on the other hand, represents sensible and elementary natural processes that are selected, adopted and retained for particular use within conventional language — as a mediating agency — in an interpersonal or social act of communication. Adi’s information is based upon a diachronic study of the Arabic language and a confirming study of fourteen additional languages, including modern English, German, French, Japanese and Russian, all having undergone the indisputable effects of language change — both different from and independent of the evolution of language, or the non-evolution, as it were, of Classical Arabic.

Adi’s theory is a wholly different treatment of language, meaning and information than either Shannon or Morse attempted or carried out on their own merits. It is also a different treatment of language than information statistics gives, as it represents the generation of salient and indispensable rules in something said or projected using language. It is different from NLP, or Natural Language Processing, which depends heavily on the ideas of uncertainty and probability.

A “concept search,” in Adi’s calculation and my estimation, is not a search in the traditional sense of matching keys in a long tail of key information. A “concept search” seeks mathematical fidelity, resonance or equilibrium and symmetry (e.g., invariance under transformation) between a problem (a query for information) and possible solutions (i.e., “responses” to the request for information) in a stated frame or window (context) on a given information space (document stack, database). A search is conducted by moving the window (e.g., the periscope) over the entirety of the information space in a scanning or probing motion. While it ought to be obvious, we had to “prove” that this approach works, which we did in outstanding form, in NIST- and DARPA-reviewed performances.
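The scanning motion can be caricatured as follows (a toy of my own; the overlap score is merely a stand-in for the fidelity/symmetry measure, which is not reproduced here):

```python
from collections import Counter

def symmetry(query_terms, window_terms):
    """Toy stand-in for a fidelity score: size of the term overlap."""
    return sum((Counter(query_terms) & Counter(window_terms)).values())

def scan(query: str, space: str, window: int = 5):
    """Move a fixed-size window over the information space, scoring
    each position against the query; return the best position."""
    terms, q = space.split(), query.split()
    scores = [symmetry(q, terms[i:i + window])
              for i in range(max(1, len(terms) - window + 1))]
    best = max(range(len(scores)), key=scores.__getitem__)
    return best, scores[best]

space = "the truth of a belief corresponds to a fact in the world"
print(scan("belief fact", space))  # → (4, 2)
```

The window here plays the role of the periscope: the same probe is carried over the entire space, and the position of greatest symmetry with the query is reported.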

Adi’s theory is not entirely free of uncertainty, as it is, after all, only theoretical. But it brings a new functionality, a doctrinal functionality, to the pursuit of certainty by way of a corresponding reduction of doubt. That is really good news. In any case, this is a theory that deserves and warrants consideration as a modern information theory that stands in stark contrast to the accepted norm or status quo.

Read Full Post »

In part 1, I offered some context and my definition of ‘semantics’ as the system of relationships that are important or significant to people and which are symptomatic of human experience. In this part, I will flesh out what this means.

Though most people may be familiar with that famous quote, “I think, therefore I am,” from Rene Descartes’ philosophical classic Discourse on the Method, he changed his mind later in his life. Looking at his later works, we can find that he really was not looking for truth through the lens of reason or inference. He was looking for a certainty so clear on its face that it is self-evident.

“So, after considering everything very thoroughly, I must finally conclude that the proposition, I am, I exist, is necessarily true whenever it is put forward by me or conceived in my mind.” — from Meditations on First Philosophy (via the Wikipedia entry Cogito Ergo Sum).

This kind of certainty grounds us, anchors us, in reality. It is only a small step to go from I am, I exist to I am, I physically exist — the body being self-evident. It makes sense, and seems quite reasonable, that if a computational model is to formulate an outlook similar to the human perspective, it must reckon with the conditions imposed on the physical being. Being primarily a body positioned in space is the condition of being human of which we are most certain.

An ontology of the human condition begins by addressing the conditions of being human that are symptomatic of the physical state of existence of the human being. For this we can leave all philosophy, linguistics and psychology behind for the time being. I can describe my physical existence in simple first-person terms. The description is simple and characteristic of people I know.

I have weight and mass and I exist in a three-dimensional space. Most human beings have a body with a head, neck, torso and limbs. I have two arms with hands and two legs with feet; this is considered normal for a human. I have a front and a rear. I have a left side and a right side. I have a face on the front side of my head, an ear on each side and hair only on the back of my head. Usually, I am sitting or standing vertically with my feet on the ground beneath me and the sky overhead. All of this is self-evident.

Moreover, I have power and I can control myself. I can move my eyes to see at wide angles. I can move my limbs and my entire body through space, within limits. For example, I can only move in one direction at a time.

I have the power to exert a physical influence in the space surrounding my body. I can reach out in front or to the side within limits. I can grasp things, pull them to me or push them or throw them away.

Given that description of physically being human, let’s consider the semantic elements of human experience from this perspective. These are none other than the several elements of our human experience that are symptoms, or characteristic signs, of individual physical existence.

Remember, we are looking for universal semantic elements. These will not be a part of us, like our faculties or our eyes or opposing thumbs. They must be so indispensable that if we were without these elements we could not exist. These elements will be part of the natural state or condition of being human. It should be something that most certainly affects our ways or has the power to influence our perspective. In my view, there should be two sets of universal semantic elements.

The first of these semantic elements stems from our body-centered horizontal and vertical reference in three-dimensional space (four, if you count time as a dimension). This quality is not unique to human beings, though it is a universal property of the existence and experience of human beings. The significant fact here is that, by being a point or center of reference, one’s orientation in space influences one’s view and perspective. Orientation affects every person, i.e., it is symptomatic of anybody, and it is important to physical, psychological and interpersonal relationships.

The second of these semantic elements is might. Being out-of-control, or even experiencing a loss of control can be one of the most dreadful experiences for anyone. Control is very significant to people. People go after and forcefully exercise control. They get control or take control and hold on to control. Personal power is symptomatic of anybody.

Again, though it is not unique, it is a universal property of the experience of being human. Having the power to control our limbs, to stand upright, to move, is important to everyone. For most people, self-control is necessary for the purpose of meeting personal goals and controlling their relationships to situations as well as their relationships with others. Once again, all of this is self-evident.

So we can conclude that there are at least two sets of semantic elements that are symptomatic or characteristic signs of the human condition and all the things and events that make up the human experience: One set of elements pertains to power and control (over life and limb) and the other set pertains to body-centered orientation in a physical space as well as a psychological space. While these may be self-evident properties, they have never been formalized into a computational model by other scientists working in the field.

What is not self-evident are the ways thinking processes seem to be interrelated with movement of the body and its parts. Thinking processes seem to be related to the stages and processes involved in motion and movement. Not many scientists investigating the mind, language and human psychology would endorse this view. One need only peruse the Wikipedia entries for linguistics, cognitive science, human psychology and studies of mind to see what I mean. Not that it matters much; when engineers get together and talk about methods and procedures, best practices and the development of standards, they tend to stick to more concrete science.

Computational engineering is an engineering discipline, and so when we started, we were looking for concrete grounds that neither linguists nor philosophers could deliver. We were not looking for the foundation of interpersonal relations between people, though we were looking for abstract relations within human expressions in a natural language. We were looking for certainty and an anchor, even though we did not know what it looked like. This required the faith that we would know it when we found it, and it took years of dedication and hard work to turn it into a semantic search and information filtering system for computers.

When Tom Adi began his scientific investigation into natural languages he was determined to search for natural laws instead of relying on linguistics or philosophy or any social science. The original scientific study that led Tom to produce his semantic theory was recently published in the book Semiotics and Intelligent Systems Development. The natural laws Tom was looking for were needed to satisfy his original premise that the elements and relations of natural languages correspond with the elements and relations of other natural systems at all times.

As I demonstrated above, the notions of Orientation and Control can be abstracted directly from human nature. They abstractly name universal properties of a natural, physical and human system of relationships. They correspond to the idea of polarity in language and to the strata of power and control enumerated in Adi’s theory of semantics. So the result is that we have something very solid, a concrete basis, for enlisting computational assistance in the interpretation of perceptions and intelligence and interpersonal understanding.

I will get into specific examples in the next part, though it should not be difficult to recognize these semantic elements as the basis for the way we see and perceive the world. They also influence the actions we take and the interactions we cause to take place. For those who cannot wait, there is a formalization of this framework, which we call the “semantic matrix,” that we have been using since 1986. The semantic matrix formalizes these elements according to their phonetic correspondents, and the linked paper is a good introduction to some of the mathematical aspects of Tom Adi’s semantic theory.

Understanding humans and human events, society and culture, is about understanding their semantic elements and their relationships to the symphony of life and survival in an unforgiving and non-subjective world. The motivation for searching for certainty and universal elements of human relationships is the benefit afforded by finding harmony, simplicity and clarity in one’s understanding.

Universal properties are common tools in mathematical, i.e., computational, algorithms. By understanding their abstract properties, one obtains information about all constructions of a kind and can avoid repeating the same analysis for each individual instance. Readware technology applies this concrete foundation in its indexing and search algorithms.

When we began writing our first search engine, in 1985, we were breaking new ground in information science. For the first time, we would index text not by keywords but by abstract concepts corresponding to the way we naturally interpret the world. By fixing the boundaries of the information to the space enveloped by relations framed by orientation and control, and by mapping syntactic elements from text in corresponding ways, we predicted we could map queries onto text and classify documents with more relevance than other methods.
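The indexing idea can be caricatured in a few lines (an illustrative toy, not the actual Readware mapping; the concept classes and word lists below are invented for the example):

```python
# Hypothetical mapping of words onto abstract concept classes,
# standing in for the orientation/control elements described above.
CONCEPTS = {
    "up": "orientation", "down": "orientation", "front": "orientation",
    "grasp": "control", "push": "control", "pull": "control",
}

def concept_index(text: str) -> dict:
    """Index a text by concept classes rather than raw keywords."""
    index = {}
    for pos, word in enumerate(text.lower().split()):
        concept = CONCEPTS.get(word)
        if concept:
            index.setdefault(concept, []).append(pos)
    return index

print(concept_index("I grasp the rail and pull myself up"))
# → {'control': [1, 5], 'orientation': [7]}
```

A query is then matched against concept classes instead of literal keys, so texts sharing no surface vocabulary with the query can still be classified as relevant.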

I cannot say that I knew the exact correspondence then, as I feel I do now, though I could see how it would affect relevance because it better related to the human condition. We tested Readware’s capability for achieving relevance in independently judged recall and retrieval exercises. We also tested Readware in commercial endeavors, and we have worked on several commercial ventures with big and small companies.

One of these commercial ventures had us working on RSS news feeds while we hosted Feedster’s service for a short while. I wrote about this experience in an earlier post. Internet statistics show that Readware had a substantial impact on page views and reach for Feedster. Like all business ventures, this one was subject to forces that had little to do with technology, and technology alone was not able to save it.

In the third and final part, I will present additional references and depending on the disciplines of those following this article, I will add some additional and anecdotal information. As usual, I would like to hear what you think. While you are welcome to email me your comments, you can leave your comments for everyone to see by clicking on comments at the top of the page.

Read Full Post »