
Archive for the ‘abstraction’ Category

By using the term ecology, I mean the study of the interaction of people with their environment: the environment of human awareness and knowledge.

I think that most people feel that they are aware of their surroundings. Psychologists say that because you feel aware, you assume everyone else is more or less as aware as you. Unfortunately, it does not turn out that way. There are a host of ill effects because there seems to be very little in our awareness that any few people can agree upon. The lack of shared knowledge and shared intelligence has an effect on the different levels and kinds of awareness in societies of people everywhere. If everyone were (explicitly) conscious of one and the same thing, then we could say that everyone is conscious of such and such. But we cannot make such a claim in this day and age. A day and age of modern communications, computers and “open information,” mind you.

Nonetheless, human beings are modelers in this world or environment: we build or construct models of it that suit or satisfy us, either by explaining or by predicting the circumstances in which we find ourselves. I take it for granted that there are both good and bad models. I want to introduce you to a good model of the organism of intelligence (mentioned in my last post) that each of us uses, even though most of us are not very conscious of it. I expect that anyone can tell a good model from a bad one. A good model is one that stirs or moves your awareness. It affects you in such a way that you are disposed, obliged even, to pay closer attention; it obliges one to think more exactly about someone or something. It is one that warrants becoming more aware of it, conscious of it, learning it: ultimately using it for enlightenment and for gain.

A model M is equivalent to a knowledge K. M=K because we employ models in making predictions about certain attributes just as we employ our knowledge. The term “attribute” is used here as a noun in an ordinary way to signify a quality or feature regarded as a characteristic or inherent part of someone or something. Every environment has attributes that are characteristic of it.

For example, the ecosystem is an environment that has the attributes of air, water, earth and fire. The goal is to find just those attributes (and no more) that are enough to quantify the valuable or significant changes that make a substantial difference, that affect our surroundings in some way. That is, to generate or induce knowledge and awareness we must perform a transformation: we must transform (what is recognized to be) an attribute of the environment into a personal or individual effect. That may sound strange, so let me explain it a little further.

In the case of the ecosystem, the attributes air, water, earth and fire can affect us, and one might readily imagine how the presence or absence of water or air can induce different states of mind. In any case, they may be the cause of some serious condition that could affect any one of us; imagine the situation where there is no air to breathe. This quality makes air a good attribute of this environment (the ecosystem) because we can readily imagine and predict how we could be affected given some arbitrary change in the situation. But are these attributes sufficient, and all that is necessary, to predict all possible changes in the environment that might affect us?

Imagine now how difficult it must be for scientists, for anyone, to build a model of the environment of human knowledge, awareness and consciousness. In some circles of research, that is what AI and AGI engineers are trying, have been trying, to do. It is true the engineers and programmers have not been up to the daunting task of it. Yet that does not diminish the fact that it is what needs to be done in order to produce an AGI; after all, we need to be able to model our own situational awareness. By doing so, we may become better equipped to anticipate and reduce the effects of unwanted and harmful eventualities of which many people are all too aware.

For example, economists create models of economies with certain attributes and premises. For better or worse, this is done in order to deduce conclusions about possible eventualities. Economic models are useful as tools for judging which alternative outcomes seem reasonable or likely. In such cases the model is being used for prediction. Thus the model is part of some knowledge about the environment.

The model embodies the knowledge because it is itself a capacity for prediction. Thus, a model M can be considered to be fully equivalent to a knowledge. Therefore we can assume here that a model is synonymous with a knowledge. More specifically, it appears that a qualitatively relative definition of knowledge is warranted: “A Knowledge K is a capacity to predict the value which an attribute of the environment will take under specified conditions of context.”
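The relative definition just quoted can be put in concrete terms. Here is a minimal, hedged sketch (the function names and toy observations are my own illustrative assumptions, not part of any formal specification): a knowledge K is simply any capacity that, given an attribute and specified conditions of context, yields a predicted value; a model M built from the same observations is, by this definition, equivalent to K.

```python
def make_knowledge(observations):
    """Build a knowledge K from (attribute, context, value) observations."""
    table = {(attr, ctx): value for attr, ctx, value in observations}

    def K(attribute, context):
        # Predict the value the attribute takes under this context;
        # None signals that this knowledge cannot predict it.
        return table.get((attribute, context))

    return K

# Toy observations of an ecosystem (illustrative only):
K = make_knowledge([
    ("water", "ecosystem", "present"),
    ("air",   "ecosystem", "present"),
])

print(K("water", "ecosystem"))  # 'present'
print(K("fire", "ecosystem"))   # None: outside this knowledge
```

The point of the sketch is only that "model" and "knowledge" are interchangeable here: both are capacities for prediction, and nothing more is assumed about their internals.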

Now let’s talk about people (sapients) and frame a model of their environment, that is, the environment of their awareness; of which they are aware (sapient). We can assume that everyone’s awareness changes in regular and predictable ways and each person has some knowledge that allows them to predict the value of attributes in their own awareness. Here, as you see, an awareness is equivalent to the environment in which we abide. We are intuitively surrounded by or abiding in the environs of sapience.

Before I begin the example, let me reveal that I have a knowledge of the attributes of a denotative awareness that includes and subsumes all possible connotative environments. I will say there are eleven attributes to this environment of awareness, but I will introduce only two of them, which we call “Self” and “Others,” in this example. Like all the attributes of this rather explicit awareness, these two attributes, Self and Others, correspond with the real entities and their activities, self and others, in the world of ordinary affairs and situations. I am only using these two in order to keep the explanation simple and real, and because that is all that is necessary to demonstrate the meaning of intelligence, which I will now define as: the organism or mechanism of the attributes of the environment to affect awareness.

So, to be clear, I am not going to give the complete specification of that organism or mechanism here, but I will show you how two of the attributes of the environment I have clearly in mind “affect” both my predictions and yours. Incidentally, let me also define a “mind” as a (psychical) state space (e.g., an abstract and mental space). So we begin with an assertion: besides my own self, there are others in my environment; the environment in which I exist and of which I am aware.

I embody the organism we call intelligence (as do you) and I have a knowledge K to predict that the value of a single measurement of the attribute Others, equivalent to and connotative of “wife,” will be Gloria, just in case I am asked about it. This prediction is observed to be a transformation of the state space of the attribute Others, just like the state space of the attribute Self. Under the specified conditions and in the context of my own environment, the state space is transformed, by my own knowledge K, to be equivalent to my name=Ken. Under the same specified conditions of context, the connotative context “my wife” is connected to the denotative context (observable yet normally left tacit or unemphasized) by taking successive measurements (e.g., making interpretations) of these explicitly shared attributes of the environment of my awareness. I believe that, once taken in, that much ought to become clear and self-evident; that is, I take it as being axiomatic.

I can also predict that additional measurements of the attributes Self and Others will yield different values equivalent to the connotative appearance of several other self-organized entities, things or activities, that become salient to my own environment from time to time. In this way (and only in this way) my knowledge K is different from your knowledge T. It is peculiar to my thoughts and perceptions in the context of the environment situated where I live, i.e., to my awareness of that environment. You will have a similar situation – your own “context” (the particular circumstances that form the setting for an event, statement, or idea, and in terms of which it can be fully understood and assessed) of the environment of your own awareness. We do not know each other’s knowledge or awareness. We (may only implicitly) know and share the explicit attributes of such a (sapient) awareness.

That is to say that I live in the same environment (of general awareness and sapience) as you. And I have a knowledge K of Self and Others, as attributes of this environment, the relevance or significance of which you may only now be becoming aware. Both Self and Others are clearly attributes in our shared awareness. In fact, they are attributes of a universal environment for Homo sapiens. Remember that a knowledge T, K, …, or M is a capacity to predict the value which an attribute of the environment will take under specified conditions of context. Everyone has their own name, knowledge (whether implicit or explicit) and their own conditions of context. This is the private knowledge held inside them and perhaps also by relatives and friends.

Now we are able to make some observations and see some of the implications that flow from what has been stated above. We can intuit, for instance, that a wholesome knowledge K is evidenced whenever an organism produces information or reduces a priori uncertainty about its environment. I realize this is incomplete, though it demonstrates that (connotative and social) knowledge, text and all computer data is synthesized from (the transformation of) valid attributes A, which cannot be construed as being contained in or patterned by (computer) data nor by modern language.

Any invariant or regular and unitary attribute A (whereby individuals are distinguished) ought be seen as a continuity to be treated as valid – and used as a handy and trustworthy rubric for making or producing transformations (in the state space of a mind) applied in a context of the environment. Each measurement produces a single valuation, which could be the same or different at any moment and from place to place, only appearing to be impossibly chaotic or complex. For those that understand such things, such an attribute may be considered a correspondence. This correspondence may be formalized as a functional mapping of the form A: Ɵ → Ɵ, where Ɵ is the (denotative) state space of the environment mapped to the (connotative) state space of the environment. We found more than a dozen types or configurations of functional mappings that are applied in variant connotative contexts.
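The functional mapping A: Ɵ → Ɵ can be sketched in code. This is only an illustration under my own assumptions (the two toy state spaces and the particular correspondence are invented for the example, echoing the Self/Others discussion above), not one of the dozen-plus configurations referred to:

```python
# Denotative state space: observable entities in the world of ordinary affairs.
denotative = {"Ken", "Gloria"}
# Connotative state space: the roles those entities take in one speaker's context.
connotative = {"self", "wife"}

# The attribute as a correspondence: a functional mapping from the
# denotative reading of the environment to the connotative one.
A = {"Ken": "self", "Gloria": "wife"}

def measure(entity):
    """One measurement: transform a denotative value into its
    connotative value for this context."""
    return A[entity]

print(measure("Gloria"))  # 'wife'
print(measure("Ken"))     # 'self'
```

The design point is simply that a measurement is a transformation, not a lookup of a fixed property: in a different context the same mapping could yield different valuations.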

So, to conclude: an environment of human awareness can be understood simply as the denotative and connotative surroundings and conditions in which the organism of the attributes (and capacity of independent awareness with a knowledge) operates, is asserted and is applied.

The good news is that now that we know that it is the organism of the attributes of the environment of awareness, consciousness, that is both explicit and universal (not connotative belief,  knowledge or perception or conception –which are all relatively defined) we can get down to resolving differences while  accommodating everyone.  To be specific, we can seek better understanding and control over perceptual and conceptual states of awareness in a decidedly invariant environment (awareness) of continuous change, where intelligence is any organism or mechanism of the attributes of an environment that affects such awareness and consciousness.

______

We can also define semantics as the correspondence of both the denotative and connotative states of conception to the set of all possible functions given the attributes of the environment.  Now, if you want to know more details you will need to put me on a retainer and pay me.


The Wikipedia entry defines Qualia thus:

Qualia (/ˈkwɑːliə/ or /ˈkweɪliə/; singular form: quale, Latin pronunciation: [ˈkwaːle]) is a term used in philosophy to refer to individual instances of subjective conscious experience. The term derives from a Latin word meaning “what sort” or “what kind.” Examples of qualia are the pain of a headache, the taste of wine, the experience of taking a recreational drug, or the perceived redness of an evening sky.
One might argue on this evidence that the definition applies only to some subjective qualities of a macro and  external experience while the most subjective experience of the organism must be, can only be, that internally generated experience of the individual self.  The quale of inner-experience cannot be a “macro” quantity, symbol or component such as the amount of pain or even the word, or the uncountable shades of red.  I can personally attest that one may know pain without also knowing how to interpersonally express or symbolize it.
 
I am not alone in believing that “qualia,” if it be an identifiable sort or kind of particular — salient to the awareness and consciousness, must instead be a micro, molecular or morphogenetic quantity representable in an associative network of firmly grounded states, (grounded in physical laws and causality).
 
I am aware, for example, that my own inner experience is conditioned by the homeostasis of the structure and function of my central nervous system: not the brain alone, but the brain and its sensors along with the metabolism. The objects of my inner experience are felt and reflected upon because I am emotionally invested in being here and now and in being me (the present particular “I am”).
 
This emotional investment (from which one feels things) forms a feedback loop caused by the modal transformations of exogenous matters of the ecosystem and interpersonal realities into the conscious endogenous energy of self-realized experience.  It ought go without saying that I am also emotionally invested in the modern social world, (I have been raised with an American and interpersonal worldview) and I am socially, professionally and politically engaged in interactions with others.
 
A worldview is more than just a belief, opinion or perspective. A worldview is a framework of ideas and beliefs through which an individual, group or culture interprets their conditions of existence. I have more recently been developing the idea that the modern ethnographic worldview is not an invention or a construction; rather, it is an expression of poiesis: a creation or production of that which is named by the combining roots of organism.
 
The expression of which must be seen in light of both morphogenic and “ontogenic” properties in that there are a set of semantic rules that govern ontogenesis (i.e., growth of the morphogenic fields of language from the simple to far more complex forms of expression).  The macro field of “human reality” is seen as an expression of this biogenic field of organismic poiesis, rather than as a social, cultural, literary or political construct, or any other ethnographic construction.  
 
Poietic semantics operates (in intelligent people) by unifying and focusing intuitive cognitive processes (onto rudimentary elements and operations of poiesis and organismic function) and by regulating interprocess interactions and individual (endogenous semiotic) rulemaking. I can vouch, from more than thirty years of personal experience, for the idea that the uptake, adoption and retention of a poietic worldview affects associative thinking in intelligent people: it anchors them; it gives them an objective and transformative hand-hold in a sea of assumptions.
 
A poietic worldview engenders (in its learner) an exactness in the immediate conception of the elements and operations of poiesis; that is, it is a concretion of Daniel Kahneman’s System 1 type of thinking (it is not AI, nor analytic/reductionist). It synthesizes the components, elements and influences of associative thinking, making such thinking that much more concrete and reliable.
 
Here is a short video overview I prepared recently that can be shared and downloaded.  
 


What is “meaning” in questions such as: what is the meaning of life? It is the same as asking what is the truly real significance of life. Any answer is only theoretical.  Intuitively, any answer must be universal.  The truly real significance must, by definition, be significant for everyone.

That makes the notion appear to be either exaggerated or rather improbable.  The universality of such a theory of meaning would rest on the multitude of “real” things that are perceived by the theory as salient, pertinent properties and relations in “real life” and to humanity in general.  It would have to include everything we can imagine in experience.  How could it be possible?

This would also make it necessary to correspond with every “real” experience, in just enough (and no more) dimensions, necessary to make such experience “really” meaningful.  Intuitively, it must capture or cover any continuous or discrete distributions or extensions of “real” natural structure, elements or processes, in three dimensions of space and one dimension of real presence or immediate existence x.

It is very complex but not impossible. On the one hand, one cannot help but wonder how to deal with such complexity. On the other hand, we notice that very young children do it. Four-year-old children seemingly adapt to complexity with very little problem. It is the sophistication and obfuscation that come later in life with which they have problems. At four, children are already able to tell the difference between sensible and nonsensical distributions and extensions of reality, irrespective of whether they are of the continuous or discrete variety.

These continuous or discrete distributions and extensions bear some additional explanation, mainly due to their overarching significance for this context. First, they establish a direct correspondence with our most immediate reality. Every time we open our eyes, we see a real distribution of colored shapes. Such a real distribution is nature’s way of communicating its messages to consciousness, via real patterns.

Second, perceived distribution patterns directly suggest the most fundamental ontological concept in theoretical physics: a field configuration, which in the simplest example of a scalar field can be likened to a field of variable light intensity.  That life is intense and that meaning is intense is not something one ought to have to prove to anyone. I will come back to intensity in another post, as I want to continue commenting on presence or real and immediate existence x. We must, in practice and in effect, solve for the real meaning of x as you see.

Meaning in this case, so defined, is literally the significance of truth, or more appropriately, what one interprets as significant or true within the dimensions of intense messages or information pertaining to real life as specified above. So we must begin, undoubtedly, by defining what true is; then, proceeding to the next step, we ought define the elements and structure of one’s interpretation of this truly significant nature of life. I did it a little backwards in this respect, and this has always created a bit of confusion that I did not see until recently.

One begins any such analysis by examining a subject’s real elements and structures. For the subject of truth, one also searches the literature, where it is well represented. Such a search conducted on the subject of truth brings a broad range of ideas. To try to make a taxonomy of ideas from the varied opinion found there would turn out to be an exercise in incoherence. But it ought be acceptable to reference some theories and practices that have been adopted.

Ibn al-Haytham, who is credited with the introduction of the scientific method in the 10th century A.D., believed, “Finding the truth is difficult and the road to it is rough. For the truths are plunged in obscurity” (Pines, 1986, “Ibn al-Haytham’s critique of Ptolemy,” in Studies in Arabic Versions of Greek Texts and in Medieval Science, Vol. II, Leiden, The Netherlands: Brill, p. 436). While truths are obscured and obfuscated, there can be no doubt that the truth does exist and is there to be found by seekers. I do not accept views or opinions that average laymen are too stupid or otherwise not equipped to figure it out by themselves.

The Modern Correspondence Theory of Truth.

While looking for the truth, it helps to know what shape it takes or what it may look like when one happens upon it or finds it lying around and exposed to the light. According to some, truth looks like correspondence between one thing or element and another. Scientists have long held a correspondence theory of truth. This theory of truth is at its core an ontological thesis.

It means that a belief (a sort of wispy, ephemeral, mostly psychological notion) is called true if, and only if, there exists an appropriate entity—a fact—to which it corresponds. If there is no such entity, the belief is false. So you see, as we fixate on the “truth of a belief” –a psychological notion such as a thought of something —to be sure —but some concrete thing, nonetheless, we see that one thing —a belief— corresponds to another thing —another entity called a fact. The point here, is that both facts and beliefs are existing, real entities — even though they may also be considered to be psychological or mental notions — beliefs, ideas –they– are reality.

While beliefs are wholly or entirely psychological notions, facts are taken to be much stronger entities. Facts, as far as neoclassical correspondence theory is concerned, are concrete entities in their own right. Facts are taken to be composed of particulars and properties and relations or universals, at least. But universality has turned out to be elusive and the notion is problematic for those who hold personal or human beliefs to be at the bottom of truth.

Modern theories speak to “propositions,” which may not be any more real, after all. As Russell later says, propositions seem to be at best “curious shadowy things” in addition to facts (Russell, Bertrand, 1956, “The Philosophy of Logical Atomism,” in Logic and Knowledge, R. C. Marsh, ed., London: George Allen and Unwin, pp. 177–281, at p. 223; originally published in The Monist in 1918). If only he were around now; one can only wonder how he might feel or rephrase.

In my view, the key features of the “realism” of correspondence theory are:

  1. The world presents itself as “objective fact” or as “a collection of objective facts” independently of the (subjective) ways we think about the world or describe or propose the world to be.
  2. Our (subjective) thoughts are about the objective fact of that world as represented by our claims (facts) which, presumably, ought be objective.

(Wright (1992) quoted at the SEP offers a nice statement of this way of thinking about realism.) This sort of realism together with representationalism is rampant in the high tech industry.  Nonetheless, these theses are seen to imply that our claims (facts) are objectively true or false, depending on the state of affairs actually expressing or unfolding in the world.

Whatever one’s perspective, metaphysics or ideals, the world that we represent in our thoughts or language is a socially objective world. (This form of realism may be restricted to some social or human subject-matter, or range of discourse, but for simplicity, we will talk only about its global form as related to realism above.)

The coherence theory of truth is not much different from the correspondence theory in respect to this context. Put simply, in the coherence theory of truth, a belief is true when we are able to incorporate it in an orderly and logical manner into a larger and presumably more complex web or system (sic) of beliefs.

In the spirit of American pragmatism, almost every political administration since Reagan has used the coherence theory of truth to guide national strategy, foreign policy and international affairs. The selling of the War in Iraq to the American people is a study in the application of the coherence theory of truth to America’s state of affairs as a hegemonic leader in the world.

For many of the philosophers who argue in defense of the coherence theory of truth, they have understood “Ultimate Truth” as the whole of reality. To Spinoza, ultimate truth is the ultimate reality of a rationally ordered system that is God. To Hegel, truth is a rationally integrated system in which everything is contained. To the American Bush dynasty, in particular, to W.: truth is what the leaders of their new world order say that it is.  To Adi, containment is only one of the elementary processes at work creating, enacting (causing) and (re)enacting reality.

Modern scientists break the first rule of their own skepticism by being absolutely certain of information theory.

Let me be more specific. Modern researchers have settled on a logical definition of truth as a semantic correspondence by adopting Shannon’s communications theory as “information” theory. Those object-oriented computer programmers who use logic and mathematics understand truth as a Boolean table and correspondence as per Alfred Tarski’s theory of semantics.

Modern computer engineers have adopted Shannon’s probabilities as “information theory” even though, on the face of it, the probabilities that form such an important part of Shannon’s theory are very different from messages, which stand for the kinds of things we most normally associate with objects. However, to his credit, the probabilities on which Shannon based his theory were all based on objective counting of relative frequencies of definite outcomes.

Shannon’s predecessor, Samuel Morse, based his communication theory, which enhanced the speed and efficiency with which messages could be transmitted, on studying frequently used letters. It is the communications theory I learned while serving in the United States Army. It was established by counting things — objects in the world — the numbers of different letter-type in the printer’s box.
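Morse’s counting principle is easy to demonstrate: tally how often each letter occurs (as he tallied type in the printer’s box) and give the most frequent letters the shortest codes. The sample text below is my own illustration; in actual Morse code, ‘E’ and ‘T’, the most common English letters, receive the shortest codes, “.” and “-”.

```python
from collections import Counter

# An illustrative sample; Morse used printers' type counts instead.
text = "the quick brown fox jumps over the lazy dog the the the"
freq = Counter(c for c in text if c.isalpha())

# Letters ranked from most to least frequent; the shortest codes
# would go to the front of this ranking.
ranking = [letter for letter, _ in freq.most_common()]
print(ranking[:3])
```

The same counting of relative frequencies, normalized into probabilities, is what Shannon later built on.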

When I entered the computer industry in 1978, I was somewhat astonished that Shannon’s theory of communications was already established in the field of information science – before word processors and “word” processing were common. I confirmed that belief by joining with information scientists for a while, as a member of the American Society for Information Science (ASIS).

While at ASIS, I found out that Shannon’s probabilities also have an origin in things much like Morse code: although they in no way ought be considered to be symbols that stand for things. Instead, Shannon’s probabilities stand for proportions of things in a given environment.

This is just as true of observationally determined quantum probabilities (from which Shannon borrowed on the advice of the polymath John von Neumann) as it is for the frequencies of words in typical English, or the numbers of different trees in a forest, or the countable blades of grass on my southern lawn.

Neither Morse Code, nor Shannon’s Communications theory, nor any “information” theory, directly addresses the “truth” of things in or out of an environment –save Adi’s. The closest any computer theory or program gets to “interpretation” is by interpreting the logical correspondence of statements in respect to other statements — both with respect to an undefined or unknown “meaning” — the truth or significance or unfolding of the thing in the world. It takes two uncertainties to make up one certainty according to Shannon and Von Neumann– who had two bits of uncertainty, 1 and 0, searching for, or construing, a unity.

That is not us. That is not our scientific program. Our program was not to construe a unity, or “it” from “bit.”  That is the program of the industry, because, almost like clocks, everyone in industry marches in lock step by step, tick by tock, take-stock.

Adi began with the assumption that there is an overarching unity to “it.” He then studied how a distribution of signs of “it” (i.e., symbols that make up human languages describing “it”) manages to remain true to the unity of “it,” despite constant change. Such change, it can be argued, arrives in the guise or form of uneven or unequal adoption, selection, and retention factors, as seen in the overwhelming evidence of a continuous “morphogenesis” in the formation, change and meaning of words, facts and other things, over eons.

To determine how people interpret the intensity and sensibility or “information” projected with language by means of speech acts (with messages, composed of words), Adi investigated the sounds of symbols used to compose a real human language at a time when most people were inventing artificial, specialized, logical and less general languages. Adi chose to study the unambiguous sounds of Classical Arabic that have remained unchanged for 1400 years to the present day. That sound affects what we see is in no way incidental trivia or minutiae.

At the least, it helps truth break free of being bound to mere correspondence, a relegation reminiscent of mime or mimicry. Adi’s findings set truth free to soar to heights more amenable – such as high fidelity – than those that burn out in the heated brilliance of spectacular failure. In fact, in early implementations of our software we had an overt relevance measure called “fidelity” that users could set and adjust. It speaks to the core of equilibrium that permeates this approach to conceptual modelling, analysis, searching for relevance and significance, subject and topic classification and practical forms of text analytics in general.

Tom Adi’s semantic theory interprets the intensity, gradient trajectory and causal sensibility of an idea presumably communicated as information in the speech acts of people. This “measure” of Adi’s (or we may call it “Adi’s Measure”) can be understood as a measure of the increase in the magnitude (intensity) of a property of psychological intension (e.g., a temperature or pressure change, or a change in concentration) observable in passing from one point or moment to another. Thus, while invisible, it can be perceived as the rate of such a change.

In my view, it is in the action of amplitude, signifying a change from conceptual, cognitive or imaginative will or possibility to implementation or actualization in terminal reality. Computationally, it is, and can be used as, a vector formed by the operator ∇ acting on a scalar function at a given point in a scalar field. It has been implemented in an algorithm as an operating principle, resonating – acting/reacting (revolving, evolving) as a rule, i.e., being an operator: conditioning, i.e., coordinating/re-coordinating, a larger metric system or modelling mechanism (e.g., Readware; text analytics in general).
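The ∇ operator itself is standard mathematics and can be sketched numerically. This is only a generic illustration of a gradient of a scalar field, not Readware’s algorithm (the sample field f is my own assumption): ∇f at a point is the vector of partial derivatives, the direction and rate of steepest change.

```python
def grad(f, point, h=1e-6):
    """Central-difference estimate of the gradient of f at a point."""
    g = []
    for i in range(len(point)):
        up = list(point); up[i] += h   # step forward in dimension i
        dn = list(point); dn[i] -= h   # step backward in dimension i
        g.append((f(up) - f(dn)) / (2 * h))
    return g

# A sample scalar field, chosen for illustration only:
f = lambda p: p[0] ** 2 + 3 * p[1]

print(grad(f, [2.0, 1.0]))  # close to [4.0, 3.0]
```

Any scalar function over the field can be substituted for f; the operator stays the same, which is what makes it usable as a general rule inside a larger mechanism.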

I mention this to contrast Adi’s work with that of Shannon, who, in order to frame information according to his theory of communications, did a thorough statistical analysis of ONLY the English language. After that analysis, Shannon defined information as entropy or uncertainty on the advice of von Neumann. The communication of information (an outcome) involves things which Shannon called messages, and probabilities for those things. Both elements were represented abstractly by Shannon: the things as symbols (binary numbers) and probability simply as a decimal number.

So you see, Shannon’s information represents abstract values based on a statistical study of English. Adi’s information, on the other hand, represents sensible and elementary natural processes that are selected, adopted and retained for particular use within conventional language – as a mediating agency – in an interpersonal or social act of communications. Adi’s information is based upon a diachronic study of the Arabic language and confirming studies in fourteen additional languages, including modern English, German, French, Japanese and Russian, all having suffered the indisputable and undeniable effects of language change – both different from and independent of the evolution of language, or the non-evolution, as it were, of Classical Arabic.

Adi’s theory is a wholly different treatment of language, meaning and information than either Shannon or Morse attempted or carried out on their own merits. It is also a different treatment of language than information statistics gives, as it represents the generation of salient and indispensable rules in something said or projected using language. It differs from Natural Language Processing (NLP), which depends heavily on the ideas of uncertainty and probability.

A “concept search,” in Adi’s calculation and my estimation, is not a search in the traditional sense of matching keys in a long tail of key information. A “concept search” seeks mathematical fidelity, resonance or equilibrium, and symmetry (e.g., invariance under transformation) between a problem (a query for information) and possible solutions (i.e., “responses” to the request for information) in a stated frame or window (context) on a given information space (document stack, database). A search is conducted by moving the window (think of a periscope) over the entirety of the information space in a scanning or probing motion. While it ought to be obvious, we had to “prove” that this approach works, which we did in outstanding form, in NIST- and DARPA-reviewed performances.
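To make the scanning-window idea concrete, here is a rough sketch. The scoring function is a plain cosine similarity over word counts, a stand-in assumption for illustration only; Readware's actual measure is Adi's, not this.

```python
# Sketch of a windowed scan over an "information space" (a text), scoring
# each window position against a query. Cosine-over-word-counts is a
# placeholder metric, NOT Readware's concept-matching measure.
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    common = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in common)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def scan(query, text, window=8, step=4):
    """Move a fixed window over the text, keeping the best-scoring span."""
    q = Counter(query.lower().split())
    words = text.lower().split()
    best_score, best_span = 0.0, ""
    for start in range(0, max(1, len(words) - window + 1), step):
        span = words[start:start + window]
        score = cosine(q, Counter(span))
        if score > best_score:
            best_score, best_span = score, " ".join(span)
    return best_score, best_span

score, span = scan("forces of nature",
                   "the creative forces of nature shape meaning " * 3)
print(round(score, 2), span)
```

The structural point survives the toy metric: the window (context) is moved over the whole space, each position is scored for fit against the query, and the best equilibrium between problem and response is returned.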

Adi’s theory is not entirely free of uncertainty; it is, after all, only theoretical. But it brings a new functionality, a doctrinal functionality, to the pursuit of certainty by way of a corresponding reduction of doubt. That is really good news. In any case, this is a theory that deserves and warrants consideration as a modern information theory, one that stands in stark contrast to the accepted norm or status quo.

Read Full Post »

This post follows on my last introduction to an objective point of view and it continues exposing Adi’s semantics and the objects of the metalanguage he developed to help explain the relation between language, thought and basic or fundamental existence.

In this post I will characterize, once again, the idea of conception. Instead of using a psychological or psychoanalytic language as I have in the past, I will return to the physical theme that guided the early research, having found support for these ideas in Bohm’s book On Creativity (mentioned previously), to introduce the semiotics of creativity. In this context, semiotics is seen as a system for the interpretation of symbols, and creativity is simply the ability or power to create and to conceive (e.g., to form or devise a concept).

In what follows, I will show how the symbols of language are steeped in the creative forces of Nature so that we may extract the flavor and meaning of life.

As I have reported elsewhere in this blog, computer scientists and linguists are fond of propositional theories that turn beliefs into statements and assertions that can be aggregated into data. So it has been difficult to show computer scientists, logicians and programmers that there are other ways to process meaning. What is called ‘semantics’ in the computer industry is the epistemological truth or correspondence between such stated beliefs or assertions. This is all good, even rational, yet somehow ‘artificial’. This has been demonstrated in the past and more recently with game-playing computers.

These ‘epistemological methods’ do not account for the ‘natural causes’ of human perception or the production of belief. This may be hard to grasp fully, yet one intuitively knows that one’s ability to act or judge (judging also being seen as an action) is subject to physical forces and conditions, arising from within and without, and to the passage of time. Dr. Tom Adi discovered the essential nature of these physical powers and creative forces while looking for semantics in samples of a historically consistent language.

The semantic logic of the poietic (generative) side of language use derives from physical processes. Upon enduring (more often, after appreciating) the forces and powers behind prominent events (take one that evokes a familiar, if not pleasing, sensation X), a speaker S may find they can fashion physical gestures, symbols and actual procedures (moving toward or away, forward or backward, etc.) into mental tools. Such tools are used for projecting the idea (the configuration or arrangement of objects and procedures) that causes X, and the sophistication and use of such tools increases over time. Children often learn repetitively, by simulating or causing a physical procedure (influencing X) to recur.

Such a ‘physical procedure’ (explained more fully below) may be carried out in the imagination or for real. There is nothing mysterious about sensation X. It is defined, according to practice, as a palpable feeling or perception resulting from something that happens to or comes into contact with the body. It is in the physical nature of all living organisms to have a proprioceptive sense, one that relates to the stimuli connected with the position and movement of the body. These stimuli, produced within the organism, are sensations that cause further reaction or response. Most people have witnessed a flower turn its petals to the sun.

The sensation that moves the flower is produced from within, from a sense of the extent, direction and force of impinging stimuli; for the flower, the ‘meaning’ is in its orientation with respect to the natural forces moving it to take the ‘right’ position. Moral and other distinctions holding mind and body apart are unnecessary to one’s proprioceptive sense of the position, location and relevant extent of objects and forces in one’s immediate presence.

While all living beings have a proprioceptive sense of being at their discretion (to avoid running into things, face in the right direction, or simply satisfy their role, etc.), human beings also have limited dominion over the creative forces of nature to go along with their animal instincts. It is human nature to uncover or discover the physical nature that causes one’s experience. One can use or abuse these powers and act in many ways, though mainly one acts to change the future, and one may act as if the future is irrelevant. The liberty and power to judge plays a major role.

As everyone does or should know very well, we cannot pass physical nature from ourselves to others; we can only project our own sensations as ‘sense-data’, the idea that something (in the surrounding environment) affects us or causes X. We expect others can “feel” the same way or “see” or “sense” the “controlling presences” (often without quite knowing them ourselves).

The meaning in this sense-data is gathered up in the symbols we use to project the idea that causes sensation X. Others have to ‘get’ or apprehend the idea that produces sensation X. To ‘have meaning’ is to be capable of causing sensation X to arise. Any useful sign must indicate a physical procedure: the forces and conditions that characterize the extent (limits and relevance) of objects with respect to a perceptible position or location, and the relevant extent of the sensation X that a speaker S desires to be produced in a listener L.

Plainly, what is called the idea (here) is the position and power, of the particular configuration of being, forces and conditions, that produces sensation X and causes the anticipated reaction in an individual. The problem today is that the meaning of ideas, the bearing of such forces and conditions, can be confusing, tacit, vague or ambiguous: hidden behind a plethora of speculative, metaphorical or subjective references projected using ordinary speech-acts A.

Now let us turn our sights onto that ‘physical procedure’ and characterize the forces and conditions involved in the creation of meaning and the production of significance. A focal interpretation of such forces of production P and conditions of existence R is at hand.

The formulation that follows derives from Adi’s theory of semantics, where the abstract objects of Adi’s metalanguage objectify natural operations, forces and conditions. These sets of objects, defined below in mathematical terms, construct a conceptual polar coordinate system, given that folks share a proprioceptive sense of being (a body in motion, oriented in space and time).

While a skeptic might accept a claim that humans are specks on a rock hurtling through space, being a body in motion in space and time is only slightly more abstract, and ‘being human’ claims little more. It claims the need for knowing one’s position or location, power and relevant extent with respect to other states and objects in the same dimension. Adi’s arrangement interprets the limits to the natural system of objects, forces and states present to interpersonal experience from a proprioceptive point or value.

Computationally, any sequence, function, or sum of a series (such as a series of sounds or phonemes, i.e., signs) can be determined to be progressively approaching or receding from this point or value, i.e., its bearings can be determined. If meaning is determined to be the property of something existing, said or done to impact one’s sensations, as it appears to be, then this functionality appears critical to predicting significance, pertinence and relevance.

It has been difficult for most people to understand how the positions of arbitrary objects and vague forces and conditions can be characterized or calculated from language. Many linguists quickly dismiss the whole idea out of hand as radical, incomprehensible or impossible. That does not make them ‘right’.

Language is widely considered to be like a map of the territory of reality.  People use maps to get and set their bearings. People use language to navigate the world of other people and their opinions, along with other objects, things and feelings. Now that you have been introduced to this point of view, I urge the reader to think critically about what follows in connection with the examples that are included at the end of this characterization of Adi’s semantic objects.

While these forces and conditions are taken to be axiomatic, their implications can be barely perceptible. So I will first characterize the sets of (real) forces and conditions emanating from or impinging on the senses. Then I will present Adi’s semantic matrix where, essentially, thought and action, theory and practice, meet. The intersections of the matrix are overlaid with examples of legitimate workaday representations. Here, first, are the objects and sets comprising Adi’s semantic metalanguage, focused on the semantics of creativity (the ability to create):

Based upon semantic findings from a study of Classical Arabic, we assume there exists a changeless and universal content to life, a set of creative forces P, necessary to the body of conception, order and change in life:

P = { p(i) | i = 1, 2, 3 } = {assignment, manifestation, containment}.

Supervening on these forces are a symmetrical set G of psychosomatic states, G = {self, others}, symbolizing unity and plurality, and a symmetrical set T of biophysical states, T = {open, closed}, symbolizing propagation and restriction. When the objects of these sets are crossed, they reveal a fixed (and rich) set of conditions R that marshal the forces P into elementary (and evolutionary) processes or procedures:

R = T x G = { r(j) | j = 1 to 4 } = {(closed, self), (open, self), (closed, others), (open, others)}.
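The sets above can be written down directly; a minimal sketch, with the object names taken verbatim from the text (the ordering of the pairs is immaterial):

```python
# Adi's metalanguage objects as plain Python sets/tuples.
from itertools import product

P = ("assignment", "manifestation", "containment")  # creative forces
G = ("self", "others")                               # psychosomatic states
T = ("open", "closed")                               # biophysical states

# Conditions R = T x G: the four boundary/engagement pairings.
R = list(product(T, G))
print(R)

# The semantic matrix crosses the conditions R with the forces P,
# giving 4 x 3 = 12 intersections.
matrix = [(r, p) for r in R for p in P]
print(len(matrix))  # 12
```

Nothing here goes beyond the definitions already given; it only makes explicit that R is a Cartesian product and that the matrix discussed below has twelve cells.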

The objects organized by ‘self’ and ‘others’ are seen as categorical beings objectifying the engagement conditions present at all human and social events (wherever these entities are in relevant configurations in the same dimension). The states ‘open’ and ‘closed’ also organize categorical beings; instantiations of these states objectify boundary conditions. Some may associate these categorical beings with Whitehead’s “controlling presences”. A natural symmetry holds between these objects and conditions R and the objects organized by them. Symmetry is found at the root of life itself.

The former conditions objectify natural bonds formed from sensations of attraction and engagement.  This asserts nothing more than that the bare abstractions ‘self’ and ‘others’ stripped of any other associations yet afford a (concrete) sense of attraction and engagement (with unity and plurality) necessary to the formation of bonds.  The latter conditions afford a sense of the scope and constraint of present boundaries (e.g., the scope of space, distance and the constraint of time).

In essence, there are two sides to each state of being influencing the bonds and organizing the bodies in motion or flux and present at any event. The intersection of the conditions R with the set of forces P objectifies the valence of binding, unifying and organizing significant objects, forces and conditions into procedural states of being.

The selection and formulation of physical procedures, composed with respect to R and P, determines the type of polarity in the relationships R that ensue, whether applying or acting on the creative force of nature as implied by words and language. Adi derived four perceptible types of orientation from the crossing of boundary and engagement conditions. The valence of relationships R affords a sense of choice or bias, giving direction to, or unfolding: inward, outward, or being jointly or disjointly engaged.

The elementary processes, ‘Assignment’, ‘Manifestation’, and ‘Containment’, comprising the set of physical forces P within our dominion, are easily recognized as the creative forces of change when transformed into physical procedures and participatory acts of assigning, manifesting and containing; a capability to change the future in accordance with the conditions of existence R, described above.

Each speaker S marshals these forces and conditions in order to educe (to develop or bring out the latency of X, i.e., the potential of) the idea. The syntactic arrangement of consonant sounds encodes the symbolic processes that project the physical processes bearing on X. It is here that there is harmonious agreement (semantics) or fidelity, or not.

Consequent to this view, a speaker S should (naturally) choose words and use language (speech-acts A) in such a way as to designate those physical forces P and identify the objects, states and relationships R that bear upon (or will have relevance and bearing to) speaker S, listener L, or both, from an objective point of view that S and L can and do share. This prediction was tested by constructing a conceptual search engine (commercialized as Readware) that transforms arbitrary sequences of text and inquiries into values according to this theory. The search engine showed outstanding performance in tests that measure relevance, recall and precision in text-retrieval programs. It also passed reading-aptitude tests.

The results show that we can indeed construct a general point of view that thereafter predicts relevance and significance in matters presented to that objective viewpoint, one that can be readily implemented in computer logic. A proprioceptive point of view proves to be an objective point of view: a view that is psychologically sensible to both S and L and that includes a sense of the internal unity of self-awareness and the external plurality of others, as well as a sense of the states of propagation and restriction, as categorical beings in and of themselves. See the table below for examples.

The logic of the esthesic (aesthetic) side of language understanding is explained as follows: in order to educe sensation X, the listener (or reader) L filters the idea from within the projected sense-data while decoding speech-act A. If the idea is apprehended, its meaning is represented by the bearing of the forces P and conditions R on X; in which case we say that the meaning is induced in L, i.e., it causes the intended sensation X to actually or figuratively occur to L (i.e., to appear to represent or symbolize a relevant form of physical power or influence). In such a case the idea and its meaning can and will cause sensation X to occur. See the examples in the table below:

The Semantic Matrix of Creative Praxis

(the idea of conception)

Read Full Post »

Consider the nature of conceptual vs. data processing.

Data are elements of conception. A conceptual element of human insight or imagination is not data. A conceptual element, or concept, is symbolic of human insight and fancy; i.e., it is a function of creative thought, of engaging the imagination, the intellect and the creative force of existence in symbolic and physical processes of creation and its renewal.

A creative process is thereby directive and a concept is no arbitrary symbol. A concept represents the unification of symbolic processes of conception: the interplay and engagement of the intellect and imagination and psychological and physiological processes in the creative processes and conditions of conception; in the activity of perceiving and experiencing creation.

A concept can thus be seen as part of the larger totality of Creation. Such a totality engages not only the intellect and imagination but also the harmonious order, essence and totalities, or coherent wholeness, of subsequently experienced (and socially distributed) psychological, physiological and creative processes and conditions.

As I showed in my last post, the essence of the order, structure and coherent wholeness of the creative processes and conditions is condensed and objectified by way of shared conceptual insight. Such objects are often perceived, copied, reflected upon and instituted as the names of things, and used as words and expressions in the language.

Consider these long-lived conceptual institutions: Beauty. Justice. Liberty.

In the foreword to David Bohm’s book On Creativity, editor Lee Nichol writes:

We have found, developed and formally tested a language and the objective terms in which conceptual processes can be computationally understood and measured. While the independence assumption has led AI into torpor, a new interdependence assumption, coupled with conceptual processing and critical thinking, can lead to a new era of creative computing.

Creativity, not intelligence, is the hallmark of humanity. However, the prevailing view is that the concepts and insights behind creativity cannot be computationally defined and that creative thought is vaporous and empty of substance. The power of thought, or of concepts, to engender creative actions in human beings remains shrouded in religious or mystical superstition.

We need assistance and support though, to change that view and help to usher in a new era of intelligent progress and creative achievement.

Read Full Post »

I know I said in my last post that I would continue with some examples of the molecular structure of signs, but I have decided to postpone that demonstration.

I do this because a fellow empiricist sent me Rudolf Carnap’s paper “Empiricism, Semantics, and Ontology” (available here), all but accusing me of violating the basic principles of empiricism and of leading back to a metaphysical ontology of the Platonic kind. Nothing could be further from the truth. In later conversation, my friend assured me that he was not accusing me; however, he did mistake the subject matter of my semantics for that of linguistic (lexical or functional) semantics. It occurred to me that others are doing that as well.

It is difficult to speak about any sort of meaning in any context because much of modern society, including societies of professionals, has been covertly driven to and infected by mediocracy.

Google was the flag bearer of mediocracy onto the Internet when they further distorted the value of quantity over quality and pursued their business plan of monopolizing content irrespective of any judgments about quality, such as harmlessness, lawfulness, fit, utility, relevance, truthfulness, trust, etc. Why did Google announce that they would “do no evil”? (Note: this is not intended as defamatory; rather, it is stated as a matter of fact.) The announcement means that the object called evil was presupposed in the mind of the speaker. This speaks to the process of semiosis and to the fact that all public signs presuppose their objects.

Young people who expect to succeed in the future had better abandon any ideological, nominalist, secular, doctrinal or linguistic presuppositions they have about semantics and learn about semiosis. That is best done by viewing a video skit that is the very best introduction I have come across in my thirty years of practice. I dare say I could not have done a better job than John Deely in explaining this subject matter: the subject matter of semiotics. The video is in five parts for easier viewing. One should watch all five in order to form a complete picture of semiotics and what semiosis is all about (at least in the sense that I have come to know it, and what I have in mind when I refer to objectivity, meaning, relevance, semantic objects and structures, truth, etc.).

Here is the introduction and part one.

A sign, as ordinarily understood, is simply something that suggests the presence or existence of something else, a perceptible indication of something not immediately apparent. What’s so difficult about that? Why should that require the development of a whole new perspective on reality and experience, as so-called “semiotics”, the study of the way signs work, claims? This video, a dialogue between a semiotician and a proponent of “realist” philosophy, addresses directly the question of what difference semiotics makes for our understanding of what a sign is. (by John Deely)

Read Full Post »

I believe that western culture has been damaged while many Americans have been caught up in a stream of what I can only characterize as insincere thought. For if the thoughts of people were true and sincere, we most certainly would not be facing what appears to be one man-made crisis after another. We would not be struggling with extant experience punctuated by greed, deceit and insincerity at the power centers of modern life in the western world: government, banking and big business. The trappings of reward and/or power have replaced culture as the thing to be achieved, the main objective or goal to be reached. Patricians of the west tend to go more for the brass ring than the ring of truth.

I would wager that, when asked, most Americans would not know how to tell a sincere thought from an insincere one, or how to check the sincerity of their own thoughts. They cannot even recognize, and certainly cannot articulate, the objects of their own thoughts. In my view, that about sums up the breadth and depth of the problem we face. The cultivation of the person in any culture is an organic matter, you see. Many of our concepts are cultivated over generations and validated in the plurality of their implications over the range of the experience of many people and their lifetimes, let alone during one’s own lifetime. Yet we invigorate our words with so-called new meaning until they are laden and weighted down by the dominating dogma. Significant words like semantics and ontology, even knowledge and truth, can no longer be used in precise ways without extensive explanation and context.

In addition, people seem no longer willing to spend time tending to what may matter most of all: being sincere of thought. Without realizing it, they pay dearly for the privilege of that choice, for from unchecked and insincere thoughts sprout only erroneous actions, wasteful efforts and deflected, or at the very least misdirected, energy. Moreover, due to the pressures exerted by large, dispersed and sophisticated societies, I believe we are entering an era when a person will no longer be able to perceive the difference between the true thought and any other sort of thought, whether in error or not. No concept will escape. Therefore I would like to help once again introduce people to the illustrious virtue, the notable merit, of sincere thought and the means to achieve it.

While planning my own book, with the working title “An Anatomy of Thought”, I read the 2003 book of the same title by the very respected and accomplished British biologist and physiologist Ian Glynn, in which he wrote about the multiple meanings of the word mind and the difficulty of fixing the subject.

Glynn’s main subject and subtitle is: The Origin and Machinery of the Mind, where he focused on the evolution of the brain, the nervous system and the physiology of the body. The focus of my own work is the sufficiency and necessity of the sincerity or fidelity of thought and its symbolic representations, to extant and personal experience.

Many people believe that the modern human disposition and worldview, or range of thought, has been cultivated from the symbolism of language, and to some extent it has, though language competency does not necessarily make people better or more sincere thinkers. In fact, it can be found that more sincere thinkers have more competency in language. Consequently, an entire field of computing is dedicated to language with the objective of understanding it.

For the last thirty years, software for parsing, translating and categorizing words, building dictionaries, identifying names from other parts of speech, along with spelling and grammar aids, has been developed; yet, except for software developed by Adi and me (1985-2005), there is no software for functionally mapping a single disposition of the human mind or for measuring its fidelity and salience. Instead of understanding a single disposition of the human mind, the trend has been toward aggregation and statistical analysis of large data sets comprised of parallel texts or dictionaries.

Glynn relates how hard the meaning or contents of a state of mind are to determine when he writes about the multiple meanings of the simple word mind. He writes that we can mind the dog and mind the baby, and notes that minding dogs and babies are different activities. We can go out of our mind, bring something to mind and be mindful of our manners. We might or might not mind whether he has it correct or not. He might as easily have been referring to any word or phrase of any language, let alone mind, in this statement at the bottom of page 3 of the preface of his book:

Confusion is perhaps the single reason why current efforts to achieve artificial intelligence cannot be successful. All the prestigious players –those in government, in commerce and in circles between– are caught up in a historical tide of ingenuity based not upon the merit or pursuit of sincere thought, or on the greater good of the culture, but upon personal reward and the pursuit of power or control over the drudgery, dullness and insincerity of extant and mundane experience. After all, it is the easy thing to do.

The main trouble is that computational or artificial approaches, including pattern matching and data mining of linguistic or textual information, that work with so-called natural language processing, vocabularies and a set of rules or rule base –whether process- or reward-driven– cannot possibly make correct interpretations of facts when such correctness is essential. They look good –or good enough– to the masses, and that alone is the signal of fatality. When tested, they often break or a given process breaks down. This is well known.

They break because the conventions shared between those conversing machines (that many envision) do not require that their representations be sincere. In fact, programmers do not care and business executives do not want to know. That is, the data in the machine needn’t be based in fact or on correct interpretations of a fact. Moreover, there is no means to check whether an asserted fact is true, nor is there any ideal for comparison. There are logical conditions of truth and consistency, but these are insufficient determinants of the necessary fidelity or sincerity of a true judgment.

Since the 1950s the computing and software arts have taken off. Today, many of us cannot imagine life without computers. But if we look at the history of the development of the computer industry, we find that corporations took care in the early days to make data entry correct and complete in all respects. The reason for this is that they knew that once incorrect or inconsistent data crept into the system, the information output from computing the data would become corrupt and useless.

Now that we live in the Internet age of 2010, computing has become pervasive and we have everyone, even children, creating web sites and posting their knowledge, opinions, assertions, assumptions and all manner of information on the Internet and in social networks. Now that people believe that massive amounts of data will bring new intelligence and new knowledge, who is checking for consistency, completeness or sincerity (Wikipedia being a notable example and an exception to the norm)? The question is then: what means are available to the common man to obtain fidelity and, moreover, to check their own language or that of others, or to test for sincere thoughts and ideas? There is a missing foundation.

The English philosopher Roger Bacon (whose date of birth is established from his statement in the Opus Tertium, written in 1267, that “forty years have passed since I first learned the alphabet”), upon observing that all languages are built upon a common grammar, stated that they share a foundation of ontically anchored linguistic structures.

While the Franciscan friar and empiricist lectured on Aristotle and was certainly acquainted with Arab and Jewish commentators on Aristotle, he did not manage to find or point out any ontic foundation (a foundation for the way things are or for the way extant relationships work). It was not until Tom Adi came along in 1982 that we began to obtain the means to test our interpretations, to compare them with the facts and against the plurality of their implications.

Read Full Post »

Older Posts »