
Posts Tagged ‘abstraction’

The world is lacking an operational definition of intelligence that can lead to more exact thinking and to computer systems that help people to think more clearly and effectively. A good operational definition ought to be:

  1. Specific enough to be implemented as a procedure, one that can be easily and readily followed.
  2. Motivational, manageable, and measurable, such that it leads to invention, progress, and successful outcomes.
  3. Attainable, such that any baby can use the organism to sense and control entities and activities in its world or environment.
  4. Relevant, in that it is determinative of what is to become significant;
  5. Timely; and
  6. Salient.

This definition (stated below) addresses two questions:

  • Where do we get the intelligence to deal with a growing, changing reality?
  • How does intelligence work to make changes in our favor?

Most researchers agree that human intelligence is observed in behavior, in particular in language and through speech acts. The Sapir-Whorf theory of linguistic relativity was summarized by the semanticist Stuart Chase when he stated:

“First, that all higher levels of thinking are dependent on language. Second, that the structure of the language one habitually uses influences the manner in which one understands his environment. The picture of the universe shifts from tongue to tongue.”

Restating this linguistic theory as a systems theory and in terms of analytic and computational engineering, notational engineer Jeffry Long wrote:

“First, that all abstract thinking is dependent upon the existence or invention of notational systems. Second, that the underlying ontological inventions of the notational system one habitually uses influences the manner in which one understands his environment. Acquiring literacy in a major notation causes us to add a new dimension to our picture of the universe.”

Based on twenty-seven years of intimate experience, I can restate Tammam Adi’s theory of semantics based on Classical Arabic, in this way:

First, living in the world is a growing, expanding experience or (ontogenic) process in which we make things (speech, nouns, names; things, artifacts, etc.). The words of language are made of abstract structures referencing bits or segments of this growing/making reality that we construct and utilize for common edification and understanding.

Second, the growth in common and social sense, along with modern languages, rests on ontogenetic intelligence in the organism of mind and on the success of its notational system: its elementary (ontogenic) processes and semantic rules, and its recognizable symbols (e.g. alphabets) and system of writing. Collectively, we call these “ontological inventions” for making progress.

Third, word structure is composed of clusters or configurations of ontological inventions involving and representing both real and abstract entities and activities, arranged in such a way as to be productive of understanding (of making sense, meaning, things).

With Adi’s theory of (algebraic, axiomatic) semantics, it is possible to specify the ordinary conditions and ontogenic controls of sapience in the following concise and formalized way:

There is a self-organizing mechanism (regulating schemata) comprising:

  1. the polarity of an abstract entity (G), representing engagement conditions and distinguishing the involvement and participation of oneself and others (G = {Self, Others}), in;
  2. a symmetrical relationship (R) crossing the polarity of an abstract procedure, representing an ontogenic orientation and boundary conditions (T = {Open, Closed}, and R = T × G), for;
  3. a set of invariant and elementary processes (P = {assignment, manifestation, containment}) being structured by the abstract entity, using the polar procedure, for growing and making (sense, understanding, artifacts, etc.); and
  4. a schematic arrangement of such entities and activities that generates symbolic and semantic operations (syntactically) carried out or produced (i.e., interpreted) by enacting them (via speech acts, etc.).

We call this intelligence and we say: “Intelligence is the organism of a mind uniting (abstract and real) entities and activities in such a way that they are productive of regular changes from the beginning until the end.”
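To make the structure of this definition concrete, here is a minimal Python sketch of the regulating schemata. The set names G, T, R, and P follow the definition above; the data structures and the enumeration of schemata are illustrative assumptions, not part of Adi’s formal theory.

```python
from itertools import product

# G: engagement polarity; T: ontogenic orientation / boundary conditions
G = {"Self", "Others"}
T = {"Open", "Closed"}

# R = T x G: the symmetrical relationship crossing both polarities
R = set(product(T, G))

# P: the invariant, elementary processes
P = {"assignment", "manifestation", "containment"}

# Illustrative only: one "schema" pairs an elementary process with a
# boundary/engagement condition; enacting a schema corresponds loosely
# to carrying out a semantic operation (e.g., via a speech act).
schemata = [(p, t, g) for p in P for (t, g) in R]

for schema in sorted(schemata):
    print(schema)   # e.g. ('assignment', 'Closed', 'Others')
```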


Read Full Post »

A remark commonly attributed to Albert Einstein reads: “Computers are incredibly fast, accurate, and stupid. Human beings are incredibly slow, inaccurate, and brilliant. Together they are powerful beyond imagination.”

The partnership between human beings and computers is long and enduring, and there are many examples of just how powerful the influence of computers really is. This was especially true after the debut of the personal computer, and again after the debut of the Internet that keeps us connected today.

When spreadsheets came out, we became better tabulators. When word processing and spell-checkers arrived, we became better writers. The widespread use of relational databases made it easier to collect, store, and manage information, making us more intelligent about larger collections of data.

Over the decades of computing, the cost of storing data has dropped to nearly nothing. In many cases, storing data on the Internet is free. The cost of collecting data has also dropped significantly. There was a time, not so long ago, when the 300 baud modem was the most common way to connect or be “on-line” with another computer. Downloading 10 megabytes over long-distance telephone lines was not inexpensive. Now people connect to the Internet over public wireless networks in most cities, offered free by many business establishments, and they download a thousand times the amount of data moved in 1985.

But something went wrong. The five basic means and capabilities needed for intelligence are collection, storage, retrieval, analysis, and dissemination. We have systems for collection, storage, retrieval, and dissemination, but the systems we do have for analysis are not generally something anyone can run on a personal computer. Even where we can run them on a desktop PC, they are complex systems that require significant expertise to make them work well in limited areas of specialization.

Analyzing the patterns and ordering the data helps us learn about the world and arrive at better and more complete theories. Albert Einstein wrote: “Concepts that have proven useful in ordering things easily achieve such authority over us that we forget their earthly origins and accept them as unalterable givens. Thus they might come to be stamped as ‘necessities of thought,’ ‘a priori givens,’ etc. The path of scientific progress is often made impassable for a long time by such errors. Therefore it is by no means an idle game if we become practiced in analyzing long-held commonplace concepts and showing the circumstances on which their justification and usefulness depend, and how they have grown up, individually, out of the givens of experience. Thus their excessive authority will be broken. They will be removed if they cannot be properly legitimated, corrected if their correlation with given things be far too superfluous, or replaced if a new system can be established that we prefer for any reason.”

Yet here in the twenty-first century we still lack knowledge of those things that are given in our individual, private experience and in our public, social experience. There is no model, no theory, by which we can know, count, and measure the givens of experience. Einstein also wrote: “It can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple as possible without having to surrender the adequate representation of a single datum of experience.”

So it is a fair question to ask after the adequate representation of the givens of experience. It is reported that, in a letter to his son, Einstein wrote: “Life is like riding a bicycle. To keep your balance you must keep moving.”

Isn’t it time to move on to a new way of thinking about intelligence and our means and capability to alter the structure and order of our independent, yet collective, reality? The video below defines simple, basic, and abstract elements of thinking that could make it possible for computers to do more intelligent analysis in much simpler ways, and to help us become better thinkers in the process.

Read Full Post »

I would like to address the few questions I received on parts 1, 2, and 3 of the semantics of interpersonal relations. The first and most obvious question was:

I don’t get it. What are the semantics?

This question is about the actual semantic rules that I did not state fully or formally in any of the three parts. I only referred to Dr. Adi’s semantic theory and related how the elements and relations of language (sounds and signs) correspond with natural and interpersonal elements and relations relevant to an embodied human being.

Alright, so a correspondence can be understood as an agreement or similarity and as a mathematical and conceptual mapping (a mapping on inner thoughts). What we have here, essentially, is a conceptual mapping. Language apparently maps to thought and action and vice-versa. So the idea here is to understand the semantic mechanism underlying these mappings and implement and apply it in computer automations.

Our semantic objects and rules are not like those of NLP or AI or OWL, or those defined by the Semantic Web. These semantic elements do not derive from the parts of speech of a language, and the semantic rules are not taken from propositional logic. So that these semantic rules will make more sense, let me first better define the conceptual space in which they operate.

Conceptually, this can be imagined as a kind of intersubjective space. It is a space encompassing interpersonal relationships and personal and social interactions. This space constitutes a substantial part of what might be called our “semantic space,” where life as lived (what the Germans call Erlebnis) and ordinary perception and interpretation (Erfahrung) intersect, and where actions in our self-embodied proximity move us to intuit and ascribe meaning.

Here in this place is the intersection where intention and sensation collide, where sensibilities provoke the imagination and thought begets action. It is where ideas are conceived. This is where language finds expression. It is where we formulate plans and proposals, build multidimensional models and run simulations. It is the semantic space where things become mutually intelligible. Unfortunately, natural language research and developments of “semantic search” and the “Semantic-Web” do not address this semantic space or any underlying mechanisms at all.

In general, when someone talks about “semantics” in the computer industry, they are talking either about English grammar, about RDF triples, or about propositional logic in a natural or artificial language, e.g., a data definition language, a web services language, description logic, Aristotelian logic, etc. There is something linguists call semantics, though the rules are mainly syntactic rules with limited interpretive and predictive value. Those rules are usually applied objectively, to objectively defined objects, according to an objectively approved vocabulary defined by objectively-minded people. Of course, it is no better to define things subjectively. Yet there is no need to remain in a quandary over what to do about this.

We do not live in a completely objective, observable, or knowable reality, nor in a me-centric or I-centric society; ours is a we-centric society. The interpersonal and social experience that every person develops from birth is intersubjective: each of us experiences the we-centric reality of ourselves and others entirely through our own selves and our entirely personal world view.

Perhaps it is because we do not know, and cannot know through first-hand experience at least, what any others know or are presently thinking, that this sort of dichotomy sets in between ourselves and others. This dichotomy is pervasive and even takes control of some lives. In any case, conceptually, there is a continuum between the state of self-realization and the alterity of others. This is what I am calling the continuum of intersubjective space.

A continuum, of course, is a space that can only be divided arbitrarily. Each culture has its own language for dividing this space. Each subculture in a society has its own language for dividing this space. Every technical field has its own language for dividing the space. And it follows, of course, that each person has their own language, not only for dividing this space, but for interacting within its boundaries. The continuum, though, remains untouched and unchanged by interactions or exchanges in storied or present acts.

The semantics we have adopted for this intersubjective space include precedence rules formulated by Tom Adi. Adi’s semiotic axioms govern the abstract objects and interprocess control structures operating in this space. Cognitively, this can be seen as a sort of combined functional mechanism, used not only for imagining or visualizing but also for simulating the actions of others. I might add that while most people can call on and use this cognitive faculty at will, its use is not usually a deliberate act; it is mainly used subconsciously and self-reflexively.

We can say that the quality of these semantics determines the fidelity of the sound, visualization, imitation, or simulation to the real thing. So when we access and use these semantics in computer software, as we do with Readware technology, we are accessing a measure of the fidelity between two or more objects (among other features). This may sound simplistic, though it is a basic-level cognitive faculty. Consider how we learn through imitation. Consider also the cognitive load required to switch roles, and how easily we can take the opposite or other position on almost any matter.

We must all admit, after careful introspection, that we are able to “decode” the witnessed behavior of others without needing to exert the kind of conscious cognitive effort required to describe or express the features of such behavior using language. It may be only because we must translate sensory information into sets of mutually intelligible and meaningful representations, in order to use language to ascribe intentions, order, or beliefs to self or others, that the functional mechanism must also share an interface with language. It may also be because language affords people a modicum of command and control over their environment.

Consider the necessity of situational control in the face of large, complex and often unsolvable problems. I do not know about you, but I need situational control in my environment and I must often fight to retain it in the face of seemingly insurmountable problems and daily ordeals.

Now try to recognize how the functional aspects of writing systems fill a semiotic role in this regard. Our theoretical claim is that these mutually intelligible signs instantiate discrete abstract clusters of multidimensional concepts relative to the control and contextualizing of situated intersubjective processes.

As the particles and waves of quantum mechanics are to physics, these discrete intersubjective objects and processes are the weft and the warp of the weaving of the literary arts and anthropological sciences on the loom of human culture. We exploited this functional mechanism in the indexing, concept-analysis, search, and retrieval software we call Readware.

We derived a set of precedence rules that determine interprocess control structures and that gave us root interpretation mappings. These mappings were applied to the word roots of an ancient language, roots selected because modern words derived from them are still in use today. These few thousand root interpretations (formulas) were organized into a library of concepts, a ConceptBase, used for mapping expressions in the same language and across different languages. It was a very successful approach, for which we designed a pair of REST-type servers with an API to access all the functionality.
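As a rough illustration only, and not the actual Readware API, the organization just described might be sketched in Python along the following lines; the class and method names (RootInterpretation, ConceptBase, interpret) are hypothetical.

```python
from dataclasses import dataclass
from typing import Dict, Optional, Tuple

@dataclass
class RootInterpretation:
    """Hypothetical record for one root interpretation (formula)."""
    root: str                      # an ancient word root
    processes: Tuple[str, ...]     # elementary processes involved (from P)
    conditions: Tuple[str, ...]    # boundary/engagement conditions (from R)

class ConceptBase:
    """Illustrative stand-in for a library of root interpretations."""

    def __init__(self) -> None:
        self._formulas: Dict[str, RootInterpretation] = {}

    def add(self, interp: RootInterpretation) -> None:
        self._formulas[interp.root] = interp

    def interpret(self, word_root: str) -> Optional[RootInterpretation]:
        # Map a word root (shared by modern words derived from it)
        # to its stored interpretation, if one exists.
        return self._formulas.get(word_root)
```

A real ConceptBase would store Adi’s actual formulas rather than free-form tuples, but the lookup pattern, from a word root to a stored interpretation, is the point of the sketch.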

To make this multi-part presentation more complete, I have posted a page with several tables drawn up by Tom Adi, along with the formal theory and axioms. There are no proofs here as they were published elsewhere by Dr. Adi. These tables and axioms identify all the key abstract objects, the concepts and their interrelationships. Tom describes the mappings from the base set (sounds) and the axioms that pertain to compositions and word-root interpretations, together with the semantic rules determining inheritance and precedence within the control structures. You can find that page here.

And that brings me to the next question, which was: How can you map concepts between languages with centuries of language change and arbitrary signs? The short answer is that we don’t. We map the elements of language to and from the elements of what we believe are interlinked thought processes that form mirror-like abstract and conceptual images (snapshots) of perceptive and sensory interactions in a situated intersubjective space.

That is to say that there is a natural correspondence between what is actually happening in an arbitrary situation and the generative yet arbitrary language about that situation. This brings me to the last question that I consider relevant no matter how flippant it may appear to be:

So what?

The benefits of a shared semantic space should not be underestimated, particularly in this medium of computing, where scaling of computing resources and applications is necessary.

Establishing identity relations is important because it affords the self the capacity to better predict the consequences of the ongoing and future behavior of others. In social settings, the attribution of identity status to other individuals automatically contextualizes their behavior. Knowing that others are acting as we would, for example, effectively reduces the cognitive complexity and the amount of information we have to process.

It is the same sort of thing in automated text processing and computerized content-discovery processes. By contextualizing content in this way (e.g., with Readware), we dramatically and effectively reduce the amount of information we must process from text, gaining more direct access to relevant topical and conceptual structure for clustering and for supporting further discovery processes. We have found that a side effect of this kind of automated text analysis is that it clarifies data sources by catching unnatural patterns (e.g., auto-generated spam), and it also helps identify duplication and error in data feeds and collections.
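As a hypothetical sketch of how concept-level contextualizing might support duplicate detection, consider the following; the helper names (extract_root, concept_signature, likely_duplicates) and the Jaccard-overlap heuristic are illustrative assumptions of mine, not Readware functions.

```python
from typing import Set

def extract_root(word: str) -> str:
    # Placeholder: a real system would reduce a modern word to its word root;
    # here we simply lowercase and strip punctuation as a stand-in.
    return word.lower().strip(".,;:!?\"'")

def concept_signature(text: str, known_roots: Set[str]) -> Set[str]:
    """Collect the roots in a text that have interpretations (e.g., in a ConceptBase)."""
    roots = {extract_root(w) for w in text.split()}
    return roots & known_roots

def likely_duplicates(text_a: str, text_b: str,
                      known_roots: Set[str], threshold: float = 0.9) -> bool:
    """Flag two texts as near-duplicates when their concept signatures mostly overlap."""
    sig_a = concept_signature(text_a, known_roots)
    sig_b = concept_signature(text_b, known_roots)
    if not sig_a or not sig_b:
        return False
    overlap = len(sig_a & sig_b) / len(sig_a | sig_b)   # Jaccard similarity on concepts
    return overlap >= threshold
```

The design choice in the sketch is to compare concept signatures rather than raw strings, which tolerates ordinary rewording while still flagging near-duplicate or unnaturally repetitive text.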

Read Full Post »