Student Publications
Stefan Fokuhl
Title: Cognitive Psychology
Area: Social and Human Science
Program: Doctorate in Business Administration
Available for Download: Yes
Abstract
Cognitive psychology has become increasingly important over the last decades. It informs other branches of psychology and has changed them, and other parts of social science are influenced as well: new knowledge about learning shapes pedagogy, and new insights about attention matter for quite different realms, such as marketing.

This shows how important cognitive psychology is. The purpose of this thesis is to introduce some parts of cognitive psychology: topics such as attention, consciousness, memory, learning, thought and language are illustrated. Two methods used in mental processes are also presented: algorithms and heuristics. Their properties, advantages and disadvantages are explained. A chapter about intelligence rounds the thesis off.
1 Introduction
The term 'cognitive', or its noun 'cognition', derives from the Latin expression 'cognoscere', which means 'to know'. Brought into psychology, it names the branch of cognitive psychology. Here it covers not just everything that deals with knowledge but also concepts like mind, reasoning, perception, intelligence and learning, as well as memory, attention, action, problem solving and mental imagery. Cognition is an abstract object that cannot be investigated directly but only by observing others.
There are many similarities between how we process information and how modern technologies process information; in a very theoretical sense the two are even equivalent. Moreover, a computer is much faster and more reliable in processing given information, which is why more and more computers replace error-prone human workers in our factories.
Both computer and human brain use either algorithms or heuristics to process high-level information. These two methods appear in several areas, such as problem solving, reasoning and language comprehension. They recur independently of the area and accompany it like a leitmotif; they are introduced in the corresponding chapters, and a first sketch of the contrast follows below.
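Since the thesis returns to this contrast repeatedly, a minimal sketch may help fix the terms. The following Python fragment is my own illustration (the search task and function names are hypothetical): an algorithm is a guaranteed but possibly slow step-by-step procedure, while a heuristic is a fast rule of thumb that may fail.

```python
# Hypothetical task: find the index of a target value in a list.

def find_algorithm(items, target):
    """Algorithm: exhaustive, step-by-step procedure.
    Guaranteed to find the target if it is present, but may be slow."""
    for i, item in enumerate(items):
        if item == target:
            return i
    return None  # definitive answer: the target is absent

def find_heuristic(items, target, likely_positions):
    """Heuristic: a rule of thumb that checks only 'promising' spots
    (e.g. where the target was seen before). Fast, but may fail
    even when the target is actually in the list."""
    for i in likely_positions:
        if i < len(items) and items[i] == target:
            return i
    return None  # inconclusive: the target may still be present

data = [7, 3, 9, 3, 5, 1, 8]
print(find_algorithm(data, 5))             # 4 (guaranteed)
print(find_heuristic(data, 5, [0, 1, 2]))  # None (the heuristic missed it)
```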
But human beings are not only able to process new or already stored information; they can also create new ideas and make personal decisions. In this respect, cognitive processes are much richer than 'just processing machines'. A good example of such an information-processing machine is a robot that can walk, receive input from the environment and process it, but will never be able to have desires or ideas of its own. Here the differences between a mere machine and a living creature begin. Such desires or ideas are built in the brain as cognitive processes: we combine stored information into new structures and create things we first imagine. No computer or robot can do that. But since we construct computers and robots, we can evaluate how much cognitive effort our brain is able to perform.
The goal of this thesis is to show how the human brain accomplishes the tremendous and difficult assignment of information processing in different, even disjoint partitions. How is our brain prepared to achieve this so conveniently?
2 Definition
This thesis consists of six parts: Consciousness/Attention, Memory, Learning, Thought, Language and Intelligence. Each component is presented briefly here:
Consciousness/Attention
In order to interact with our environment we must first be able to receive information; that is the task of our senses. But the received information must also be processed, that is, we must be aware of what is going on around us. This state is consciousness. If we cannot process the input, then we cannot notice our environment and act, react or interact with it. Thus, consciousness is the basis or precondition of any deliberate mental processing. However, there is too much input, and we need to filter out useless and unimportant information; this is done by attention. Consciousness and attention are therefore very important steps for any cognitive process concerning incoming information from outside. They belong to cognitive psychology as the first steps of mental processing of input information. Both terms, consciousness and attention, are introduced here.
Memory
After receiving such a huge amount of information we must encode, store and retrieve it in order to reuse it in a reasonable way. According to the theory of Richard Atkinson and Richard Shiffrin (1968) there are three different stores, which are introduced in this chapter. Each part of the memory encodes, stores and retrieves information, but with different methods, durations and capacities. When the importance of an item is high enough, it is transferred to the next store, until it is finally kept in the last one. It is also mentioned how we can improve the storing of information. Memory is a part of cognitive psychology because we can only perform mental tasks on information that is not volatile and therefore not lost. It would not be possible to compare any pieces of information if they were not stored somewhere.
Learning
In this chapter the two types of conditioning according to Pavlov, Thorndike and Skinner are introduced: classical and operant conditioning. At first sight they seem to have nothing to do with cognitive psychology, but they too are based on mental processes and require cognitive capability. During conditioning some utilities can be employed, such as reinforcement or punishment. There is also another type of learning that is based on observing and imitating others: social learning. These kinds of learning are not based on behaviorism but on cognitive processes.
Thought
Psychologists categorize our daily thinking into problem solving, reasoning and decision making. In this context the two important methods appear for the first time: algorithms and heuristics. Before that, the question is answered of how we encode our thinking: in language, in imagery or even acoustically. Furthermore, it is shown how we categorize objects into a reasonable hierarchy in order to retrieve them faster at a later access. Problem solving, reasoning and decision making are obviously mental processes that belong to cognitive psychology.
Language
One of the greatest tasks mastered in early childhood is the acquisition of the mother tongue. There is no grammar book and no vocabulary list, just listening to the people around the child. Thus, a child learns only from thinking about the phrases it has just heard and from imitation. It is also discussed what a language actually is: it consists of several levels that are processed separately. In order to predict the continuation of a sentence we are still listening to, we apply heuristics as well. Another question is how far one's own language influences the way of thinking: does the language form the thoughts? Acquiring a language and comprehending a sentence are purely mental processes; language is an important part of cognitive psychology.
Intelligence
The last chapter of this thesis treats the ability or capability of a human being to perform mental processes. Defining intelligence is difficult, if not impossible, and several approaches exist: some psychologists think that there is a multiple system of intelligences, others divide intelligence into three parts. Intelligence even seems to depend on the external environment: one culture puts its emphasis on matters that are completely unimportant in other cultures. Some methods of measuring intelligence are explained. As results of such tests, the IQ and the normal distribution are introduced.
3 Consciousness / Attention
3.1 Consciousness
While we are awake, and even while we sleep, a huge amount of information reaches us. If we processed all of it we would be overloaded and too busy to do anything else. Consciousness allows us to filter out the most important information and to silently drop the useless signals. We can decide what is important, concentrate on what we regard as essential, and combine the new information with the information already memorized in our brain into a useful structure.

Thus, we can interpret and analyze currently given data and then merge them with our experience into adequate activities (Marcel, 1983). First, we can monitor the environment and, second, we can control the resulting functions (Kihlstrom, 1984). Consciousness lets us directly control our life; it even protects it: if danger approaches, it makes us aware of the danger, and we can react in the right manner by using the current information combined with our experience. It lets us be aware of positive and negative events.

But consciousness does not work like an input machine in which all information is processed in the same way; there are personal and individual differences in how we process the same information. In a cinema, for example, all members of the audience see the same movie (the same input), but each processes it and thinks about it in a personal way. Thus, consciousness is not a bare instrument like a computer but an individual and complex instrument that lets us survive in the world.
3.2 Attention
As already said, consciousness filters out unimportant information and lets important input enter; we are aware of only a few signals. We decide what is important for us by turning our attention toward a task; all other information is diminished, though not totally excluded. While we are concentrating, the focus is on one task and everything else is pushed aside.
3.2.1 Attention on Several Tasks
The interesting question arising here is whether we are able to concentrate on more than one task at the same time. We learned above that we focus on one task; are we able to focus on several tasks simultaneously? This also clarifies how far the remaining information is filtered out. Several experiments have been developed to investigate this.
3.2.1.1 Dichotic Presentation
Colin Cherry (1953) developed an experiment to investigate how we select attention among multiple sources. A subject receives a different message in each ear and must repeat one of them right after hearing it; he is thus forced to concentrate on one voice. This situation happens quite often at parties, where multiple people talk at the same time and we want to follow one message. The subjects could recall the message from the attended ear but were not able to report the second message. Yet the second message was not completely ignored: at least, they noticed when the voice was replaced by one of a different gender or by a tone, and when the unattended message suddenly became identical to the first one, all subjects noticed it. This experiment tested the auditory channel, but there are tests that check the visual channel as well.
3.2.1.2 Stroop Effect
One famous experiment is called the Stroop effect. Color words are written in a table, and the ink color differs from the word, which makes it very difficult to quickly name the actual color of each word.
Red      Green    Yellow   Blue
Green    Blue     Red      Yellow
Blue     Yellow   Green    Red
The written word interferes with the ink color. There are two stimuli, and at each word we have to decide which one to concentrate on. Sometimes it tips over and we say the wrong word; that is an indicator that we cannot ignore the non-attended signal completely.
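To make the two competing stimuli concrete, here is a small Python sketch (my own illustration, not part of the original experiment) that generates conflicting word/ink pairs of the kind shown in the table above:

```python
import random

COLORS = ["red", "green", "yellow", "blue"]

def stroop_trial():
    """One Stroop trial: a color word printed in a different ink color,
    creating the two conflicting stimuli described above."""
    word = random.choice(COLORS)
    ink = random.choice([c for c in COLORS if c != word])  # force conflict
    return word, ink

for _ in range(3):
    word, ink = stroop_trial()
    print(f"word: {word:<6} ink: {ink:<6} correct answer: {ink}")
```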
3.2.1.3 Counting a Number Chain
In a further experiment (Kahneman, 1970) the subjects were told to perform two tasks. First, they had to listen to a spoken chain of numbers (example: 4-2-6-8); after one second they had to add one to each digit and speak the result aloud (5-3-7-9). Second, they had to identify a visually shown single letter and name it aloud after the transformed number chain. Thus the sequence was: a) seeing the single letter, b) listening to the numbers, c) repeating the transformed numbers after one second, d) repeating the letter. The elapsed time between a) and b) was varied. Two dependent variables were used: first, the average percentage of wrongly repeated letters as a function of the point in time at which the letter was displayed; second, the average percentage of errors while repeating the number chain as a function of the same point in time. Figure 1 shows the result of the experiment: most errors in repeating the letters occurred within a thin slice of time.
Fig. 1: Chain of Numbers
3.2.1.4 Conclusions
The tests (and some others) allow
the following conclusions:
3.2.1.4.1 Attention is a Gradual Process
One noticeable fact in the dichotic presentation was that the subjects nevertheless recognized personally significant or important messages on the unattended ear. If this were not possible, important information (e.g. danger or a cry for help) would never be able to get our attention. That means there must be an analysis of all received information, even the unattended; of course, the attended information is processed more intensively and stays longer in memory (Norman, 1968).
3.2.1.4.2 Input is Analyzed by Experience
Because of the fact mentioned above, there must be a mechanism that checks whether there is meaningful information even in the unattended input. This implies that inputs are compared with information already known from memory. According to Neisser (1967), the new incoming input is combined with information from memory and then transported to consciousness. Thus, all information that reaches the conscious level is already preprocessed; in other words, it is impossible for information to reach the conscious level without being preprocessed.
3.2.1.4.3 Attention is a Limited Function
The chain-of-numbers experiment shows that we cannot perform more than one task, or pay attention to multiple inputs, at the very same time. If the elapsed time is long enough, we are able to do two or more things 'at the same time': as long as the time slice is wide enough we can easily separate two tasks, but as the delay gets shorter we have problems solving both tasks adequately. It also makes a difference what the two tasks are: the more similar the inputs, the more difficult they are to perform together (Navon & Gopher, 1979). The famous example is reading one text while taking dictation of another; it is almost impossible. But it can be trained to some degree: after a certain time of training, subjects were able to read while writing a dictated text (Spelke, Hirst, & Neisser, 1976).
3.2.1.4.4 Automation
Yet there is an exception that lets us handle more than two tasks and their inputs. For a given task we can select the necessary input, and this input seems to step forward. Thus, subjects can easily search for numbers among a heap of letters, or for strokes and crosses among a crowd of check marks; the items to be searched for seem to be emphasized. Exactly this emphasis is not consciously controlled: the filtering process appears to be automated (Posner, 1982; Schneider & Shiffrin, 1977).

Another kind of automation happens with tasks we are used to doing: we no longer need to concentrate on those processes. A vivid example is driving a car. While we learn to drive we are fully concentrated on the different subtasks and the corresponding inputs. Later, when we have learned it, we do not concentrate on the driving and can do other things at the same time, like talking with the passenger. Only when the driving needs more attention (e.g. driving into a parking lot) do we stop talking and concentrate on it. Moreover, we are able to carry out several trained, unattended tasks while we concentrate on one untrained, unknown task.
3.2.1.5 Decisions without Thinking
Because of our ability to do things without attention we can run more than one process, which is a big advantage. But there is a big disadvantage, too: we can decide or perform something without paying attention to it. While we are concentrating on a single process, someone else might ask us a question that is unimportant to us, and we answer or react without paying attention to what we answered. This effect is called compliance. In an experiment, people in the subway were asked to give up their seat with a given, meaningful "because"; but people gave up their seat even when the "because" was a nonsense phrase (Langer, 1978). Later, when asked why we decided that way, we try to find a reason and name causes that did not exist in reality; we construct an explanation for something that never happened (Zimbardo, 1995).
Chapter Summary
In this chapter a short introduction to consciousness was given first; second, the term attention was illustrated. The question was whether we are able to pay attention to multiple inputs and, if not, which parameters are used to decide what we pay attention to. After a lot of practice we are able to automate a task and no longer need to concentrate on it; we are thus freed for further tasks and can perform the automated one unconsciously. This is an important milestone in growing up and being able to do more things at the same time.
4 Memory
4.1 Introduction
To survive we need not only a heart and other organs but also a construct that allows us to store any kind of information. Meaningful information is needed constantly in our daily life; without our own storage we would not even be able to walk or eat. Every action that needs information would be impossible to perform. Since we have computers, it is easier to explain why we need such a construct called memory: the analogy between memory in general and a computer helps us understand that a memory is necessary. No computer could work at all without memory; no person could survive without memory. In fact, human memory is far more complex than any kind of computer. Our human memory is located in our head. In this chapter I provide an introduction to how our memory works.
First of all, some definitions. Memory is not only the place or storage where information is kept; it is the whole process or ability of encoding, storing and retrieving any kind of information. Again, this can be compared with a branch of computer technology: databases. A database stores and retrieves any kind of information. But memory is far more complex, since it realizes the encoding as well; a database needs already encoded information, but storing and retrieving are similar.
Encoding is the process by which any kind of information is changed in such a way that it becomes storable in the memory. Again an analogy from computer technology: a picture on paper is not storable in computer memory. First it must be scanned in, digitized, and changed into a certain format of bits. This last process is called encoding; now the picture is storable.
Storage is the place where the information is held. Each stored piece of information needs a storage unit, and we have only a limited number of such units, so it could theoretically happen that the storage becomes full. This is the same situation as a full hard drive: all storage units (there: sectors and cylinders) are used and no further information can be stored. The storage units of our memory and their limits are explained later.
Retrieval is the process of finding already stored information again. This process is as important as encoding and storage, for information that is merely stored is useless; it needs to be found quickly and effectively by using hints and indexes.
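Since the thesis leans on the database analogy, the three processes can be sketched in a few lines of Python. This is my own illustration; the function names and the dictionary-based "storage" are hypothetical, not the author's model.

```python
store = {}  # "storage": a limited set of units, here a Python dict

def encode(raw):
    """Encoding: transform raw input into a storable form
    (cf. scanning a paper picture into bits)."""
    return raw.strip().lower()

def memorize(key, raw):
    """Storing: place the encoded information in a storage unit."""
    store[key] = encode(raw)

def retrieve(key):
    """Retrieval: find the stored information again via an index (the key)."""
    return store.get(key)  # None if no path to the information exists

memorize("capital_of_france", "  Paris ")
print(retrieve("capital_of_france"))  # paris
print(retrieve("unknown"))            # None: never stored, not retrievable
```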
4.2 Encoding, Storage and Retrieval
4.2.1 Encoding
The process of human encoding is somewhat more complex than a computer's encoding, and it comprises more subtasks. The input is analyzed first and compared with already stored patterns (is it a sound, a picture, a smell?); thus it is not merely a 'stupid' encoding but an intelligent preprocessing. Much useless input is already filtered out at this point. The input is categorized or classified into already known, general areas, or even recognized as important for later purposes.
4.2.2 Storage
Storage does not work like a machine. We do not store information like a computer, which memorizes every single word and never forgets it. Rather, storing information demands several processes before it is stored permanently. The more a piece of information is connected to similar or neighboring items, and the more it is practiced and repeated, the greater is the likelihood that we store it permanently. Many psychologists believe that once information is stored, it will never be lost again; the only thing that prevents us from remembering is that we cannot find it. And thus we arrive at the last important process:
4.2.3 Retrieval
Retrieving information seems simple, but it is the biggest problem among all the processes. A piece of information might be stored well in the memory, yet it is useless unless we can find it again. Retrieval works better the more paths are built to the information; in other words, the more indexes point to the information, the more possibilities exist to find it quickly and reliably. One fact is unique: while we retrieve a piece of information, it is changed. Each hit on a memory unit changes its content but reinforces the retrieval; what we retrieve often is likely to be retrieved even better the next time, as if the path to the information were made wider. There are two ways of retrieving information: recall and recognition. Recall is a completely self-made reconstruction of stored memory without any hints, whereas recognition is just a comparison between a given input and stored information. Because the input is given, recognition is easier to perform than recall.
4.3 Three Stores
One view of how memory works is the approach of Richard Atkinson and Richard Shiffrin (1968). This view is widely accepted, although other views have been developed as well; because of its acceptance I will describe only the view of Atkinson and Shiffrin. According to them, memory is divided into three different stores: the sensory memory, the short-term memory and the long-term memory. These three stores do not each occupy a certain realm in the brain; they are rather a hypothetical structure. Information always flows from the sensory memory via the short-term memory to the long-term memory. One fact makes the whole memory process more complex: each store or stage has its own encoding, storage and retrieval. A compact sketch of this pipeline is given below.
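As a summary of the Atkinson-Shiffrin view, the pipeline can be written down as a small data structure. The numbers are the approximate values cited in this chapter; the structure itself is my own illustration, not part of the original theory.

```python
STORES = [
    {"name": "sensory memory",      # precategorial buffer for raw input
     "duration": "~0.5 s (iconic), a few s (echoic)",
     "capacity": "at least nine items (Sperling)",
     "transfer": "selective attention + pattern recognition"},
    {"name": "short-term memory",
     "duration": "~20 s unless rehearsed",
     "capacity": "7 +/- 2 chunks",
     "transfer": "elaborative rehearsal"},
    {"name": "long-term memory",
     "duration": "potentially lifelong",
     "capacity": "no known limit",
     "transfer": None},  # end of the pipeline
]

# Information always flows in this order:
print(" -> ".join(s["name"] for s in STORES))
```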
4.3.1 Sensory Memory
The sensory memory is the first memory within the memory chain; all information from the senses arrives here. This memory goes into action directly after the input but before any process of pattern recognition; therefore it is precategorial (Crowder & Morton, 1969). Each sense has its own part within the sensory memory: the visual part is called iconic memory, the auditory part echoic memory. The remaining parts are not as well investigated as these two. The retention of the iconic memory lasts nearly half a second, that of the echoic memory around a few seconds (Neisser, 1967). This kind of memory is needed for the recognition of patterns: if we did not have such a memory, we would see or hear only as long as the physical input lasts; there would be no 'buffer', and the time would be too short to process the input contiguously.
4.3.1.1 Encoding
During this first encoding there is already a selection of which inputs are marked as important (not to be confused with pattern recognition). While performing, a sportsman does not notice injuries; only later, after finishing, does he or she become aware of them. Likewise, a soldier does not notice his own wounds until he is off the battlefield. These two examples show that there is a selection regarding importance.
How long and how much can the sensory memory store? According to the experiments of Sperling (1960) we can store input only for a limited time and in a limited amount. He presented lines as shown below, each containing three randomly ordered consonants, to subjects for a period of 50 ms.

F W X
D N W
F Q L

After showing the complete table he asked them to repeat one line, chosen by an acoustic signal. He noticed that all subjects were almost always able to repeat the chosen line perfectly, and interpreted this result as evidence that we can hold nine items in the sensory memory. In further tests with four consonants per line and a longer delay he could ascertain the limit of the sensory memory: after a delay of one second the subjects were significantly less able to repeat the chosen line.
Later scientists extended the experiment to the acoustic channel, testing memory for auditory input (Darwin, Turvey, & Crowder, 1972). They showed that we can store more acoustic information than we can normally report; this fact is probably necessary for processing words and sentences into a semantic meaning.
But why is a limit of one second useful, and why just one second? Two considerations may give an answer. Firstly, if the retention time were too short we could not process the input continuously; there would be interruptions while merging the received input. Secondly, if the retention time were too long, old information would interfere with the new; we would have an overlap of old and new inputs. An experiment by Averbach & Coriell (1961) demonstrates this. Two rows of eight letters each were presented briefly to subjects. A marker indicating the location of a target letter was presented either simultaneously or at varying time intervals after the presentation of the letter array, and subjects were to report the identity of the marked target letter. The accuracy of identification decreased as the temporal interval between the onset of the letter array and the marker increased; during a certain time the subject 'sees' the marker instead of the previous letter. This phenomenon is called backward masking.
During this very first stage of processing there is a race of all inputs against time. Most inputs lose that race and never reach the short-term or the long-term memory, because the input is too brief and is directly interfered with by the next inputs. Thus, we need a certain time to receive and successfully process useful and important information.
4.3.1.2 Conditions for a Transfer to the Short-term Memory
We receive much more information than we can process; therefore the sensory memory filters out the vast majority. What are the conditions under which a piece of information is transferred to the short-term memory?

First, we must pay attention to the input. Only an input that is important and worthy enough for us to attend to is able to reach the next level. If this is the case, we select that input and concentrate on it; this step or decision is called selective attention. We select one input out of a multitude of inputs and concentrate on it while ignoring the rest. This input is a candidate for further processing in the short-term memory.

Second, while we perceive an input, pattern recognition takes place. If an input consists of patterns that we can recognize and process into useful information, it remains available longer in the sensory memory and will be transferred to the short-term memory, because it is regarded as important, useful and helpful.
4.3.2 Short-term Memory
According to Richard Atkinson and Richard Shiffrin, the short-term memory is the second kind or realm of memory. In the order of memory processes it lies between the sensory and the long-term memory; as already mentioned, the flow goes from sensory via short-term to long-term memory. Some interesting properties distinguish it from the other two stores, the sensory and the long-term memory (Zimbardo, 1995):

· The capacity of the short-term memory is very limited; compared to the other two memories it is negligible.

· The duration is also very limited, around 20 seconds, unless the information is kept conscious.

· Only in this stage is information processed consciously; the other two stores are never conscious to us.

· This part of memory is the only one able to assemble subsequent pieces of input into a context, such as a conversation or a fluent stream of visual or auditory input from a speech. Through the short-term memory we can also follow and adapt to a changing situation; only here are we aware of any flow or change. One example: we sit in a restaurant, talking with concentration, and the waiter passes our table. After a while a loud clangor resounds. Even though we have not seen the cause, we know that it was caused not by a tree or a car or whatever, but by the waiter. The short-term memory combines the just-received input with information from the long-term memory to interpret the cause (Baddeley & Hitch, 1974).
4.3.2.1 Encoding
As already said, only selected information regarded as important reaches the short-term memory. But the question is how it is encoded there. Two kinds of encoding are used: articulatory-acoustic and semantic encoding.
4.3.2.1.1 Articulatory-acoustic Encoding
One method of encoding is articulatory-acoustic encoding. If an item has a label, we tend to hold the label in the short-term memory; even verbal patterns received as visual input (reading) are stored in an acoustic form. Experiments with lists of letters have demonstrated this. Subjects were presented a set of letters such as CBTPEGFSYX for a short time. Memorization mistakes were made with letters that sound similar (D and T), not with letters that are written similarly (D and O) (Conrad, 1964). During the test nobody read the letters aloud or spoke them. This shows that we think and memorize in an acoustic or language form, not in visual pictures such as the shape of the letter.
But the question remains whether we use an acoustic code (based on the description of the sound) or an articulatory code (based on how the word is pronounced). In both cases it has something to do with the sound of the item; hence the term 'articulatory-acoustic' is used to acknowledge this still-existing uncertainty.
Tests with deaf subjects show that other encodings can be used as well, namely visual and semantic encoding, though hearing people use these much less (Bellugi, Klima, & Siple, 1975; Frumkin & Anisfeld, 1977).
4.3.2.1.2 Semantic Encoding
Keppel & Underwood (1962) investigated the paradigm of Peterson & Peterson (1959) further and pointed out that the very first trial was recalled error-free even after 18 seconds, but the results worsened to about 10% after the third or fourth trial. This phenomenon is called proactive inhibition: new information is interfered with by old information, which prevents memorizing it. Wickens et al. refined the paradigm of Peterson & Peterson (1959) again by using items with different meanings: after the third or fourth trial they gave the subjects a completely different kind of code, not a 3-letter code but a 3-digit number. Surprisingly, the ability to memorize was back at a level of nearly 100%. Wickens et al. asked whether such a dramatic improvement always takes place on the fourth trial, no matter what kind of information is used; in other words, is the phenomenon more general? This is indeed the case, and it shows that a change in meaning influences memorization. This implies that we use semantic encoding as well, because purely acoustic encoding would never take account of the meaning. The subjects do not have to be aware of the change in meaning, so the change is not conscious (Wickens, 1972; Wickens, Born, & Allen, 1963). That the meaning is processed, and not only the acoustics, means that there must be a close connection between the short-term and the long-term memory, because recognizing meaning implies use of the long-term memory.
4.3.2.2 Storing
The process of storing into the short-term memory is like a bottleneck: we can store only very little information here. The maximal amount is around 7 items; it seems to vary between 5 and 9. This limit is very strict: it is not possible to store even one item more than usual, and if we need to store a new item, the first stored one will be displaced.
George Miller (1956) found this limit of the short-term memory, and it has been confirmed by many scientists. A list of random numbers or letters was shown to subjects, who were asked to read it and repeat it right away. All were able to repeat around seven numbers or letters; some persons could repeat only five, others as many as nine items.
5 8 3 4 9 5 0 8 6 5 4 8
G J R X O M A Y I P E S V
Fig. 2: Available Space in Short-Term Memory
It is easy to confirm for ourselves that there is a limit: we can store only around seven items, and more is not possible even if we work hard at it.
4.3.2.2.1 Processing in Short-term Memory
The limit seems to be too low; how, then, can we memorize longer information? There are two ways to store longer information: chunking and repeating.
Chunking
Chunking means pooling or merging several items into one larger item, e.g. merging the four digits 2 0 0 7 into the single item 2007, or 'i n f o r m a t i o n' (11 letters) into 'information' (1 item).
Chunking is not limited to numbers or letters; any pattern or information can be reorganized into a few items. These items do not necessarily have to have a meaning, but the new structure must be recognizable. Some examples:
19451939191819141871
1945 1939 1918 1914 1871
GMIBMUNESCOUSAWHO
GM IBM UNESCO USA WHO
THISISANORMALSENTENCE
THIS IS A NORMAL SENTENCE
Fig. 3: Chunking
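A minimal Python sketch (my own illustration) of what chunking does to the year list from Fig. 3:

```python
def chunk(digits, size):
    """Split a string of digits into chunks of the given size."""
    return [digits[i:i + size] for i in range(0, len(digits), size)]

years = "19451939191819141871"
print(chunk(years, 4))   # ['1945', '1939', '1918', '1914', '1871']
# 20 separate digits exceed the short-term limit, but 5 year-chunks
# fit easily, and each chunk is meaningful, which aids retention.
```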
The big advantage is that the short-term memory can store seven items, where each word here counts as one item, not just the letters of a word; thus we can store far more actual letters than only seven. It is just a matter of reorganizing: when we merge items together in a reasonable, traceable way, we can place more information here. Chunking can be nested, so we can even regard a whole sentence as one chunk; the length of a chunk's content is not important. Of course, this nesting is not infinite; there is still a genetic limit. Lehrl and Fischer (1988) argued in their information-psychological framework of mental power that the capacity C of the short-term memory can be calculated as the product of the individual mental speed Ck of information processing (in bit/s) and the duration time D (in s) of information:

C (bit) = Ck (bit/s) × D (s)
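As a worked example of this formula, with illustrative values of my own choosing (the thesis gives the formula but no numbers):

```python
mental_speed_Ck = 15.0  # bit/s, hypothetical individual processing speed
duration_D = 5.0        # s, hypothetical retention time of information

capacity_C = mental_speed_Ck * duration_D  # C = Ck * D
print(f"short-term capacity: {capacity_C} bit")  # 75.0 bit
```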
But Chi (1976) and Simon (1972) argued that the limit also depends on the age of the subject: an adult's limit lies beyond a child's, which might be caused by the more limited structure of the child's short-term memory. The real limit cannot be captured in a fixed formula; such attempts are only approximations. It probably depends on more parameters and is still unclear.
The important point for how much we can store in the short-term memory is how much information we are able to reorganize into chunks. An illustrative example is how many chess pieces a novice and a chess master can recall after seeing them for a few seconds: while a novice recalls around seven pieces, a chess master recalls around 25, because he reorganizes all the given pieces into traceable chunks. This shows that the ability to chunk can be trained (Simon, 1974; Simon & Chase, 1973), a fact that is applied to learn or memorize more efficiently.
Repeating
The question was how to enlarge the very limited capacity of the short-term memory. One way is to reorganize information into chunks; the second way is to repeat the information.
An insightful experiment shows clearly how long we are able to maintain information without repeating it. Subjects were shown three random consonants such as TBX and were told to repeat them after a signal; the time between presentation and recall varied from 3 to 18 seconds. To prevent rehearsal, the subjects were asked to count down aloud from a 3-digit number in steps of three until the signal sounded, which kept them occupied. Already after an interval of three seconds the consonants were repeated unreliably, and after 18 seconds almost no subject was able to repeat them (Peterson & Peterson, 1959). Nobody could recall them without repeating.

There are two ways to perform such repeating: maintenance rehearsal and elaborative rehearsal.
Maintenance Rehearsal
The easiest way to keep information in the short-term memory is simply to repeat it several times. Thus we prevent new inputs from interfering and pushing out the information we want. It is also a matter of concentration: what we repeat, we concentrate on, blocking other inputs from entering the short-term memory. One big disadvantage of maintenance rehearsal is that the information does not necessarily enter the long-term memory.
Elaborative Rehearsal
To ensure that information enters the long-term memory, elaborative rehearsal has to be done. Here the input is analyzed and bound to information already existing in the long-term memory. In practice, elaborative rehearsal takes place when we process the input in such a way that we combine it with other useful information. One example: a phone number (25365869) can be memorized in (at least) two ways. 1) We build a calculation (2536 + 3333 = 5869), combining the new information with a mathematical addition already known in the long-term memory. 2) We relate the number to the layout of a computer keypad: we type 2536, then shift the fingers up one row and type 5869, combining the new input with the known layout of a keyboard. Of course, there is no limit to constructing such paths, and every individual has his own preferences.
Often elaborative rehearsal is even combined with chunking; elaboration is then used to construct new chunks. Chase & Ericsson (1981) investigated this process by training a student to remember as many numbers as possible. After 2.5 years of training this student was able to recall number sequences 80 digits long. His method was simple: as a former long-distance runner he related the numbers to times for different runs. Thereby he used information already known from his long-term memory (elaborative rehearsal) and constructed new groups of numbers (chunking). The limit of his short-term memory remained the same: he was still not able to recall more than about seven items at a time.
4.3.2.3 Retrieval
Once information is stored, we need to retrieve it reliably. Sternberg (1966) tested subjects with a simple experiment and used the speed of retrieval to show how we retrieve information from the short-term memory. A short list of letters (fewer than six items) was presented to subjects; after a few seconds one letter was shown, and they had to decide as quickly as possible whether it was in the list. The only parameter was the length of the list, and only the response time was measured. Theoretically there are two kinds of algorithm for finding the information: parallel and serial search. Parallel search means that all items of the set are checked at the same time; serial search means that the items are checked one by one. There are, in turn, two serial (sub-)algorithms: the list can be checked completely (exhaustive scanning), or the scan stops as soon as the sought item is found (self-terminating scanning). The latter is more intelligent because in the best case it stops right after the first letter, giving a run time of O(1) in computer-science notation; in the worst case all items must be checked, and the run time equals that of exhaustive scanning, which must always go through the entire list, O(n). Interestingly, Sternberg found that we use the exhaustive scanning algorithm. This is not optimal, but we can afford the wasted time, since the list is never longer than 7 +/- 2 items. Both serial variants are sketched in code below.
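The two serial scanning variants translate directly into code. The following Python sketch is my own, with a hypothetical letter list; it contrasts the two strategies Sternberg distinguished.

```python
def self_terminating_scan(memory_set, probe):
    """Stops at the first match: O(1) best case, O(n) worst case."""
    for item in memory_set:
        if item == probe:
            return True
    return False

def exhaustive_scan(memory_set, probe):
    """Always inspects every item before answering: O(n) in all cases.
    According to Sternberg (1966), short-term retrieval behaves this way."""
    found = False
    for item in memory_set:
        if item == probe:
            found = True  # note: no early return; the scan continues
    return found

letters = ["K", "R", "B", "M"]
print(self_terminating_scan(letters, "R"))  # True, after 2 comparisons
print(exhaustive_scan(letters, "R"))        # True, after 4 comparisons
```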
4.3.3 Long-term Memory
The long-term memory is the part of memory where we keep all kinds of information acquired since we were born. Again, this part is treated in three sections: encoding, storing and retrieval.
4.3.3.1 Encoding
The long-term memory encodes differently from the short-term memory. The reason is that the long-term memory has different tasks and needs more possibilities to retrieve information. It is not just a big box where information is stored; it is able to solve problems, create new ideas and apply old experience to sort new input. Its encoding is done semantically.
4.3.3.1.1 Semantic Encoding
In contrast to the short-term memory, where information is stored sequentially, here all information is stored as in a library or a filing cabinet: in parallel and ordered according to its meaning. Often a piece of data has more than one meaning; this fact is represented in multiple indices, so there is more than one way to retrieve it. One piece of evidence shows that data is stored here according to its meaning: it is very likely that sentences are not stored and recalled literally, like a file on an audio tape, but rather by the meaning and content of the sentence (Bransford & Franks, 1971). Furthermore, a sentence that someone has not understood is not recallable as a contributing item for further synthesis.
There are several ways to organize such an index or ordering; here we see that chunking and elaborative rehearsal are preparations for creating such an index:

· Find an adequate headline for a topic. If we find, or are given, something that recapitulates a topic or context, we can handle it more easily and turn it into a sortable, comprehensible unit.

· Grouping. Given several items, it is easier to group them into several categories. We do this daily, but the art is to group them efficiently and intelligently; when a new item has to be added, we can either append it to an existing group or create a new one.
Basically, an item should be processed in such a way that encoding, storing and retrieving are matched, balanced and aligned as much as possible. The more data is processed accordingly, the more likely it is to be utilizable later (Zimbardo, 1995).
4.3.3.2 Storing
The long-term memory stores not only input from outside but also thoughts and creative ideas; thus it stores more than mere information from the outside world. All combinations and connections from mental production are held here as well. To fulfill this high requirement, the long-term memory is divided into several sections according to Tulving (1972): the procedural memory and the declarative memory, where the declarative memory is divided again into semantic and episodic memory.
4.3.3.2.1 Procedural Memory
In this part of memory, information is stored about how we make things and perform tasks, whether concerning cognition, movement or perception (Anderson, 1982; Tulving, 1983). Each acquired skill is basically a sequence of subsequent actions; such sequences are stored as a single unit and are gained through a lot of practice (Bandura, 1986). Because they require much practice, they are difficult to obtain but even more difficult to forget: once we can swim, we can swim forever, even after 20 years or longer. We may need a short time to get used to it again, but we still know how to do it. Once we have learned a skill, we no longer need to pay attention to how we do it and can concentrate on other things; indeed, paying attention to it again will even disturb the automated skill.
4.3.3.2.2 Declarative Memory
The declarative memory is divided into two parts: the semantic memory and the episodic memory. It contains facts that we have to recall consciously.
Semantic Memory
The semantic memory is like an encyclopedia or a library: all abstract or theoretical information is stored here. It holds not personal information or private remembrances but facts of general validity, such as history or the formulas of natural science.
Episodic Memory
This part is the opposite of the semantic memory: all personal remembrances are stored here, holding not just the fact but also the time and context of an event. Often, however, we remember a fact (which belongs to the semantic memory) via a personal event; for example, we remember a historical fact via the school lesson in which the teacher dealt with that subject. In such cases there are connections between the semantic and the episodic memory. It is very interesting that with each recall from the episodic memory we save even more in the memory: we save the remembrance of the remembrance as well. This additional saving contaminates the existing information and veils it; therefore, the more often we remember a personal event, the more unreal it becomes.
4.3.3.3 Retrieval
As with the short-term memory, the stored information must be retrieved in order to be used for further tasks; the question is just how we retrieve it. Many scientists believe that no information is lost or forgotten, but that we simply no longer know how to retrieve it (Linton, 1975). There are two kinds of remembering from the long-term memory: free recall and recognition.
4.3.3.3.1 Free Recall
A free recall from memory is a completely self-constructed report or repetition of stored information. We merely receive a command to reproduce it (from others or from ourselves) and start reporting what we know, until we can remember no more for lack of indices. The problem here is that we have no cues from outside; thus, if there is no path at all to the information, or we cannot find it, we will never be able to recall it. As an exam format it corresponds to writing one's own description or article from memory, without any aids such as dictionaries or encyclopedias.
4.3.3.3.2 Recognition
The main difference from free recall is that we first receive an input from outside and then only need to compare it with existing knowledge. If there is no such knowledge, we cannot recognize the input and get no result (not even a negative one); but if we know it, we can give an answer. As an exam format it corresponds to a multiple-choice task.
Recognition seems easier because we "just" compare a given input to a stored item, but sometimes, when the inputs are very similar, it is almost as difficult as free recall, because it approximates one. If we slow down and stagnate in an oral exam (which is a free recall) and the examiner gives us a cue so that we can continue fluently, the task changes from free recall to recognition; in other words, the cue provides us a path to the seemingly lost knowledge.
Chapter Summary
In this chapter the memory was introduced. After some explanations of basic terms, each store was presented in detail. The short-term memory is explained in most detail, for several reasons: first, it is the last step before the long-term memory and, second, it is the only memory that we experience consciously; it is, so to speak, the connection between the external and the internal world. Different types of encoding and the very limited storage in chunks were described, along with ways to work around this limit. For the long-term memory, the different kinds of encoding and retrieval were described.

The memory theory of Atkinson and Shiffrin is only one way to explain how our memory works. The problem is that we cannot look inside the memory and watch how it works while it is working; only experiments from the outside can give us an idea, and even their results have to be interpreted. Thus theories are made and accepted as long as they are plausible and can explain the memory in a reasonably simple way.

Modern technological devices are very helpful for looking inside a memory, but the memory or thinking itself is still concealed. We would need a machine that works the same way as the memory, one that could decode the memory's information before the memory itself decodes it. But to build such a machine we would have to know the code first, and if we already knew the code, we would no longer need to construct the machine. In this sense we may never be able to understand completely how our memory works.
5 Learning
The type of learning treated here is not a continuation of the last chapter. It is rather a description of how we learn behavior, habits and character. Thus, learning here means not an increase of knowledge but a stable change in behavior caused by the experience resulting from a certain action. The first and pioneering experiments were performed by Ivan Petrovich Pavlov.
5.1 Classical Conditioning
He studied the behavior of a dog that was presented food, and found the following generalized phenomenon. The first step (A) is an already known, natural unconditional reaction (UCR) of a subject to an unconditional stimulus (UCS); here, the dog salivating after being presented some food. Then a different stimulus, which has nothing to do with the first one or with the reaction, is presented at the same time (B); this happens several times. In our example a bell is rung while the food is presented. After a while only the second stimulus is presented, and the subject shows the same reaction as to the first stimulus (C). Thus the unconditional reaction has changed into a conditional reaction (CR), and the learned conditional stimulus (CS) is an actually neutral stimulus. The dog in Pavlov's experiment salivated at the ringing of the bell alone.
A: UCS → UCR
B: UCS + CS → UCR
C: CS → CR
Fig. 4: Classical Conditioning
The conditional reaction is not learned right after the first co-occurrence of both stimuli; the more often they are presented together, the more stable the conditioning becomes. This process is called acquisition. But the conditional reaction does not necessarily last forever: after a while the subject 'learns' that the reaction is in vain and begins to lose it. This phase is called extinction.
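As a toy illustration of acquisition and extinction (my own sketch, not drawn from Pavlov's data; the learning rate and trial counts are hypothetical), association strength can be modeled as rising with each paired presentation and decaying when the conditional stimulus appears alone:

```python
def run_trials(strength, paired, rate=0.3, n=10):
    """Update association strength over n trials."""
    for _ in range(n):
        if paired:   # acquisition: CS and UCS presented together
            strength += rate * (1.0 - strength)
        else:        # extinction: CS alone, the reaction is 'in vain'
            strength -= rate * strength
    return strength

s = run_trials(0.0, paired=True)      # acquisition phase
print(f"after acquisition: {s:.2f}")  # approaches 1.0
s = run_trials(s, paired=False)       # extinction phase
print(f"after extinction: {s:.2f}")   # decays back toward 0.0
```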
Another parameter of successful conditioning is the time between the two stimuli. The optimal interval is zero, but some tolerance is permitted, depending on the kind of stimuli: some may be delayed by up to a minute, others must occur within a second.
If a conditioned subject is suddenly presented with a completely different conditional stimulus, it shows no reaction. The more the new stimulus resembles the original conditional stimulus, the more likely the subject is to react: it generalizes the stimulus. In the other direction, the smaller the similarity, the more the subject discriminates and the lower the likelihood of a conditional reaction. Thus, generalization and discrimination are the two counterparts of how a subject reacts to a third stimulus.
5.2 Operant Conditioning
At nearly the same time as Pavlov, Thorndike (1898) experimented with cats. He paid attention not to reactions to given stimuli but to responses to the animal's own actions within a given environment: he presented no second stimulus, only the first one behind an obstacle, and observed what the cats would do. After some trial and error the cats overcame the obstacle. Skinner (1938) refined the experiments and was able to teach subjects almost any kind of behavior. The learning works by rewarding any desired action (the operant). Operant conditioning means changing the probability of a behavior in a desired direction.
5.2.1 Feedbacks
Such a reward, or more generally feedback, can be a reinforcement or even a punishment, and motivates the individual to repeat or to avoid the behavior.
5.2.1.1 Reinforcement
We learn what is rewarded. A reward reinforces the subject to do the action again, i.e. it increases the likelihood that the action will be repeated. Such a reinforcement need not be feedback from outside; already the knowledge that the behavior is right is a reinforcement. There are positive and negative reinforcers.
5.2.1.1.1 Positive Reinforcement
A positive reinforcer provides a pleasant atmosphere or a comfortable feeling. It therefore increases the motivation to do the very same thing again, or even to improve the action in order to receive a more positive feedback. Such a positive reinforcer from outside might be a smile, praise or a compliment. The action and the positive reinforcer together, as a pair, are called positive reinforcement.
5.2.1.1.2 Negative Reinforcement
The result of negative reinforcement is the same, namely the motivation to repeat the action, but it is based on the withdrawal of a displeasing stimulus or situation. For example, if a student does the homework with concentration, it is finished more quickly. Here the action is performing the assignments, the unpleasant situation is the feeling of not being free, and the reinforcer is the knowledge that the homework can be done more quickly; when he is finished, the situation changes to a more pleasant one.
5.2.1.2 Punishment
The opposite side of feedback is punishment. It is necessary when the individual does not act in the wanted way, so that the action is to be diminished. The negative feedback helps the action to disappear: the likelihood of a repetition decreases because the individual expects the negative consequence again. Here too there are two different kinds of punishment.
5.2.1.2.1 Positive Punishment
A positive punishment occurs when an unpleasant stimulus is given or an unhappy situation takes place. Such a positive punisher might be a scolding, a hit, being laughed at, or extra work after an unwanted action.
5.2.1.2.2 Negative Punishment
Similar to negative reinforcement, 'negative' here means the withdrawal of a stimulus; in this case it is the withdrawal of a pleasant stimulus, such as the opportunity to watch TV or to play games with friends. Punishment leads to avoiding a situation. It is also called aversive conditioning because the individual tries to escape something by way of an aversive stimulus. The four feedback types form a 2 × 2 scheme, summarized below.
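The scheme has two axes: whether a stimulus is added or withdrawn, and whether it is pleasant or unpleasant. A minimal lookup table (my own formatting of the taxonomy described above) makes this explicit:

```python
# (what happens to the stimulus, quality of the stimulus) -> feedback type
FEEDBACK = {
    ("add",    "pleasant"):   "positive reinforcement",  # e.g. praise
    ("remove", "unpleasant"): "negative reinforcement",  # e.g. chore ends sooner
    ("add",    "unpleasant"): "positive punishment",     # e.g. scolding
    ("remove", "pleasant"):   "negative punishment",     # e.g. no TV
}

print(FEEDBACK[("remove", "unpleasant")])  # negative reinforcement
```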
5.2.2 Differences between Classical and Operant Conditioning
There are some basic differences between classical and operant conditioning. In classical conditioning the individual is rather passive: he reacts to a stimulus and has no proactive initiative of his own. In operant conditioning he acts and tries to find a solution within a given environment, even interacting with it in order to find one. In classical conditioning there is always a pair of stimuli, one unconditional and one conditional, whereas in operant conditioning there is no emphasized stimulus at all; of course the environment provides stimuli, but no actively presented one.
In operant conditioning the motivation is caused by reinforcement or punishment; classical conditioning knows no such feedback. Operant conditioning is far more important for our daily life: we use it on others (such as our children) and others apply it to us (such as colleagues or the boss), though often not in an optimal way.
5.3 Cognitive Influences
In classical and operant conditioning, behavior, stimuli, interaction and consequences come to the fore. Every kind of learning is ultimately an act of cognition, because the observed stimuli or inputs must be remembered and interpreted, yet here the behavior is in the foreground (Bandura, 1976; Miller & Dollard, 1941). But there are kinds of learning that are based on cognitive activities; some of them are introduced here.
5.3.1 Social Learning
This kind of learning is very common, too. Social learning, or observational learning, starts with the simple observation of what others do: at first the individual does not perform anything himself but only watches passively. Compared with the conditioning types, no learning is to be expected; nevertheless, learning takes place.
Bandura (1965, 1969, 1986) showed in numerous experiments that even children learn by observing adults' behavior: they simply imitate the actions presented to them. In the most famous experiment, Bandura divided children into three groups and showed all of them a video clip. All children saw a man who punched and kicked a Bobo doll. In the version shown to the first group the man was later rewarded, in the second he was punished, and in the third he experienced no consequences. Right after the film the subjects were asked to play with such a doll. The children of the first and third groups copied the man; the children of the second group played the least aggressively.
Four interdependent processes are necessary to learn by observing.
·
First of all, the individual must pay attention to the event. Again, such attention depends on the personal structure of the individual: is it attractive enough? Does it have an adequate level to be concerned about? What is the individual's own state?
·
Second, the model behavior must be stored in the memory. Sometimes this period of time might be very long, until an adequate opportunity to perform is given.
·
Third, the motor action must be reproduced out of the memory. No prior practice is possible because it is the first time the individual acts in that way.
·
Fourth, the subject must be motivated enough to perform and to imitate the previously seen action.
Now, the question arises as to what can increase the motivation and how. But this question is not discussed here.
5.3.2 The Cognitive Map
Some situations cannot be explained by stimulus and behavior structures alone. Tolman & Honzik (1930) performed experiments with rats that had to find a way through a maze to their food. Several ways of different lengths were possible. After the easiest and shortest way was closed by imposing lock B, the rats went, without any hesitation or error, the longest way (way 3). They must have learned a cognitive map of the maze before. There was no particular stimulus that led them to the food; all ways looked the same. Thus, they were not able to distinguish the right way from the others by a unique stimulus.
Fig. 5: Cognitive Map
Not only rats but also human beings use cognitive processes to solve problems. Suppose there is a problem to solve that we cannot approach by touch or by trial and error. We can only analyze the problem in our brain and synthesize a solution virtually. This is a clearly cognitive process. This matter is introduced in the next chapter.
Even conditioning is a cognitive process. The subject must
·
process information in an active way
·
scan the environment for significant events
·
memorize many properties in the memory
·
organize and integrate the new information from outside in a beneficial way
·
decide what is to be done based on the new information combined with old information from the long-term memory
This changes the point of view from behavioral learning to cognitive learning. Behavioral learning emphasizes the physical stimulus and the environment; cognitive learning emphasizes the internal processes necessary to perform such a task (Spada, Ernst, & Ketterer, 1992). Such a change in researchers' thinking took place over the last 30 years and changed the views on learning, thinking and memorizing. Also, neurological processes became more important (Garcia & Garcia y Robertson, 1985; McGaugh, Weinberg, Lynch & Granger, 1985; Thompson, 1986).
Chapter Summary
This section about learning started with the two kinds of conditioning. At first sight they are behavioral, but they demand a tremendous cognitive effort to be performed. Actually, from the internal point of view they are cognitive tasks. In operant conditioning, the two kinds of feedback, reinforcement and punishment, are explained more deeply. The third kind of learning, social learning, which is a merely cognitive process as well, is introduced next. The memorizing of a cognitive map rounds this chapter off.
6 Thought
In our daily life we think nearly every second. Psychologists define a thought or thinking in a different, more precise way. Thinking is a conscious and controlled engagement in order to reach a certain goal. Thus, thinking can be applied to solving problems, finding a decision or a judgement, and reasoning. These three parts are discussed in this chapter. But first, the structures in which we think are illustrated. Which methods and concepts do we use to perform such tasks?
6.1 Cognitive Structures
During a day we receive numerous signals and inputs from outside. The brain would be unable to process such a tremendous amount of information if there were not special filters that allow only useful information to pass. But not only these first filters are necessary; we also need to catalog and organize the input into classes.
6.1.1 Conceptual Thinking
Each individual, not only among human beings, is different. Each tree and each animal differs from the next. Likewise, each situation differs from the former one. If we considered each case as a single and unique case, it would each time be a brand new situation and we could never use already learned solutions. That is not only impossible, it would even be dangerous: we could not escape a danger because each dangerous situation is also different. Therefore, we need a technique to organize and classify each new input and to add it in a reasonable way to the already existing information.
Even little babies are able to categorize objects of the environment. We have (an already built) class of, for example, 'games'. Each game is different, but we sort them into one class because there is a certain similarity. We have created a set of concepts into which we order all objects, ideas or situations.
But we build not only classes, we build a hierarchy as well. The concept 'game' we can organize in a tree with a root, vertices and leaves. Figure 6 shows such a hierarchical tree using the example of games.
Games
    Videogames
        Simulator
            Flightsim.
            Trainsim.
    Olympic Games
        Winter Games
            Biathlon
        Summer Games
            Run
            Shotput
    Parlor Games
        Monopoly
        Taboo
    Card Games
        Poker
        Skat
Fig. 6: A Hierarchy of the Term 'game'
If we receive an input that does not fit in any concept, we have two ways to organize it: First, we interpret the new item by sorting it into already existing classes. This process is called assimilation. Second, we enhance a very similar class or even create a new structure until the new item fits well. This way is called accommodation. We always apply both ways to integrate new information into old knowledge in order to use both in the future. The right balance between assimilation and accommodation is important.
Such a classification is also important for retrieval. It helps us to find an element much faster than if the information were unorganized. But not all elements in such a set are regarded as equal. We find a typical member of a class faster than an untypical one (Kintsch, 1981; Rosch, Mervis, Gray, Johnson & Boyes-Braem, 1976). Moreover, we find an element faster if we search in the right or next higher level of the category. We are able to answer faster when we are asked to find 'Poker' under 'card games' than under 'games' (Rosch, 1987).
6.1.2 Non-language Thinking
While we are thinking, we use our language to express what we want to say in order to reach our goal. A thought is composed of the language we use daily. But sometimes we can, or must, think without language (Steinberg, 2000), e.g. when we have to imagine an observed object or a spatial item. Those imageries cannot be represented by any verbal language.
6.1.2.1 Visual Imagery
Subjects were presented letters in random spatial orientations. The letters were to be identified as real images or mirror images as quickly as possible. The more a letter was rotated, the more time it took to decide whether it was a mirror image or not. This time delay depends on the rotation and is evidence that we mentally turn the image back upright in order to decide. Hence, this is evidence that we also think in images (Shepard & Cooper, 1982).
Fig. 7: Turned Letter
Fig. 8: Scan of an Image
In another experiment, subjects were shown a boat for a short moment. Shortly after seeing it, they were asked to focus on the motor of the boat. While the subjects paid attention to the motor, they were asked to decide whether there was a windshield. As the next question, they were asked whether there was an anchor. The anchor was at the other end of the boat, and it took them longer to decide whether there was an anchor than to decide whether there was a windshield; the reaction time was slightly longer. Obviously, they scanned the boat in their mind from one end to the other (Kosslyn, 1980). This is treated as further evidence for a mental image.
There are still visual problems that we cannot resolve pictorially. Suppose we have a sheet of paper: how thick is it after being folded once? How thick is it after two foldings? And now, how thick is it when it is folded 50 times? Our image of that paper lets us underestimate the total result.
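The doubling is easy to verify numerically; a one-line sketch in Python (the 0.1 mm starting thickness is an assumption):

    # Thickness doubles with every fold: 0.1 mm * 2^50.
    thickness_mm = 0.1 * 2 ** 50
    print(thickness_mm / 1e6, "km")   # about 1.1e8 km, roughly the
                                      # distance from the Earth to the Sun

Our mental image suggests a thick stack of paper; the actual number is astronomically larger.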
We use not only language and visual imageries but also smells, sounds, and other senses according to the input. A melody could never be represented in language or in an image. Those different kinds of representation enrich our cognitive actions, and often we understand faster and more clearly with the aid of diagrams or drawings. But mainly we express our thoughts, especially abstract ideas, in language.
6.1.2.2 Cognitive Maps
As already mentioned in section 5.3.2, we represent images and geographic items in cognitive maps. If we have to answer questions like 'What is the best route from our home to our work?', 'Which way can we go if this optimal way is blocked?', 'Which city is further north: New York or Shanghai?', or 'Which countries does the Mekong flow through?', then we must access our cognitive maps that represent our spatial environment. Orientation, path finding (or, more generally, geographic problems) and even walking in darkness depend on the existence of such maps in our memory (Hart & Moore, 1973; Thorndyke & Hayes-Roth, 1987). Yet a computer algorithm would solve such a geometric problem in a different, for us unusual, way. If a computer looks for a closest pair of points, it might use a geometric algorithm that ascertains it deterministically by using a sweep line (Hinrichs, Nievergelt, & Schorn, 1988). For us, such a mechanical way would be too unhandy (see below: Algorithm and Heuristics in 6.2.2).
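To illustrate how mechanical such a procedure is, here is a minimal sweep-line closest-pair sketch in Python (a generic version of the technique, not the specific algorithm of the cited paper):

    import math
    from bisect import bisect_left, insort

    def closest_pair(points):
        # Sort the points by x and sweep from left to right, keeping a
        # band of recent points ordered by y. Deterministic and exact.
        pts = sorted(points)
        best, best_pair = float("inf"), None
        band = []     # points inside the band, stored as (y, x), sorted
        left = 0      # index into pts of the oldest point in the band
        for x, y in pts:
            # Drop points whose x-distance already exceeds the best found.
            while left < len(pts) and pts[left][0] < x - best:
                band.remove((pts[left][1], pts[left][0]))
                left += 1
            # Only points within 'best' in the y-direction can improve.
            lo = bisect_left(band, (y - best, float("-inf")))
            hi = bisect_left(band, (y + best, float("inf")))
            for by, bx in band[lo:hi]:
                d = math.hypot(x - bx, y - by)
                if d < best:
                    best, best_pair = d, ((bx, by), (x, y))
            insort(band, (y, x))
        return best, best_pair

    print(closest_pair([(0, 0), (5, 4), (9, 1), (6, 3), (2, 7)]))

No person would ever locate the two closest houses on a mental map this way; we simply 'see' them.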
6.2 Problem Solving
In our daily life we meet problems that need to be solved. The problems discussed here are problems we can solve by goal-oriented and analytical thinking. Basically, a problem is a discrepancy between what we know and what we ought to know, or between a situation that we are in and a situation that we are supposed to be in. Problem solving is, therefore, finding a way to overcome those discrepancies. We are at the beginning at a starting point and have to reach the goal by allowed operations. In information technology a problem consists of an initial state, the allowed operations and the target state. These three components compose the problem space (Newell & Simon, 1972):
Suppose PS is the problem space, begin is the initial state, operation is the allowed and performed operation, and end is the target state:

PS(begin, operation, end): begin --operation--> end

The operation might be divided into multiple sub-operations and sub-targets:

PS(begin, operation, end): begin --sub-operation--> sub-target --sub-operation--> sub-target --sub-operation--> end
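Read this way, a problem is a search through states; a minimal sketch in Python of such a search (breadth-first; the state and operation names are made up purely for illustration):

    from collections import deque

    def solve(begin, end, operations):
        # Breadth-first search through the problem space:
        # 'operations' maps a state to the states reachable from it.
        frontier = deque([[begin]])
        seen = {begin}
        while frontier:
            path = frontier.popleft()
            if path[-1] == end:
                return path            # states from begin to end
            for nxt in operations.get(path[-1], []):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(path + [nxt])
        return None                    # no solution in this problem space

    # Hypothetical problem space with two sub-targets on the way:
    ops = {"begin": ["sub-target-1"], "sub-target-1": ["sub-target-2"],
           "sub-target-2": ["end"]}
    print(solve("begin", "end", ops))
    # ['begin', 'sub-target-1', 'sub-target-2', 'end']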
6.2.1 Well-/Ill-structured Problems
When we face problems, we come across well-structured and ill-structured problems (Simon, 1973). A well-structured problem has a clearly and precisely defined initial state, clearly defined operations and a distinct target state. Sometimes there is more than one way to change the state (Anderson, 1988). In an ill-structured problem, one of these components (or two, or all of them) is not clearly defined or is even ambiguous, so that the problem is based on uncertainties. Therefore, the solution of the problem is very likely wrong or unsure.
If the way to solve the problem is long and difficult, we might use sub-operations to approach a subordinate target; in other words, we achieve our goal step by step. Two tools are very helpful for understanding and finding a way out: analysis and synthesis. Analysis means that we break the complexity down into smaller parts and make it easier to handle. Synthesis is the complementary process: we join the pieces together into a complex whole. Naturally, the analysis takes place at the very beginning and the synthesis at the very end.
An example of a well-structured problem is any mathematical problem like: "What is 567,643 divided by 456?" The initial point is clearly defined (the numbers are distinct), the target state is clear (the result) and the operation as well (dividing).
An example of an ill-structured problem is (Gardner, 1978; Weisberg, 1995): "The Schneeville Wolverines won the championship basketball game 72:49, yet not one man on the team scored as much as a single point. How is that possible?"
6.2.2 Algorithm and Heuristics
So far the problem space has been illustrated. We now know what a problem is, but we still don't know how to solve it. How do we find the way to the goal? There are two different strategies we can apply to the problem: algorithms or heuristics.
6.2.2.1 Algorithm
An algorithm is a methodical procedure composed of single subsequent steps that is guaranteed to lead to a solution (if the problem has a solution). It has the following properties (Six, 1999):
·
The complete procedure must be described in a finite text. The elementary components are called steps.
·
Each step must be performable.
·
The procedure must end after finitely many steps.
·
Each step is strictly prescribed at any time. No coincidence is allowed (deterministic).
Here is an example: Suppose we must calculate the factorial of a number (n!). One algorithm could look like this:
1) Set a counter to 0 and a result variable to 1
2) Increase the counter by 1
3) Multiply the counter by the result variable and save the result in the result variable
4) Go back to step 2) until the counter is equal to or bigger than n
5) The result is stored in the result variable
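A direct translation of these five steps into Python (a minimal sketch, assuming n >= 0):

    def factorial(n):
        counter = 0            # step 1
        result = 1
        while True:
            counter += 1       # step 2
            result *= counter  # step 3
            if counter >= n:   # step 4
                break
        return result          # step 5

    print(factorial(5))        # 120

Every step is finite, performable and deterministic, exactly as the properties above demand.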
An algorithm looks inflexible and stiff but always finds the solution. However, some algorithms are very time-consuming. One example is the famous traveling salesman problem: a salesman wants to visit all locations once on a round trip and return to the start position, and he wants to find the best route. Thus, this problem is a geographic problem. The algorithm has to check all possible routes and decide at the end which is the ideal one. This is so time-consuming that it takes the number of permutations of n locations (O(n!)). The problem is NP-hard, and its decision version is even NP-complete (Garey & Johnson, 1979). It would take years to find a route once n gets bigger.
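A minimal brute-force sketch in Python that makes the O(n!) blow-up visible (the city coordinates are made up):

    import math
    from itertools import permutations

    cities = {"A": (0, 0), "B": (3, 0), "C": (3, 4), "D": (0, 4), "E": (1, 2)}

    def tour_length(order):
        # Round trip: the last leg returns to the start city.
        legs = zip(order, order[1:] + order[:1])
        return sum(math.dist(cities[a], cities[b]) for a, b in legs)

    start, rest = "A", [c for c in cities if c != "A"]
    # Check every permutation of the remaining cities: (n-1)! routes.
    best = min(((start,) + p for p in permutations(rest)), key=tour_length)
    print(best, round(tour_length(best), 2))

With 5 cities that is only 24 routes; with 20 cities it is already about 1.2 * 10^17 routes.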
6.2.2.2 Heuristics
Practically, such an algorithm is not usable. Therefore, an informal and shorter way is searched for. We use heuristics to make the way of problem solving shorter. Heuristics are not exactly defined procedures like algorithms; they are rules of thumb. Even coincidence is allowed. Thus, they are shorter and not deterministic, but a big disadvantage is that they do not necessarily find a solution. And if they find a solution, it might not be exact; it is just an approximation. We have to accept such an unsure solution if the costs for an algorithm are too high.
We human beings often use heuristics because it is not our nature to think so strictly and deterministically.
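For the traveling salesman example above, a common rule of thumb is the nearest-neighbor heuristic: always go to the closest unvisited city. A minimal sketch, reusing cities, math and tour_length from the previous sketch (fast, but the resulting tour is not guaranteed to be optimal):

    def nearest_neighbor(start="A"):
        tour, left = [start], set(cities) - {start}
        while left:
            # Greedy rule of thumb: jump to the closest unvisited city.
            nxt = min(left, key=lambda c: math.dist(cities[tour[-1]], cities[c]))
            tour.append(nxt)
            left.remove(nxt)
        return tuple(tour)

    t = nearest_neighbor()
    print(t, round(tour_length(t), 2))   # quick, possibly suboptimal

This is exactly the trade-off described above: the heuristic needs only n steps instead of n! but gives up the guarantee of the best route.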
6.2.2.2.1 Problems with Heuristics
A heuristic is a subjective rule of thumb that might lead to a solution. However, there are some traps that prevent us from finding a solution. The heuristic itself is not wrong, but we apply it in a wrong way.
Functional Fixedness
Often we trust our experience and memory and look for similar ways to solve a problem that helped in the past. But such an analogy might even prevent us from finding the right way because the set of possible ways is limited to the already-known ways; the range is narrower. We see an object only in the way we use it daily, but not necessarily in other, unusual, ways. A matchbox is not only a container for matches; we could even use it as a pedestal (Duncker, 1945). And a hammer is not only a tool to drive a nail into the wall but could also be used as the weight of a pendulum (Maier, 1930, 1931; Birch & Rabinowitz, 1951).
Cognitive Set
Such a fixation does not refer only to the item itself but also to the use of operations or combinations of operations. Based on our experience, we prefer certain operations over unusual operations. Luchins and Luchins (1959) demonstrated this in the well-known water-pitcher problem. We tend to solve each problem in the same way, and this way is not always the best; sometimes the problem is even unsolvable in the former ways.
People who apply for a job are often tested during an interview as to whether their thinking is fixed or not. In one example, the candidate was led into an empty room. Only a burning lamp was inside, but no switch. He was asked to turn the light off. One solution would have been to rotate the bulb, but it was hot. One candidate took his shoe and threw it toward the bulb in order to destroy it. This candidate got the job. He was the only one who used his shoe in a different way and not in its usual function.
Perceptual Set
Before we start to solve a problem, we observe and view the problem from one point of view in order to understand what the problem is. Often, we do not change this point of view later, although another point of view might be better. We perceive or encode the problem just from one side; maybe another or even the opposite side might lead to a quicker and smarter solution. This obstacle is called perceptual set.
Moreover, these sets or hindrances influence each other, and the definitions are not so strict. A perceptual set could also be defined as functional fixedness, namely when we 'perceive' an object only as an object having one function. Or it could be a cognitive set because we see only the well-used function. The borders are fluid, but the sets are distinguished in order to emphasize the kind of obstacle.
6.2.2.3 What Is Better?
For our daily use one question arises now: Should I take an algorithm or a heuristic? For simple math problems it is quite clear: we use an algorithm. But often algorithms are very time-consuming because they are complex. On the other side, a heuristic is not exact and might not even lead us to a solution. We must balance the pros and cons and decide whether we can accept a longer time to find the result (algorithm) or whether we can accept an unreliable result. While using a heuristic, it often helps to step away from the problem and start again the next day. Probably we approach the problem differently and find the solution sooner. Especially ill-defined problems cause the phenomenon that we find the solution suddenly after almost giving up. This phenomenon is called incubation. Such incubations occur quite often (Koestler, 1964).
6.3 Reasoning
In daily life we often must arrive at a conclusion. Thus, we have several facts that need to be combined into a convenient result. In order to achieve such a conclusion we may apply only logical operations. Such operations are always very abstract. Reasoning is very similar to the logical operations that computers do.
6.3.1 Deductive Reasoning
The first kind of reasoning is deductive reasoning. There is more than one fact, called premises, from which a logically correct conclusion follows. Quantifiers are always used: 'none', 'one', 'some', 'all'. Suppose A and B are true premises; then the following conclusion C is true as well:

All A are B
All B are C
All A are C

Every individual accepts this conclusion. But we often accept the following conclusion as well:

No A are B
All B are C
No A are C

People quickly accept such a conclusion even though it is not valid. This happens when the same quantifiers occur in the premises as in the conclusion; they provide a more 'comfortable' atmosphere. The appropriate term is the atmosphere hypothesis (Woodworth & Sells, 1935).
Another shortcut that people often take is to simplify the premises improperly. A sentence like "All B are C" is regarded as "B = C". But indeed, there still might be some elements in C that are not B ("All B are C" means "B is a subset of C"). Such simplifications lead to wrong conclusions, of course.
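The invalid syllogism above is easy to refute with concrete sets; a minimal sketch in Python (the example sets are made up):

    # Counterexample to: "No A are B, All B are C => No A are C".
    A = {1, 2}
    B = {3}
    C = {1, 2, 3}           # B is a subset of C, but C is bigger than B

    print(A.isdisjoint(B))  # True:  "No A are B" holds
    print(B <= C)           # True:  "All B are C" holds
    print(A.isdisjoint(C))  # False: "No A are C" does NOT follow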
In order to make sure that someone does not fall into such a trap, one tool is very useful: Venn diagrams. They can visually show the logical membership of elements in sets. In the latter case the Venn diagram would look like this:
[Figure: Venn diagrams. 'No A are B' is drawn as two disjoint circles A and B; 'All B are C' is drawn either as B = C or as B inside C. Combining the premises yields four possible configurations of A relative to C.]
Fig. 9: Venn Diagram
Again, people take heuristics here, because no one makes the effort to try every possible and allowed combination in order to find all conclusions.
6.3.2 Inductive Reasoning
Inductive reasoning leads not to a strictly logical conclusion but to a probably right conclusion. The experience of former conclusions or empirical values forms the conclusion. The conclusion might be wrong because some conditions on which it is based might change. Some examples are:
·
The sun has risen in the east every morning so far. => The sun will also rise in the east tomorrow.
·
All observed dogs are black. => All dogs are black.
One classical experiment was performed by Wason (1960). He showed subjects several number combinations and asked them to find out the rule by which the combinations were built (i.e., to construct the next combination based on the former combinations). As soon as they were sure they had found the rule, they were to tell it. Sometimes it would have been better to examine negative examples, but human beings seem to have big problems investigating them.
We seldom compute probabilities mathematically correctly. Suppose we play the heads-or-tails game, throw the coin, and it shows heads six times in a row; we usually think that the next try must be tails: 'I have now had heads six times => The next time it is tails.' From the mathematical point of view this conclusion is complete rubbish. The probability is still p = 0.5 per event; the probability of the whole run of six heads is 0.5^6 = 0.016, but even after 100 heads the probability of the next throw does not change (Kahneman & Tversky, 1972). This kind of phenomenon is called the Monte Carlo effect (Lück, 2002) because in the casino game the player often thinks the same way: after several red throws the ball must land in a black field. But the problem is that the ball does not memorize the last fields.
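A small simulation makes the independence visible; a minimal sketch in Python (the sample size is arbitrary):

    import random

    # After six heads in a row, is tails more likely on the next throw?
    next_flips = []
    while len(next_flips) < 2_000:
        run = [random.choice("HT") for _ in range(7)]
        if run[:6] == ["H"] * 6:       # keep only runs of six heads
            next_flips.append(run[6])
    print(next_flips.count("T") / len(next_flips))   # about 0.5, not more

The seventh throw comes out tails about half the time, no matter what preceded it.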
6.4 Decision Making
(Serious) decision making is a process that sometimes includes problem solving and mostly includes reasoning. We use reasoning for the analytical process first; when we have found a conclusion, we can build on it and come to a decision. Therefore, problem solving, reasoning, and decision making somehow belong together. The last of the three is introduced here.
Like problem solving, decision making occurs daily. In everyday life we must decide less important and more important things. But unfortunately, we do not always come to an optimal decision. More often, we base our decisions on our personality rather than on neutral facts, on biases, and on heuristics. That means, if we regard a decision as a problem that has to be solved, we use heuristics in order to find the best way.
6.4.1 Limits
Simon (1957) noted that our thinking is rational but limited by certain phenomena. The term for that limitation is bounded rationality. The most common limit is satisficing.
6.4.1.1 Satisficing
Often, a problem or a decision has not only several but many possible options. If satisficing occurs, we do not consider all possibilities but stop at the first possibility that seems to be good enough and drop the remaining ones. All our requirements are fulfilled (otherwise we would not stop here); there still might be a possibility that is even better, though. Someone can only find the optimal solution if he really considers all possibilities with the same emphasis.
But often such an accurate analysis is not possible, for several reasons:
·
Not all possibilities occur at the same time (e.g. the choice of a marriage partner), so a really good decision would take an endless time, and the first possibilities would already be invalid.
·
Even if all options were accessible at the same time, it would take too much time to examine every detail. But often we must decide very quickly. Therefore, there is a lack of information that leads to (conscious) uncertainty, but we must come to an end. Thus, we decide on what seems to be the first yet best fitting possibility.
6.4.1.2 Elimination by Aspects
If we indeed see all options but know well that we do not have time for such a detailed analysis, we eliminate options by setting a minimum criterion for one aspect. Then another aspect is chosen, a minimum criterion is created, and those options that do not fulfill it are weeded out again. These elimination steps might be continued until one option remains. The limitation here is that we focus on one criterion at a time but dismiss other factors that might be as important as that criterion. They are sometimes wiped away unconsciously, and the result is inexact.
But in practice we do not go on until the very last possibility; we eliminate until some options are left and then we look more carefully for an option (Payne, 1976).
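A minimal sketch of elimination by aspects in Python (the options, aspects and thresholds are invented for illustration):

    # Each option is scored on several aspects (made-up data).
    cars = {
        "car A": {"price": 9, "safety": 5, "comfort": 8},
        "car B": {"price": 7, "safety": 8, "comfort": 6},
        "car C": {"price": 8, "safety": 7, "comfort": 4},
    }
    # Aspects examined in order, each with a minimum criterion.
    aspects = [("price", 7), ("safety", 6), ("comfort", 5)]

    remaining = dict(cars)
    for aspect, minimum in aspects:
        # Weed out every option that fails the current criterion.
        remaining = {n: s for n, s in remaining.items() if s[aspect] >= minimum}
        if len(remaining) <= 1:
            break
    print(list(remaining))   # ['car B']

Note how 'car A', strong on price and comfort, is discarded the moment one criterion (safety) fails: exactly the inexactness described above.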
6.4.2 Heuristics
Even for decision making we use mental shortcuts that make it easier. As already mentioned above, heuristics are not exact and do not guarantee the right solution. Furthermore, sometimes they even distort rational decisions.
6.4.2.1 Availability
According to Tversky & Kahneman (1973), people judge depending on which information is easier to access. Facts that occur more often, or, more exactly, that we perceive more frequently, are more available in our mind. The perception is influenced by how often something is presented and how unusual, distinctive, and outstanding it is. The decision is then based on this easy availability.
6.4.2.2 Representativeness
Some facts or situations have a similar appearance. We assume a commonality between those facts and decide likewise because the second fact represents the first fact. But each situation or fact also occurs with a certain probability, and this probability is not included in the calculation. Thus, the result can neglect relevant base rates (Tversky & Kahneman, 1973).
6.4.2.3 Overconfidence
Occasionally, we have too much self-confidence and overestimate our own judgment. For example, people were given 200 questions with two choices each. They were to choose the right answer and to write down how likely it was that their answer was correct. The results proved that people are highly overconfident: when someone was 100% confident, the actual rate of correct answers was only 80% (Fischhoff, Slovic, & Lichtenstein, 1977).
Chapter Summary
As the first topic, a classification of stored information is introduced. It helps us to retrieve old and classify new information. Then some types of non-language thinking are presented: imagery and cognitive maps are used according to the represented object. It does not make sense, if it is even possible, to represent such information in a verbal code. The first main part is problem solving. Problems are divided into ill- and well-defined problems. Here algorithms and heuristics come into play, and some problems arising with heuristics are mentioned; while we use a heuristic we are limited by human properties. Second, reasoning is illustrated, and two kinds of reasoning are explained: deductive and inductive. The third part consists of decision making. Again, some heuristics are introduced here.
In general, heuristics occur throughout all types of cognitive tasks. They are useful, and often the result is right. We cannot dismiss such heuristics altogether but must be aware of the inexact result. A heuristic is just a rule of thumb; the result might be right, but it is not guaranteed. Only algorithms let us be sure to have a right answer, but they are often not handy for daily use. Thus, the use of a heuristic is a good alternative if an inexact result is tolerable.
7 Language
Language is, like any other code, a medium to transport and archive information. The code is arbitrary; the sender and recipient understand it. Language is a complex structure that consists of several levels, but the transportation of language also has multiple levels. Such levels are mentioned only superficially here; the question here is how we learn and comprehend a language, that is, how we employ our cognitive abilities in order to transport and recognize information.
7.1 General Properties of Languages
Pure information is codeless. But no one can process pure information. When information is transferred, received or sent, a code must be applied. Such a code is either an agreement specialized for machines, like computers, or a human-understandable language. The code must be understandable by the sender and the recipient, and the form of transportation must be reliable and adequate. Again, for each part there are multiple levels. The levels for language are introduced later. Here, just one example for transportation and storage is given.
If information is transferred or stored, we need a data medium. One simple example is paper and ink. On the paper letters are written. Such letters are already a code, and this code is already arbitrary: they do not necessarily have to be Roman letters. They do not even have to be human-readable letters; a letter could also be a magnetic or optical sequence (like on a hard drive or a CD). In general, a code is predefined, and the data medium must be changeable while receiving the information and must then keep it durable until the information has arrived or is no longer needed.
For the human language we can use acoustic or written words for transportation; for storage we can use only written words; and as a data medium we have at our disposal the air, paper or a screen, and humanly receivable letters and words.
But for a transfer of information there must be a sender and a receiver. We receive a language by ear or eye in an acoustic or optical form and understand it. This ability is called verbal comprehension. If we send out words, sentences and paragraphs, then we are producing language. This counterpart is called verbal fluency.
Psycholinguists (Brown, 1965; Clark & Clark, 1977; Glucksberg & Danks, 1975) have figured out that all languages have the following six properties. A human language has even more properties than a merely technical code:
1. Communicative: A language allows us to communicate with each other (= transfer)
2. Arbitrary: A language is not necessarily based on logic. It is a predefined code
3. Productive: A language is based on rules, yet new combinations of words can be produced
4. Dynamic: A language changes constantly
5. Meaningfully structured: A language has a structure. Not all combinations are meaningful, but different arrangements create different meanings
6. Multiple levels: A language consists of several levels
The first level is the phonemes. With our mouth we can produce around 100 phones, but each language uses only a subset. Those phones consist of vowels and consonants. If we combine one or more phonemes, we build words. Of course, not all combinations are used; only a predefined subset is allowed. Again, if we put words together according to arbitrary rules, then we can construct well-structured sentences. This implies that not all combinations are allowed. These rules are based on a structure, the syntax. If such a sentence is understandable, then we have encoded our information successfully and it has an intended meaning, the semantics.
7.2 Acquiring Language
The acquisition of a language is a cognitive wonder. No baby learns a language by memorizing endless vocabulary lists and studying grammar rules; a baby just learns a language by listening. Already an unborn baby is sensitive to sounds from the environment, and a newborn can respond to the mother's voice immediately after birth (DeCasper & Fifer, 1980; DeCasper & Spence, 1986). Babies move rhythmically according to speech (Field, 1978; Martin, 1981; Schaffer, 1977; Snow, 1977; Stern, 1977). For language investigations, not the facial expressions are important but the sounds besides crying. Interestingly, until four months all infants coo in all possible phones around the world, no matter which country they come from or which language the parents speak; even deaf children coo in the same way (Stoel-Gammon & Otomo, 1986). As they pass to the babbling age, deaf children show no progress, but hearing infants slowly adapt to the phones of their own language and lose the ability to produce foreign phones. This proves that infants perceive, store and cognitively process language sounds from the people around them. By age 18 months, children own a vocabulary of up to 100 words (Siegler, 1991), depending on their cognitive capabilities (Menn & Stoel-Gammon, 2001). They produce two-syllable words, then build one-word or two-word sentences, but never three-word sentences (Zimbardo, 1995); rather, the sentences get longer. There are three semantic realms that occur at that age in all languages: the mover, movable items and localizations (Braine, 1976). The children are concerned with their own visible environment. By around age 2.5 years the children use words that refer to their feelings and wishes; they have enhanced their treasury of words with words describing mental states (Shatz, Wellman, & Silber, 1983).
Beside all these visible advances, it is still not known how children acquire the mother language. The grammar is not learned just by imitation; children do not just parrot their parents' words and sentences (Szagun, 1980). Rather, they learn new grammar rules (Lenneberg, 1962) and are constantly concerned with analyzing structures and synthesizing new sentences (Moskowitz, 1978; Carey, 1978). The corrections from the parents refer rather to the validity of what is said than to correct grammar (Brown & Hanlon, 1970).
But social interaction is very important. A child cannot learn a language just by listening to TV or radio; he must interact with living persons (Moskowitz, 1978). By age 10, a child's skill is basically as well developed as an adult's; the sentences just acquire new words and more complexity.
If a child grows up in an environment of more than one language, some other interesting questions arise, because the languages use different lexicons and different patterns of syntax. A language is embedded in the local environment; thus, a child who grows up bilingual is embedded in different cultures and environments. Many linguists believe that bilingual speakers have differing cognitive systems and, furthermore, that a language influences the thinking. Thus, such bilinguals seem to think differently when using the first or the second language. But the question is still not clearly answered.
Another fact is quite clear: the later we start learning a language, the less we attain the level of native speakers. Children who started learning a second language before the age of 7 later spoke as well as native speakers. With increasing starting age, the attained skill declined rapidly and continuously (Johnson & Newport, 1989; Bialystock & Hakuta, 1999; Hakuta, 2001). It seems that during early childhood there is a sensitive period for acquiring a language, or even for acquiring multiple languages at the same time.
7.3 Comprehension
If someone wants to say something to us, he uses a language. The goal of the sender is that the receiver understands, as the end product, his proposition. While we process the input, as voice or as written word, we work through several levels until we really understand the intention or proposition of what the sender is saying to us. This is independent of how we receive the message.
After receiving and perceiving the message, we recognize the words or word strings. Now we must convert them into a set of propositions. It is not enough just to identify the meaning of the words; it is not just a word-for-word translation. Other important information, such as grammar and the sequences in the sentence, is needed as well in order to prevent ambiguity and misunderstandings.
7.3.1 Ambiguity and Expectation
Very often a word has different meanings. Not only this; sometimes a word consisting of the very same letters is used as a noun or as a verb. Thus, there is more than one option for using a word. Despite the several meanings, we can comprehend a language pretty fast. We can listen to (and understand) up to 250 words per minute, and college students can read around 280 words per minute, or more than four words per second. Even though most words of a sentence have multiple meanings, we can understand quickly. Later, it is shown how we can do that at that speed. Here are some ambiguities:
·
Word: One word has several meanings: e.g. to consider has at least 10 different meanings.
·
Word form: A word can be a verb, an adjective or a noun: e.g. fat has two word forms, noun and adjective; haunt can be a noun or a verb.
·
Lexical: The exact meaning of a word cannot be determined because the context is not given: The teacher strikes idle kids: 'strikes' can occur as either a verb meaning to hit or a noun meaning a refusal to work. Meanwhile, 'idle' can occur as either a verb or an adjective.
·
Syntax: Exactly the same sentence structure has two meanings: visiting relatives can be a nuisance. The first meaning is that relatives who visit bother us, and the second is that we don't like to visit them.
No language is without ambiguity. Linguists have known this problem for a long time. But computer programmers who write language programs also meet this problem. It is not simple to determine whether a word is a verb or not. The simplest way is to check all possibilities, but a sentence with, say, 5 ambiguous words lets the speed bog down drastically. Obviously, it is not a problem for us to resolve such ambiguities quickly. Now the question arises how we can handle such a rich amount of options in such a short time.
Language comprehension seems to be just a bottom-up process: we receive the message and process it level by level until we understand the meaning. Each word is checked concerning its possible meanings, and the syntax is identified. Word sets are grouped into grammatical categories, and at the last level the proposition is found.
But we use top-down processes as well. A top-down process is controlled by expectations of how a sentence could continue before we have received the rest of it. Therefore, we guess what follows the already received first part of a sentence while the sentence is still being spoken. This implies, of course, that we have already processed the first part bottom-up; otherwise we could not guess the following part from the meaning. Indeed, the top-down process accelerates comprehension considerably. Even programmers use this technique to shorten the run time of such programs.
7.3.2 Lexical Lookup, Syntactic and Semantic Processing
For each level we have to process the incoming messages. As the decoding is a bottom-up process, we process the lexical level first, then the syntactic and finally the semantic level in order to attain full comprehension. But we cannot divide the two latter ones sharply: to recognize the semantics we need to involve the syntax as well.
7.3.2.1 Lexical and Syntactic Processing
As we grow up, we build our own dictionary. Each word has an entry there. It contains the sound of the word, the spelling, and the meanings of the word. If a word is recognized and processed, the corresponding entry is activated and all meanings are scanned through. If a word has more than one meaning, all entries are accessed accordingly: the more meanings a word has, the more entries are accessed. This happens unconsciously, even if we are not aware that a word is ambiguous. Swinney (1979) found out that the activation of all meanings does not depend on the context and is therefore a pure bottom-up process. This touching of all entries lasts a very short time.
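A minimal sketch of such a lexicon lookup in Python (the entries are invented miniature examples, not a real mental lexicon):

    # A toy lexicon: every entry lists sound, spelling and all meanings.
    lexicon = {
        "haunt": {"sound": "/ho:nt/", "meanings": ["to visit as a ghost (verb)",
                                                   "a place often visited (noun)"]},
        "fat":   {"sound": "/fat/",   "meanings": ["adipose tissue (noun)",
                                                   "plump (adjective)"]},
    }

    def activate(word):
        # Bottom-up lookup: every stored meaning is touched,
        # regardless of context (cf. Swinney, 1979).
        entry = lexicon.get(word.lower())
        return entry["meanings"] if entry else []

    print(activate("haunt"))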
But a simple lexical lookup is not sufficient to understand an underlying meaning. Furthermore, the sentence must be parsed as well. A mere lexical lookup does not detect any sequences, which are even more important for acquiring the intended meaning. Some experiments let us believe that a subject very likely splits a sentence into parts (Fodor & Bever, 1965; Garrett, Bever, & Fodor, 1966).
7.3.2.2 Syntactic and Semantic Heuristics
In order to extract the semantic information, we would be supposed to use a systematic approach, an algorithm. The problem is that the amount of possible combinations of sentence structures is endless, and there is no algorithm that is able to process infinite combinations. There might also be a structure that is unaccounted for within the algorithm. Again, we seem to use heuristics, rules of thumb, that shorten the finding of the meaning, or even make it possible to find it at all. Some of them are:
·
Word order strategy: If there is the sequence Noun-Verb-Noun, then the first word is supposed to be a noun that is the actor who does what is described by the second word, the verb. The second noun is supposed to be the object of that action. However, this strategy fits only simple and active sentences. Thus, if there is a passive sentence, this strategy must be rewritten; before applying the strategy it must first be decided whether the sentence is passive or active. (A small sketch of this strategy follows this list.)
·
Cue word strategy: After a relative pronoun a new clause begins. However, if the pronoun is omitted, this strategy fails. Often 'that' is omitted; in this case this strategy does not divide a sentence into its parts.
·
Semantic strategy: According to the verb, a certain action or case is expected. Each verb lets us expect a particular group or given thing as object. E.g., the verb 'go' leads us to expect a location as object, 'paint' a visible solid item. This strategy targets the semantics and is useful to anticipate the rest of the sentence.
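The word order strategy above can be written down as a tiny procedure; a minimal sketch in Python (the part-of-speech tags are assumed to be given, which is exactly the hard part in practice):

    def word_order_strategy(tagged):
        # Heuristic: in a Noun-Verb-Noun sequence, read the first noun
        # as actor, the verb as action, the second noun as object.
        # Fails for passive sentences ("The kids were struck ...").
        tags = [t for _, t in tagged]
        if tags == ["N", "V", "N"]:
            (actor, _), (action, _), (obj, _) = tagged
            return {"actor": actor, "action": action, "object": obj}
        return None   # the rule of thumb does not apply

    print(word_order_strategy([("teacher", "N"), ("strikes", "V"), ("kids", "N")]))
    # {'actor': 'teacher', 'action': 'strikes', 'object': 'kids'}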
Chapter summary
This chapter deals with human language. First, language is generalized to a code to transport or store information. Second, the amazing step of acquiring a language is explained more deeply. The next step is to explain how we use language in daily life, from the internal point of view: how do we comprehend a spoken or written word? Several levels are illustrated, such as lexical lookup and syntactic and semantic processing. Again, some heuristics are listed here.
8 Intelligence
The last chapter of this thesis deals with intelligence. It is already difficult to define intelligence; there are even controversial approaches. A very general definition might be that intelligence is just the ability or capability to do such things as mentioned above, that is, problem solving, reasoning, decision making, achieving a mental goal, or acquiring a language. The definition differs not only by point of view but additionally by culture, or, in other words, by the external view, because intelligence might not be understood in isolation. Eastern cultures pay more attention to modesty than to self-portrayal; thus, someone who does not show his intelligence counts as more intelligent. African people even see intelligence in a completely different way than Western people: someone regarded as intelligent there would not be regarded as intelligent in Western countries, and vice versa, because each culture emphasizes things that would be unimportant on the other side (Cole, Gay, Glick, & Sharp, 1971; Gladwin, 1970). This fact has influence on the methods of measuring intelligence mentioned later.
Intelligence can be improved and reinforced during the life span. There is not a fixed and unchangeable amount of intelligence that we get at birth (Dettermann & Sternberg, 1982; Perkins & Grotzer, 1997; Ramey, 1994; R. J. Sternberg, Okagaki, & Jackson, 1990). Furthermore, just as we train and practice other skills, intelligence can be improved by daily practice in our daily tasks, by thinking systematically and carefully. Already at school a teacher can increase students' intelligence (Ceci, 1996). If students are encouraged to use their brain in a productive way, they get a higher capability to apply their intelligence (R. J. Sternberg, Torff, & Grigorenko, 1998).
8.1 Measuring
Intelligence can only be measured by tests whose scores are compared to other scores; it always has a reference to other subjects. There is no absolute gauge like kilograms or meters. But in order to measure intelligence reliably, such tests must have some characteristics:
·
Validity: Such a test must measure what intelligence really is, not other things. If a subject is asked to connect two adjacent points on a paper by a line, that has nothing to do with intelligence. Each task or question is supposed to check something that refers to intelligence.
·
Reliability: The test must provide a reliable score, independent of when and where it is held and who is tested. If someone is tested several times within a short time frame, the score is supposed to be the same.
·
Standardization: All subjects ought to have the same conditions. This goal cannot be achieved completely because external circumstances cannot be excluded completely (how awake is the subject, how hot is the weather, is there any disquiet, and so on), but the conditions of the test can be equalized: the same length of the test, the same time, the same font of the text, the same simple words, the same time of day, etc.
8.1.1 Intelligence Quotient
Very early on, intelligence was measured by paying attention only to the real or chronological age, but this was far too inaccurate. Binet (1857-1911) introduced another aspect: he compared subjects by their test scores regardless of how (chronologically) old they were. If a 12-year-old boy has the same score as the average of the 14-year-old children, he has a mental age of 14. But this method alone is still not good enough because it is useless when chronological ages need to be compared. The next step was a formula that is still in use today. The mental age is divided by the chronological age and multiplied by 100:

IQ = (mental age / chronological age) × 100
Thus, if the 12-year-old boy has a mental age of 14, he has an IQ of

IQ = (14 / 12) × 100 ≈ 117

The formula works well until an age of about 16 because intelligence increases yearly. After 16, intelligence does not increase as much, so the difference between mental and chronological age matters less and the IQ drifts toward 100. Suppose that boy, with his age difference of 2 years, is now 30; then his IQ approximates 100:

IQ = (32 / 30) × 100 ≈ 107
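The two worked examples, as a quick check (a trivial sketch in Python):

    def ratio_iq(mental_age, chronological_age):
        # Classic ratio IQ: mental age over chronological age, times 100.
        return mental_age / chronological_age * 100

    print(round(ratio_iq(14, 12)))  # 117
    print(round(ratio_iq(32, 30)))  # 107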
Therefore, the basis of the mental age was abandoned. Now the basis is the huge number of test results, which form a normal distribution curve. Most people are around the middle, whereas only very few are extremely intelligent or mentally retarded. 68% of all people have a score of 85-115, and around 95% have an IQ of 70 to 130.
Fig. 10: Normal Distribution of IQ
8.1.2 Stanford-Binet Scale
There are several tests that help to determine someone's IQ. The Stanford-Binet test was invented by Binet and Simon and was improved by Terman and Merrill from Stanford University (hence the first part of the name). This test measures cognitive ability by asking series of questions graded like a scale: the more questions are answered, the higher the graded scale. The current version is number 5. The questions are about:
·
Verbal Reasoning: Vocabulary: a given word has to be explained; Comprehension: a coherence within the daily life of a given fact is to be shown; Absurdities: there is one odd feature of a given situation, which has to be identified; Relations: among four items one does not fit; it has to be selected and explained why it does not fit.
·
Quantitative Reasoning: Number series: a series of numbers has to be completed or continued; Quantitative: a simple arithmetical problem introduced as text has to be solved.
·
Figural Reasoning: a set of geometric pieces is shown; they have to be combined into a demanded form.
·
Short-term memory: Sentences: a just received sentence has to be repeated; Digits: after listening to a series of digits, it must be repeated either forward or backward; Objects: a set of items is shown in a particular sequence; they have to be pointed out in the same sequence as before.
Each of these factors is tested in two separate domains, verbal and nonverbal, in order to accurately assess individuals with deafness, limited language skills, or communication disorders. The Stanford-Binet test is regarded as valid and highly reliable; therefore, it has been widely used for years.
8.1.3 Wechsler Scale
This test was developed by David Wechsler and revised in 1995. It includes three tests: one for adults, the Wechsler Adult Intelligence Scale-Revised (WAIS-R); one for children, the Wechsler Intelligence Scale for Children, 3rd edition (WISC-III); and one for preschool children, the Wechsler Preschool and Primary Scale of Intelligence (WPPSI).
The test provides three scores:
·
Verbal Scale: Information: general knowledge acquired from culture and current events; Comprehension: social questions have to be answered; Arithmetic: a simple arithmetical problem introduced as text has to be solved; Similarities: it has to be explained how two items or ideas are akin; Vocabulary: a word must be defined; Digit span: after listening to a series of digits, it must be repeated either forward or backward; Letter-Number Sequencing: a test of attention/concentration and short-term memory by ordering given mixed letters and digits.
·
Performance Scale: Picture Completion: tests how quickly visual details are perceived; Digit Symbol: de- and encoding; Block Design: given blocks must be arranged into a demanded pattern; Matrix Reasoning: a task from the problem-solving and inductive-reasoning realm; Picture Completion: each given picture lacks one detail, and the subject must find out what it is; Symbol Search: hereby the speed of visual perception is tested; Object Assembly: a set of geometric pieces is shown, and they have to be combined into a demanded form.
·
Overall Scale: this scale is based on the combined scores as a composite.
The main difference between Stanford-Binet and Wechsler is that Stanford-Binet is based on the mental age and is therefore only appropriate for children, whereas Wechsler has a separate part for children and even for preschool children.
8.2 Types of Intelligence
8.2.1 Gardner's Theory
It is difficult to say what exactly intelligence is; several approaches have been made. Gardner (1983, 1993, 1999) had the idea of a multiple theory. He regards intelligence as a system of eight intelligences. Each realm is distinct and independent, but they can interact with each other. Everybody has a different amount of each realm, and thus everybody has a unique cognitive profile. Someone who is very intelligent in music might not be good at languages. Thus, the balance over all realms results in the overall intelligence performance.
1. Linguistic intelligence: When someone has the ability to be good at languages, he has a high linguistic intelligence. Such people learn foreign languages very easily, as they have high verbal memory and recall and an ability to understand and deploy syntax and structure. This intelligence also has its realm in words, spoken or written. They are typically good at reading, writing, telling stories, and memorizing words and dates. They tend to learn best by reading, taking notes, and listening to lectures, and via discussion and debate. They are also frequently skilled at explaining, teaching, and oration or persuasive speaking. Those who have a high level of linguistic intelligence are mostly writers, lawyers, philosophers, politicians, and teachers.
2. Logical-mathematical intelligence: People who can and like to think logically and abstractly have a high level of this type of intelligence. It includes inductive and deductive reasoning, and numbers. It covers not only mathematics, chess, computer programming, and other logical or numerical activities but also abstract pattern recognition, scientific thinking and investigation, and the ability to perform complex calculations. People who own a lot of such intelligence are scientists, mathematicians, doctors, and economists.
3. Spatial intelligence: People with strong visual-spatial intelligence are typically very good at visualizing and mentally manipulating objects. They have a strong visual memory and are often artistically inclined. Those with visual-spatial intelligence also generally have a very good sense of direction. There is a high correlation between spatial and mathematical abilities when solving a mathematical problem involves visually manipulating symbols, as in vector calculus or geometry. And there is a strong correlation with bodily-kinesthetic intelligence when someone has good hand-eye coordination, here more in a technical sense (someone can show his spatial idea with his hand or on paper). People with such an intelligence are artists, engineers, and architects.
4. Bodily-kinesthetic intelligence: If someone can control his body and his movements, he has a strong bodily-kinesthetic intelligence. Such people are generally adept at physical activities such as sports or dance and often prefer activities which utilize movement. They may enjoy acting or performing, and in general they are good at building and making things. They often learn best by physically doing something rather than by reading or hearing about it. Usually athletes, martial artists, dancers, actors, comedians, builders, and artisans are equipped with bodily-kinesthetic intelligence.
5. Musical intelligence: Those who have a high level of musical-rhythmic intelligence display a greater sensitivity to sounds, rhythms, tones, and music. They normally have good pitch, an exact feeling for rhythm, and are able to sing, play musical instruments, and compose music. Typical jobs for high musical skills are musicians, singers, conductors, and composers.
6. Interpersonal intelligence:
People in this category are usually
extroverts, can easily interact
with others and are characterized by
their sensitivity to others' moods,
feelings,
temperaments, and motivations and
their ability to cooperate in order
to work as part of a
group. They communicate effectively
and empathize easily with others,
and may be either
leaders or followers. People like
that make a career as politicians,
managers, social workers,
and diplomats.
7. Intrapersonal intelligence: This is just the opposite of interpersonal intelligence: such people are introverted and prefer to work alone. They have introspective and self-reflective capacities. They are usually highly self-aware and capable of understanding their own emotions, goals, and motivations. They often have an affinity for thought-based pursuits such as philosophy. There is often a high level of perfectionism associated with this intelligence. Philosophers, psychologists, theologians, and writers have a high level of intrapersonal intelligence.
8. Naturalistic intelligence: Originally, Gardner developed just seven intelligences; this one is the newest, was not part of his original theory of multiple intelligences, and is not as widely accepted as the original seven. People who have a greater sensitivity to nature and their place within it, the ability to nurture and grow things, and greater ease in caring for, taming, and interacting with animals have a high level of this intelligence. They are also good at recognizing and classifying different species. Thus, they are mostly scientists, naturalists, conservationists, gardeners, and farmers.
Gardner's theory is not widely accepted among psychologists. The most common criticisms argue that it is based on Gardner's own intuition rather than empirical data and that the intelligences are just other names for talents or personality types. Despite these criticisms, the theory has been accepted among educators over the past twenty years.
8.2.2 The Triarchic Theory
Sternberg prefers a triarchic theory of human intelligence (R. J. Sternberg, 1985). His emphasis lies on how the types of intelligence work together. The three main aspects are: analytical, creative, and practical. The analytical part is used while comparing, contrasting, or evaluating ideas. The creative part is used while creating new ideas or new situations or inventing new things. The last one, practical, is activated when someone adapts to everyday demands and applies new ideas in a practical way.
Again, as before, a subject might be good at only one part; what counts is the sum of all parts together. His intelligence performance consists of the total of all three parts.
Chapter Summary
This chapter showed how intelligence is measured. After outlining the preconditions a valid test must fulfill, some formulas were introduced, for example how the IQ is computed. Two main scales, the Stanford-Binet and the Wechsler scales, were presented. Then two approaches to intelligence as a system were illustrated: Gardner's theory and the triarchic theory. All these theories, and many more, show that it is not easy, perhaps even impossible, to define precisely what intelligence is. They are all merely attempts and approximations. But it is still very important to measure intelligence, whether as a prediction or to ascertain the actual state.
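As a reminder of the classical ratio formula mentioned above, the computation can be sketched in a few lines of Python. This is a minimal illustration, assuming the simple ratio IQ of the early Stanford-Binet scale; modern tests use deviation IQs instead.

    # Minimal sketch of the classical ratio IQ (early Stanford-Binet).
    # Modern scales use deviation IQs rather than this simple ratio.
    def ratio_iq(mental_age: float, chronological_age: float) -> float:
        return mental_age / chronological_age * 100

    # A child of eight performing at the level of a ten-year-old scores 125.
    print(ratio_iq(mental_age=10, chronological_age=8))  # 125.0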
Conclusion
Even though all subbranches of cognitive psychology are still under investigation, we can already say that wherever we look, the processes are optimized and goal-oriented. One example is the use of cognitive maps: we encode and store geographic information in such a way that we are later able to find our way, which is sometimes even important for survival. How this encoding and storing actually works we still do not know, but we know that it is reliable and effective.
The process of automatization allows us to do several things at the same time, even though we cannot consciously attend to multiple tasks at once. Once a task has become automatic, we are free again for new tasks. This system provides us with a very flexible, individual, and constructive way of learning more and more. Here it comes to the surface that attention and learning work together. But different parts of thought are also able to work together: often we need problem solving as part of decision making, and vice versa.
If several processes are involved, they fit and work together via well-defined 'interfaces'. Thus, the three parts of memory, sensory, short-term, and long-term, work well together, and sensation and perception form a basis for attention and consciousness. It seems as if everything is planned and constructed to fulfill the cognitive tasks of daily life.
But there are still limitations; we are not perfect. While using heuristics we run the risk of obtaining only approximate solutions, and when solving problems we experience limits such as functional fixedness, cognitive sets, or perceptual sets. On the other hand, we are able to solve ill-defined problems. If we were just machines, we could never do that: feeding a computer an ill-defined problem leads to a crash or to no solution. But we are able to reorganize an ill-defined problem until it is solved (if a solution exists). The only resource we need more of is time.
For the author, the most interesting fact is that we as human beings use heuristics in so many realms of cognitive processing. In this respect, our processing of information resembles information processing in computers and robots. Furthermore, it is interesting that the human brain does not use the theoretically most efficient algorithm (see 4.3.2.3), but, on the other hand, does not waste efficiency either; that is, our brain does not slow down processes unnecessarily.
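This contrast between an exact algorithm and a heuristic can be sketched with a small Python example on a toy route-finding task. The city coordinates and function names below are illustrative assumptions, not taken from the thesis: the exhaustive search is guaranteed to find the shortest round trip but its cost grows factorially with the number of places, while the nearest-neighbour heuristic is fast but only approximately optimal.

    # Exact algorithm vs. heuristic on a toy round-trip problem.
    # The city data below is a hypothetical illustration.
    from itertools import permutations
    from math import dist

    cities = {"A": (0, 0), "B": (3, 4), "C": (6, 0), "D": (3, 1)}

    def route_length(route):
        # Total length of the round trip back to the starting point.
        points = [cities[c] for c in route] + [cities[route[0]]]
        return sum(dist(p, q) for p, q in zip(points, points[1:]))

    def exact_algorithm(start="A"):
        # Exhaustive search: guaranteed optimal, factorial cost.
        rest = [c for c in cities if c != start]
        return min(((start,) + p for p in permutations(rest)), key=route_length)

    def nearest_neighbour_heuristic(start="A"):
        # Always move to the closest unvisited city: fast, but not
        # guaranteed to find the optimal route.
        route, unvisited = [start], set(cities) - {start}
        while unvisited:
            nxt = min(unvisited, key=lambda c: dist(cities[route[-1]], cities[c]))
            route.append(nxt)
            unvisited.remove(nxt)
        return tuple(route)

    print(exact_algorithm(), route_length(exact_algorithm()))
    print(nearest_neighbour_heuristic(), route_length(nearest_neighbour_heuristic()))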
Cognitive psychology is a fascinating branch and very helpful for daily life. Indeed, cognitive psychology is involved in every part of our lives. There is still a lot to investigate, because the human brain is very complex and far from transparent.
References
Anderson, J. R. (1982). Acquisition
of cognitive skills. Psychological
Review, 89, 369-406.
Anderson, J. R. (1988). Kognitive
Psychologie. Spektrum: Heidelberg,
Germany.
Atkinson, R. C., & Shiffrin, R. M. (1968). Human memory: A proposed system and its control processes. In K. W. Spence & J. T. Spence (Eds.), The psychology of learning and motivation: Advances in research and theory (Vol. 2). Academic Press: New York.
Averbach, E. & Coriell, A. S.
(1961). Short-term memory in vision.
Bell System Technical Journal,
40, 309-328.
Baddeley, A. D. & Hitch, G. (1974).
Working memory. In G. H. Bower
(Ed.). The psychology of
learning and motivation (Vol. 8). Academic Press: New York.
Bandura, A. (1965). Influence of
models' reinforcement contingencies
on the acquisition of
imitative responses. Journal of
Personality and Social Psychology,
1, 589-595.
Bandura, A. (1969). Principles of
behavior modification. Holt,
Rinehart and Winston: New York.
Bandura, A. (1976). Lernen am
Modell: Ansätze zu einer
sozial-kognitiven Lerntheorie.
Klett:
Stuttgart.
Bandura, A. (1986). Social
foundations of thought and action: A
social cognitive theory. Prentice-
Hall: Englewood Cliffs, NJ.
Bellugi, U., Klima, E. S., & Siple,
P. A. (1975). Remembering in signs.
Cognition, 3, 93-125.
Bialystok, E. & Hakuta, K. (1999).
Confounded age: Linguistic and
cognitive factors in age
differences in second language
acquisition. In D. Birdsong (Ed.)
Second language acquisition
and the critical period hypothesis
(pp. 161-181). Erlbaum: Mahwah, NJ.
Birch, H. G. & Rabinowitz, H. S.
(1951). The negative effect of
previous experience on productive
thinking. Journal of Experimental
Psychology, 41, 121-125.
Braine, M. D. S. (1976). Children's
first word combinations. Monographs
of the Society for
Research in Child Development, 41,
(Serial No. 164).
Bransford, J. D. & Franks, J. J.
(1971). The abstraction of
linguistic ideas. Cognitive
Psychology, 2,
331-350.
Brown, R. (1965). Social psychology.
Free Press: New York.
Brown, R. & Hanlon, C. (1970).
Derivational complexity and order of
acquisition. In J. R. Hayes
(Ed.) Cognition and the development
of language. Wiley: New York.
Carey, S. (1978). The child as word
learner. In M. Halle, J. Bresnan &
G. A. Miller (Eds.)
Linguistic theory and psychological
reality. MIT Press: Cambridge, MA.
Ceci, S. J. (1996). On intelligence
... more or less (Expanded ed.).
Harvard University Press:
Cambridge, MA.
Chase, W. G. & Ericsson, K. A.
(1981). Skilled memory. In J. R.
Anderson (Ed.) Cognitive skills
and their acquisition.
Erlbaum: Hillsdale, NJ.
Cherry, E. C. (1953). Some
experiments on the recognition of
speech with one and two ears.
Journal of the Acoustical Society of
America, 25, 975-979.
Chi, M. T. H. (1976). Short-term
memory limitations in children:
capacity or processing deficits?
Memory & Cognition, 4, 559-572.
Clark, H. H. & Clark, E. V. (1977).
Psychology and language: An
introduction to psycholinguistics.
Harcourt Brace Jovanovich: New York.
Cole, M., Gay, J., Glick, J., &
Sharp, D. W. (1971). The cultural
context of learning and thinking.
Basic Books: New York.
Conrad, R. (1964). Acoustic
confusions in immediate memory.
British Journal of Psychology, 55,
75-84.
Crowder, R. G. & Morton, J. (1969). Precategorical acoustic storage (PAS). Perception & Psychophysics, 8, 815-820.
Darwin, C. J., Turvey, M. T., &
Crowder, R. G. (1972). The auditory
analogue of the Sperling
partial report procedure: Evidence
for brief auditory stage. Cognitive
Psychology, 3, 255-267.
DeCasper, A. J. & Fifer, W. P.
(1980). Of human bonding: Newborns
prefer their mothers' voices.
Science, 208, 1174-1176.
DeCasper, A. J. & Spence M. J.
(1986). Prenatal maternal speech
influences newborns' perception
of speech sounds. Infant Behavior
and Development, 9, 133-150.
Detterman, D. K. & Sternberg, R. J.
(1982). How and how much can
intelligence be increased?
Ablex: Norwood, NJ.
Duncker, K. (1945). On problem
solving. Psychological Monographs, 58 (5, Whole No. 270).
Field, T. (1978). Interaction
behaviors of primary versus
secondary caregiver fathers.
Developmental Psychology, 14,
183-184.
Fischhoff, B., Slovic, P., &
Lichtenstein, S. (1977). Knowing
with certainty: The appropriateness
of
extreme confidence. Journal of
Experimental Psychology: Human
Perception and Performance,
3, 552-564.
Fodor, J. A. & Bever, T. G. (1965).
The psychological reality of
linguistic segments. Journal of
Verbal Learning and Verbal Behavior,
4, 414-420.
Frumkin, B. & Anisfeld, M. (1977).
Semantic and surface codes in the
memory of deaf children.
Cognitive Psychology, 9, 475-493.
Garcia, J. & Garcia y Robertson, R.
(1985). Evolution of learning
mechanisms. In B. L. Hammonds
(Ed.) Psychology and learning: 1984
Master Lectures. American
Psychological Association:
Washington, DC.
Gardner, H. (1983). Frames of mind:
The theory of multiple
intelligences. Basic Books: New
York.
Gardner, H. (1993). Multiple
intelligences: The theory in
practice. Basic Books: New York.
Gardner, H. (1999). Are there
additional intelligences? The case
for naturalist, spiritual, and
existential intelligences. In J.
Kane (Ed.) Education, information,
and transformation. Prentice-
Hall: Englewood Cliffs, NJ.
Gardner, M. (1978). Aha! Insight.
Freeman: New York.
Garrett, M. F., Bever, T. G., &
Fodor, J. A. (1966). The active use
of grammar in speech perception.
Perception & Psychophysics, 1,
30-32.
Gladwin, T. (1970). East is a big
bird. Belknap: Cambridge, MA.
Glucksberg, S., & Danks, J. H.
(1975). Experimental
psycholinguistics. Erlbaum: Hillsdale,
NJ.
Hakuta, K. (2001). A critical period
for second language acquisition? In
D. B. Bailey, Jr., J. T.
Bruer, F. J. Symons, & J. W.
Lichtman (Eds.) Critical thinking
about critical periods (pp. 193-
205). Paul H. Brookes: Baltimore.
Hart, R. A. & Moore, G. T. (1973).
The development of spatial
cognition: A review. In R. M.
Downs & D. Stea (Eds.) Image and
environment. Aldine: Chicago, IL.
Hinrichs, K., Nievergelt J., &
Schorn, P. (1988). Plane-sweep solves the closest pair problem elegantly. Information Processing Letters, 26(5), 255-261.
Johnson, J. S. & Newport, E. L.
(1989). Critical period effects in
second language learning: The
influence of maturational state on
the acquisition of English as a
second language. Cognitive
Psychology, 21, 60-99.
Kahneman, D. & Tversky, A. (1972).
Subjective probability: a judgment
of representativeness.
Cognitive Psychology, 3, 430-454.
Kahneman, D. (1970). Remarks on attention control. Acta Psychologica, 33 (Attention and Performance III).
Keppel, G., & Underwood, B. J.
(1962). Proactive inhibition in
short-term retention of single
items.
Journal of Verbal Learning and
Verbal Behavior, 1, 153-161.
Kihlstrom, J. F. (1984). Conscious,
subconscious, unconscious: A
cognitive view. In K. S. Bowers
& D. Meichenbaum (Eds.) The
unconscious: Reconsidered. Wiley:
New York.
Kintsch, W. (1981). Semantic memory:
A tutorial. In R. S. Nickerson (Ed.)
Attention and
performance (Vol. 8). Erlbaum:
Hillsdale, NJ.
Koestler, A. (1964). The act of
creation. Macmillan: New York.
Kosslyn, S. M. (1980). Image and
mind. Harvard University Press:
Cambridge, MA.
Langer, E. (1978). Rethinking the
role of thought in social
interaction. In J. H. Harvey, W. J.
Ickes,
& R. F. Kidd (Eds.) New directions in attribution research (Vol. 2). Erlbaum: Hillsdale, NJ.
Lehrl, S. & Fischer, B. (1988). The
basic parameters of human
information processing: their role
in
the determination of intelligence.
Personality and Individual Differences, 9, 883-896.
Lenneberg, E. H. (1962).
Understanding language without
ability to speak: A case report.
Journal of
Abnormal and Social Psychology, 65,
415-419.
Linton, M. (1975). Memory for
real-world events. In D. Norman & D.
Rumelhart (Eds.)
Explorations in cognition. Freeman:
San Francisco, CA.
Luchins, A. S. & Luchins, E. H.
(1959). Rigidity of behavior: A
variational approach to the effects
of Einstellung. University of Oregon
Books: Eugene.
Lück, H. E. (2002). Einführung in
die Psychologie. FernUniversität
Hagen: Hagen, Germany.
Maier, N. R. F. (1930). Reasoning in
humans: I. On direction. Journal of
Comparative Psychology,
10, 115-143.
Maier, N. R. F. (1931). Reasoning in
humans: II. The solution of a
problem and its appearance in
consciousness. Journal of
Comparative and Physiological
Psychology, 12, 181-194.
Marcel, A. J. (1983). Conscious and
unconscious perception: An approach
to the relation between
phenomenal experience and perceptual
processes. Cognitive Psychology, 15,
238-300.
Martin, J. A. (1981). A longitudinal
study of the consequences of early
mother-infant interaction: A
microanalytic approach. Monographs
of the Society for Research in Child
Development, 46(3, Serial No. 190).
McGaugh, J. L., Weinberger, N. M.,
Lynch, G., & Granger, R. H. (1985).
Neural mechanisms of
learning and memory: Cells, systems
and computations. Naval Research
Reviews, 37, 15-29.
Menn, L. & Stoel-Gammon, C. (2001).
Phonological development: Learning
sounds and sound
patterns. In J. Berko Gleason (Ed.)
The development of language. Allyn & Bacon: Boston.
Garey, M. R. & Johnson, D. S. (1979). Computers and intractability: A guide to the theory of NP-completeness. W. H. Freeman: New York.
Miller, G. A. (1956). The magical
number seven, plus or minus two:
Some limits on our capacity
for processing information.
Psychological Review, 63, 81-97.
Miller, N. E. & Dollard, J. (1941).
Social learning and imitation. Yale
University Press: New
Haven.
Moskowitz, B. A. (1978). The
acquisition of language. Scientific
American, 239 (11), 92-108.
Navon, D. & Gopher, D. (1979). On
the economy of the human processing
system. Psychological
Review, 86, 214-255.
Neisser, U. (1967). Cognitive
Psychology. Appleton-Century-Crofts:
New York.
Newell, A. & Simon, H. A. (1972).
Human problem solving.
Prentice-Hall: Englewood Cliffs, NJ.
Norman, D. A. (1968). Toward a
theory of memory and attention.
Psychological Review, 75, 522-
536.
Payne, J. (1976). Task complexity
and contingent processing in
decision making: An information
search and protocol analysis.
Organizational Behavior and Human
Performance, 16, 366-387.
Perkins, D. N. & Grotzer, T. A.
(1997). Teaching intelligence.
American Psychologist, 52, 1125-
1133.
Peterson, L. R. & Peterson, M. J.
(1959). Short-term retention of
individual verbal items. Journal of
Experimental Psychology, 58,
193-198.
Posner, J. K. (1982). The
development of mathematical
knowledge in two West African
societies.
Child Development, 53, 200-208.
Ramey, C. T. (1994). Abecedarian
Project. In R. J. Sternberg (Ed.)
Encyclopedia of human
intelligence (Vol. 1, pp. 1-3).
Macmillan: New York.
Rosch, E. H. (1978). Principles of
categorization. In E. Rosch & B. B.
Lloyd (Eds.) Cognition and
categorization. Erlbaum: Hillsdale,
NJ.
Rosch, E. H., Mervis, C. B., Gray,
W. D., Johnson, D. M., & Boyes-Braem,
P. (1976). Basic objects
in natural categories. Cognitive
Psychology, 8, 382-439.
Schaffer, H. R. (1977). Mothering.
Harvard University Press: Cambridge,
MA.
Schneider, W. & Shiffrin, R. M.
(1977). Controlled and automatic
information processing: I.
Detection, search and attention.
Psychological Review, 84, 1-66.
Shatz, M., Wellman, H. M., & Silber,
S. (1983). The acquisition of mental
verbs: A systematic
investigation of the first reference
to mental state. Cognition, 14,
301-321.
Shepard, R. N. & Cooper, L. A.
(1982). Mental images and their
transformations. MIT Press:
Cambridge, MA.
Siegler, R. S. (1991). Children's
thinking. Prentice-Hall: Englewood Cliffs, NJ.
Simon, H. A. (1973). The structure
of ill-structured problems.
Artificial Intelligence, 4, 181-202.
Simon, H. A. (1957). Administrative behavior (2nd ed.). Littlefield, Adams: Totowa, NJ.
Simon, H. A. (1972). On the
development of the processor. In S.
Farnham-Diggory (Ed.),
Information processing in children.
Academic Press: New York.
Simon, H. A. (1974). How big is a
chunk? Science, 183, 482-488.
Simon, H. A. & Chase, W. G. (1973).
Skill in chess. American Scientist,
61, 394-403.
Six, H.-W. (1999). Konzepte
imperativer Programmierung.
FernUniversität Hagen: Hagen,
Germany.
Snow, C. E. (1977). The development
of conversation between mothers and
babies. Journal of Child Language, 4, 1-22.
Spada, H., Ernst, A. M., & Ketterer,
W. (1992). Klassische und operante
Konditionierung. In H.
Spada Allgemeine Psychologie. Huber:
Bern, Schweiz.
Spelke, E., Hirst, W., & Neisser, U.
(1976). Skills of divided attention.
Cognition, 4, 215-230.
Sperling, G. (1960). The information
available in brief visual
presentations. Psychological
Monographs, 74, 1-29.
Sternberg, R. J. (2000). Pathways to psychology. Wadsworth/Thomson Learning: Stamford, CT.
Stern, D. (1977). The first
Relationship: Mother and Infant.
Harvard University Press: Cambridge,
MA.
Sternberg, R. J. (1985). Beyond IQ:
A triarchic theory of human
intelligence. Cambridge University
Press: New York.
Sternberg, R. J., Okagaki, L., &
Jackson, A. (1990). Practical
intelligence for success in school.
Educational leadership, 48, 35-39.
Sternberg, R. J., Torff, B., &
Grigorenko, E. L. (1998). Teaching
for successful intelligence raises
school achievement. Phi Delta Kappan,
79, 667-669.
Sternberg, S. (1966). High-speed
scanning in human memory. Science,
153, 652-654.
Stoel-Gammon, C. & Otomo, K. (1986).
Babbling development of
hearing-impaired and normal
hearing subjects. Journal of Speech
and Hearing Disorders, 51, 33-41.
Swinney, D. A. (1979). Lexical
access during sentence
comprehension: (Re)consideration of
context effects. Journal of Verbal
Learning and Verbal Behavior, 18,
645-660.
Szagun, G. (1980). Sprachentwicklung
beim Kind. Urban & Schwarzenberg:
München, Germany.
Thompson, R. F. (1986). The
neurobiology of learning and memory.
Science, 233, 941-944.
Thorndike, E. L. (1898). Animal
Intelligence: An experimental study
of the associative processes in
animals. Psychological Monographs, 2
(8).
Thorndyke, P. W. & Hayes-Roth, B.
(1978). Spatial knowledge
acquisition from maps and
navigation. Paper presented at the
Psychonomic Society Meeting, San
Antonio, TX.
Tolman, E. C. & Honzik, C. H. (1930). 'Insight' in rats. University
of California Publications in
Psychology, 4, 215-232.
Tulving, E. (1972). Episodic and
semantic memory. In E. Tulving & W.
Donaldson (Eds.)
Organization of memory. Academic
Press: New York.
Tulving, E. (1983). Elements of
episodic memory. Clarendon Press:
Oxford.
Tversky, A. & Kahneman, D. (1973).
Availability: A heuristic for
judging frequency and
probability. Cognitive Psychology,
5, 207-232.
Wason, P. C. (1960). On the failure
to eliminate hypotheses in a
conceptual task. Quarterly Journal of Experimental Psychology, 12, 129-140.
Weisberg, R. W. (1995). Prolegomena to theories of insight in problem
solving: A taxonomy of
problems. In R. J. Sternberg & J. E.
Davidson (Eds.), The nature of
insight (pp. 157-196). MIT
Press: Cambridge, MA.
Wickens, D. D. (1972).
Characteristics of word encoding. In
A. W. Melton and E. Martin (Eds.).
Short-term memory. V. H. Winston:
New York.
Wickens, D. D., Born, D. G., &
Allen, C. K. (1963). Proactive
inhibition and item similarity in
short-term memory. Journal of Verbal
Learning and Verbal Behavior, 2,
440-445.
Woodworth, R. S. & Sells, S. B.
(1935). An atmosphere effect in
syllogistic reasoning. Journal of
Experimental Psychology, 18, 451-460.
Zimbardo, P. (1995). Psychologie.
Springer: Heidelberg, Germany.