Modularity and beyond

The debate

How does information in the outside world get stored in the child’s mind? The debate is over whether the infant mind is modular or not. What capabilities are innate?

Is development domain-specific? The debate is between constructivism (e.g. Piaget) and nativism (e.g. Chomsky). Chomsky argued that just because something is happening in one domain, it is not necessarily happening in another. Skinner and Piaget believed more in domain-general processes: they assumed that development is uniform across cognitive domains, and that the intrinsic properties of the mind are homogeneous and undifferentiated.

Is development domain-specific? Innate domain specificity is a nativist concept. The content of knowledge differs between species. Are these cross-species differences only relevant to adult cognition, or do humans differ from other species from birth?

The modularity hypothesis

The modularity hypothesis is a set of hypotheses demarcating areas of investigation. It defines a research agenda. There is no a priori reason why the mind has to be that way. Indeed there is much cognitive theorising that the mind is much more unitary and seamless than the modularity hypothesis would have it.

a)      Which processes are mediated by modular architecture, and which are not?

b)      Where’s the boundary between modular input/output systems and isotropic central processes?

Domains

A domain is the set of representations sustaining a specific area of knowledge. Domains include physics, vision, language, and number.

Is development domain-specific?

Evidence for domain specificity includes: children may be spared or impaired in a single domain, e.g. theory of mind in autism, or savants with exceptional ability in one domain. Looking at adult brain damage we also find domain-specific impairment, e.g. losing the ability to recognise faces or spoken language; spoken language is a mini-domain within the domain of language.

 

Does this prove nativists right? No. The more hard-wired the brain, the less we’re able to learn. Cognitive development is flexible.

Fodor

Fodor suggested that the mind is made up of genetically specified, independently functioning, special-purpose “modules” or input systems. Fodor proposed a 3-tiered system.

  1. The transducer level: information from the external environment passes through a system of sensory transducers, e.g. the ears and eyes, which transform the data into formats that each special-purpose input system can process.
  2. The input-system modules, which translate the transduced data into domain-specific representations. Input processes are vertical: the input systems can’t access the higher levels.
  3. Higher-level cognitive functions, which process the modules’ outputs. Input systems don’t talk to each other, so if they’re domain-specific we also need domain-general systems: how else would we link shape and colour? Fodor proposed specialised computational systems, hypothesis generators, e.g. for colour perception and shape in vision. Central systems look at the input systems and memory and develop the best hypotheses. These processes are usually unconscious and we know little about their operation. The central processes are horizontal; as they’re not informationally encapsulated, they can’t be modular.

The central processes feed information back down to the input systems, which become output systems. Lower animals don’t have these higher cognitive processes, so inputs map to outputs without higher processes mediating.
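A minimal sketch of this three-tier flow, in Python (the class names and the hypothesis-combination rule are illustrative assumptions of mine, not Fodor’s formalism); note how encapsulation appears as modules that accept only their own input format and never consult central beliefs:

```python
class Transducer:
    """Tier 1: turns raw environmental energy into a module-readable format."""
    def __init__(self, modality):
        self.modality = modality

    def transduce(self, raw_signal):
        return {"modality": self.modality, "data": raw_signal}


class InputModule:
    """Tier 2: domain-specific, mandatory, encapsulated. It sees only its own
    input format, never the central system's beliefs or memory."""
    def __init__(self, domain):
        self.domain = domain

    def process(self, transduced):
        if transduced["modality"] != self.domain:
            return None  # accepts only its own type of input
        return f"shallow {self.domain} percept of {transduced['data']!r}"


class CentralSystem:
    """Tier 3: isotropic; can combine the outputs of *all* modules with memory."""
    def __init__(self):
        self.memory = []

    def fix_belief(self, percepts):
        evidence = [p for p in percepts if p is not None] + self.memory
        best_hypothesis = " & ".join(evidence)  # stand-in for hypothesis choice
        self.memory.append(best_hypothesis)
        return best_hypothesis


# Flow is strictly bottom-up into the centre; no arrow runs back into a module.
ear = Transducer("speech")
modules = [InputModule("speech"), InputModule("vision")]
central = CentralSystem()
percepts = [m.process(ear.transduce("hello")) for m in modules]
print(central.fix_belief(percepts))
```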

Input system modules

A module is an information-processing unit that encapsulates knowledge in one domain and the computations on it. A module is an input unit into which domain-specific representations can be fed. Modules are bottom-up, data-driven, fast, autonomous, mandatory, automatic. A special-purpose input system accepts only a specific type of input, so information from the eyes, ears, … is transduced into suitable inputs, e.g. particular parts of the brain light up in motion processing. Input systems are inflexible: the unintelligent part of the brain.

Input systems are insensitive to central cognitive goals, i.e. informationally encapsulated or cognitively impenetrable, e.g. the Müller-Lyer illusion (figure omitted): the lines still look unequal even when we know they are the same length.

Major criteria

1)     Domain specificity

2)     Mandatoriness

3)     Informational encapsulation

4)     Speed

Arguments for domain-specificity of input systems

There are 6 traditional sensory/perceptual modes, including one for language. However, each mode may have a number of different, highly specialised input systems. Many perceptual systems seem domain-specific, e.g. speech vs. non-speech perception: different parts of the brain are activated. Perceptual analysis of speech is distinctive in that it operates only on acoustic signals that are taken to be utterances. The structure of the system that recognises sentences is responsive to universal properties of language; therefore, it is only in domains that exhibit these properties that the system will work. Language universals distinguish utterances from non-utterances, switching on our domain-specific module specialised for this. The module becomes specialised with experience, e.g. eventually you can only perceive the phonemes of your own language. In the case of vision there are mechanisms for colour perception, analysis of shape, and analysis of 3-D spatial relations; this indicates the level of grain at which input systems might be specialised. The more eccentric a stimulus domain, the more likely it is computed by a special-purpose mechanism.

Arguments for input systems being mandatory

These input systems are similar to reflexes. Our automatic comprehension of English sentences implies mandatoriness: when asked to concentrate on the acoustic and phonetic properties of the input, people can’t help but identify the words. The processes are automatic and obligatory. However, you can train yourself to hear speech as noise, or simply stop attending, e.g. by attending to something else such as your own thoughts, or by putting your fingers in your ears.

You can’t help seeing 3-D objects out of the 2-D array you actually see. Artists can learn to undo these perceptual constancies.

Input systems are fast

Input systems are fast, e.g. there’s only a 250-millisecond lag when shadowing auditory input (like singing along to a song you don’t know all the words to). Fast shadowers also understand what they repeat, so input systems must be very fast at sending information to the central cogniser, which then sends information to the output systems.

We recognise pictures in, say, a book in a fraction of a second. Participants were given brief descriptions of an object or event they might see, and were then shown a sequence of slides. Participants identified the described slide after only 167 milliseconds of exposure, and were correct 96% of the time. What you see is translated into something for the central cogniser very quickly.

Evidence for input systems being informationally encapsulated

Informational encapsulation differs from domain specificity: encapsulation concerns what information can be accessed during the use of the module, whereas domain specificity concerns the circumstances in which the module comes into use. A cognitive process is informationally encapsulated if it has access only to the information represented within the local structures that subserve it. E.g. if you press your eyeball with your finger, the scene appears to jump: it is the visual analyser’s lack of access to what your finger is doing that demonstrates the encapsulation of the visual-analysis/head-and-eye-movement system. This is the most difficult of the central properties to test because processing is so fast. V5 is an area of the brain specialised for motion detection. V5 doesn’t see the shape of the moving object; it’s encapsulated in that it can’t tell you anything apart from the fact that there’s movement.

Evidence against input systems being strictly modular       

If Fodor’s modularity hypothesis is true, then input systems are good candidates for scientific study, and central processes are not, because they’re isotropic. But recent MRI studies suggest that input systems are perhaps not strictly modular in a Fodorian sense. Magnetic resonance imaging has prompted some caveats: some highly modular parts of the brain do have connections with other parts, e.g. V5 processes motion and V4 objects, and there’s cross-talk between the modules. V1 (the primary visual cortex) performs essential basic processing on incoming visual information, and its action can be modified by reciprocal connections to its different layers from many other higher visual areas. Connections from V4 and V5 can modify how V1 works.

Minor criteria

1)     Limited central access to mental representations

2)     Shallow outputs are necessitated by speed and informational encapsulation, e.g. we might have Rosch’s basic-level categories as the primary output of a visual input. How much information can be generated how quickly is, though, an empirical question.

3)     Neural localisation

4)     Susceptibility to characteristic breakdown

5)     Development of abilities exhibits a species-universal characteristic pace and sequencing

These are minor criteria because of the youth of cognitive neuroscience. Now we have fMRI scans of, e.g. the visual system, to investigate these minor criteria.

Limited central access to the mental representations that input systems compute

Input mapping typically involves mediated mappings from transducer outputs onto percepts, which may be inaccessible to central processes, or accessible only at a price, e.g. attention. You have no choice about how you hear things. Interlevels of input representations aren’t consciously accessible: the central mind has no access to them. E.g. when we tell someone the time we can’t recall the numeral type: we must know what the numerals are to read the clock, but if asked we can’t say what the numerals looked like. We act on things without knowing what we’re acting on, e.g. crossing the road or driving. We don’t consciously process much of the information that we use unless we attend to it. Only quite high-level representations are stored in long-term memory; many intermediate-level representations are discarded, or retained only at a cost. We take in only the information we need.

Evidence for neural localisation of input systems

Specific neural architectures seem to be associated with, e.g. aphasias. The modularity hypothesis gains support from neuroscience in that the only cognitive systems that have been identified with particular pathways in the brain and with specific idiosyncratic structures are those most likely to be modular by other criteria, e.g. perceptual analysis systems, language, motor control.

Evidence for susceptibility to characteristic breakdown

Schizophrenia used to be considered a central-processing problem, but now it’s considered a problem of input modules. Memory and attention aren’t domain-specific, but you can still get breakdown in them.

The ontogeny of input systems exhibits a characteristic pace and sequencing

Ontogenetic sequencing of language acquisition and the early visual capacities of infants are compatible with the notion that much of the development of the input systems is endogenously determined. Babbling onset rates and times are not culture-specific. Infants have specific perceptual capacities with regard to language. Linguistic capacities seem dependent on maturation and robust to environmental variation: e.g. even if their parents cannot speak a full language, children will still learn a pidgin, and will then go on to develop a grammatically structured creole language. This picture is compatible with the notion that these mechanisms are instantiated in corresponding, specific, hard-wired neural structures, and with the suggestion of innate specification.

Central systems

Central processes are slow, optional, informationally porous, and general purpose, communicating freely amongst themselves and receiving input from and sending output to all other modular input and output systems. Higher-level processes have access to all information contained within the cognitive system when performing a given operation.

Fodor makes an analogy with inference in science. The intuition is as follows: the non-demonstrative fixation of belief in science is isotropic. Isotropy means drawing information from anywhere in order to solve a problem: we can look for information from anywhere in the mind to solve something else. Isotropy means we can use, e.g., knowledge of half-life in physics to understand dosage in pharmacokinetics.

Karmiloff-Smith

Karmiloff-Smith holds that modularity is a product of development. Her modules differ from Fodor’s in that they’re not innate: development is a process of gradual modularisation rather than of prespecified modules. But the inputs are innately selected, resulting in encapsulation over time. The modules come about through interaction with the environment, given these innate predispositions. Her position is thus neither purely constructivist nor purely nativist, but both.

 

Karmiloff-Smith distinguishes finer-grained modules within perception. She says there are microdomains, e.g. modules for the perception of particular kinds of objects.

How does this information get stored in the child’s mind?

Representational redescription theory

The representational redescription process accounts for the increasing accessibility of children’s representations to higher thought. The child may represent the environment, or the language of adults. Representations of information are made progressively more explicit. Representational redescription involves recoding information stored in one representational format into another; each redescription is a more condensed version of the previous one. The process is domain-general, but it operates in each specific domain at different moments, and is constrained by the contents and level of explicitness of the representations in each microdomain. Similar transformations occur across all types of knowledge:

 

  1. We have some detailed specifications and skeletal predispositions. Innate specifications are the result of an evolutionary process and can be either specific or non-specific. With a specific specification, the environment acts as a trigger for the organism to select one parameter or circuit over another. An innate predisposition is instead specified only as a bias or skeletal outline; here the environment is more than a trigger, as it influences the subsequent structure of the brain.
  2. The environment triggers an innate specification.
  3. The environment redescribes the representation epigenetically.

Development involves 3 recurrent phases:

1)     First, it’s like Fodor’s modules, i.e. data-driven.

2)     Then, there’s internally driven data: data the child believes to be true, i.e. a theory.

3)     The internal representations and external data are reconciled.

There are 4 levels at which knowledge is represented and re-represented:

Implicit

First there’s knowledge we have implicitly from biological predispositions. Infants have level I representations.

 

The knowledge is procedurally embedded. Encodings are sequentially specified. New representations are independently stored, e.g. you can’t use your knowledge of maths to help you with biology. There are no intra- or inter-domain representational links. This is similar to Fodor’s input modules.

 

Information embedded in level-I representations is not available to other operators in the cognitive system; the behaviour generated from level-I representations is relatively inflexible.

 

The representational redescription model posits a subsequent reiterative process of representational redescription involving levels E1, E2 and E3.

Explicit-1

E1 representations are a redescription of the procedurally encoded representations at level I. Children have the knowledge, but can’t access it consciously, i.e. talk about it. Only at levels beyond E1 are conscious access and verbal report possible.

 

E1 representations lose many of the details of the procedurally encoded information. When we redescribe information from level I to level E1 we save only the information we really need. (Compare synaptic pruning: children have many more connections in their brains than adults, and by the age of 2-3 the number of connections has been pruned back.)

 

The original level-I representations remain intact in the child’s mind and can continue to be called on for particular cognitive goals that require speed and automaticity.

Explicit-2

E2 and E3 are more restrictive levels, e.g. going from recognising flowers to recognising the genus of a flower. When you’re asked to justify your decisions, you have to call on E2 and E3 representations.

 

The representational redescription model claims that E2 representations are available to conscious access, but they are in a similar representational code to the E1 representations.

Explicit-3

When we get to E3, knowledge is recoded into a cross-system code.

 

It’s possible that some knowledge learned in linguistic form is stored immediately at level E3.

 

There’s a subsequent reiterative process across the levels. This cycles round and round, perhaps initiated by one particular problem and then brought into play again by another. Representational redescription is cross-domain, but it’s constrained by the limits of what’s possible at level I. The environment is important for the learning outcomes, but not for the process children go through. These processes are ongoing for all the microdomains of knowledge.
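As a toy illustration of these claims (the condensation rule and the block-balancing example are assumptions of mine; the model itself specifies no algorithm): redescription condenses, level I survives intact for fast automatic use, and only E2 and above support verbal report:

```python
# A minimal sketch of the I -> E1 -> E2 -> E3 progression described above.
LEVELS = ["I", "E1", "E2", "E3"]

class MicrodomainKnowledge:
    def __init__(self, procedure_details):
        # Level I: procedurally embedded, sequentially specified, not shareable.
        self.representations = {"I": list(procedure_details)}

    def redescribe(self):
        """Recode the highest existing level into the next, more explicit format."""
        current = LEVELS[len(self.representations) - 1]
        if current == "E3":
            return  # already at the cross-system code
        nxt = LEVELS[LEVELS.index(current) + 1]
        # Each redescription is a condensed copy: detail is dropped, but the
        # earlier level is kept rather than overwritten.
        self.representations[nxt] = self.representations[current][::2]

    def accessible_to_verbal_report(self):
        # Only levels beyond E1 support conscious access and verbal report.
        return [lvl for lvl in self.representations if lvl in ("E2", "E3")]

balance = MicrodomainKnowledge(["grip", "place", "nudge", "release", "watch"])
balance.redescribe()                        # I  -> E1
balance.redescribe()                        # E1 -> E2
print(balance.representations)              # level I is still intact
print(balance.accessible_to_verbal_report())
```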

U-shaped curves of development

Karmiloff-Smith says processes are different from behaviour. Behavioural change is often U-shaped; representational change is not. The behaviour suggests unlearning, when in fact children are re-representing knowledge, developing theories, becoming lay theorists. At the low point of the U, they’ve developed a theory and they’re stuck in it, i.e. ignoring environmental data. On the representational redescription account, knowledge becomes more theoretical over time, and behaviour is U-shaped because we’re stubborn theorists; eventually the theories come into line with the physical properties of the world.

We’ll look at the child as a physicist as an example of theory building. There’s some object persistence, says Karmiloff-Smith, which must be there from the outset.

What type of representation does a young infant have?

Do infants have E1 representations or higher? They don’t recognise objects as being of a particular kind. They don’t have theories. Despite the coherence of their knowledge, it’s not a theory, because inputs are directly mapped onto outputs rather than mediated by higher processes. Early object recognition could be supported by level-I representations.

 

Information processing in the brain can be considered a cascaded relay system, in which each of a number of highly organised stations acts as a separate relay that both does its own processing and provides the input for the next stage. The importance of this for developmental psychologists is that these mapping structures are not present at birth. A great deal of recent research has shown that while the potential for these maps is innate, the details of the maps themselves are created by experience. A famous experiment with kittens raised in deprived sensory conditions showed that if these areas of the brain are not used for the functions for which they were intended, other functions will take over those areas, and the original functionality will be lost. The fundamental maps between such systems as the eye and the primary visual cortex are created in the first days and weeks of life. Such maps are highly individual, varying from person to person. They can also be modified by changes in the environment: for example, if a monkey loses a finger, the maps are modified to reflect that fact. The very simplest visual processing abilities, such as the ability to see a straight line, are not present at birth. The brain has an innate capacity to create these abilities, but they need experience to become structured. If the biologists are right, the first weeks of a child’s life are dedicated to structuring these simple components of the perceptual-motor programs. The great speed with which these connections are created has led many authors to conclude that they must be innate.

 

But children can come to theorise about the world as a result of representational redescription of knowledge they already have from interacting with the environment. Theory building need not be derived directly from linguistic encoding, as there are many examples of theory-like knowledge even though the young child cannot yet encode it linguistically. Having said that, most infants begin to comprehend single words at the end of the first year of life, and object individuation has been demonstrated in 9-month-old infants, so language may influence cognition at a very early age. It takes time for children to access explicit knowledge, and when they do, early theories often bear a resemblance to the constraints implicit in earlier behaviour. They can hold a wrong representation that they test; hence they’re like scientists. Another scientist-like behaviour is sticking to a theory despite contradictory evidence.

How is infant vision different from adult vision?

 

Adult testing: ask, “Can you see it? How well can you see it?”

Infant testing: Forced choice preferential looking, habituation.

1) Colour vision:  both infants and adults can see colour, but adults can discriminate a wider range of colours more easily; infants discriminate contrastive, bright colours more easily

2) Pattern:  infants see wide stripes well but cannot see narrow stripes as well as adults, since their photoreceptors are not completely developed; infants’ vision is blurry when objects are far away

3) Motion:  infants perform similarly to adults; very young infants are good at motion perception

4) Face recognition:  infants are very good at face recognition; it starts with outer, external features like the hairline (1 month), then internal features like the eyes and eyebrows; babies like contrast and contours

-        2 months:  visual recognition of mom

-        3 months: can discriminate mom vs. stranger

-        5 months: respond to entire facial configuration

U-shaped development of the contribution of visual flow and body sense information to spatial orientation

Spatial orientation to a target was measured by the direction and extent of eye movements in anticipation of the target’s appearance. Subjects were tested in a rotatable cylindrical surround, which allowed visual and vestibular information to be manipulated separately and in conflicting combinations. The results revealed a U-shaped developmental pattern: 6-month-olds, 12-month-olds, and young adults responded on the basis of visual flow information, whereas 9-month-olds responded predominantly in terms of vestibular information. This may reflect an increased dependence on vestibular information when locomotor experience first contributes to judgments of spatial orientation.

 

Innate specification: object persistence + U-shaped development of motion required to perceive objects


Infants perceive objects by analysing 3-D surface arrangements and by following continuous motion displays. Infants initially need motion; by 2 months of age they don’t need motion to perceive an object, but by 4 months they need motion again. 4-month-old infants were shown an image of a rod moving back and forth but partly hidden by a rectangle. In one test condition, the rectangle is removed to reveal a complete rod; in the other, one sees 2 rod fragments in coordinated motion (with a gap where the rectangle used to be). The first condition excited very little attention, but the second was highly attended to, suggesting that the discovery that there was no unified object was quite surprising.

 


5-6-month-old infants were shown a "drawbridge" apparatus. In the "possible" condition, the child sees the object, and the drawbridge then occludes it and stops at the point where it would collide with the object. In the "impossible" condition, the bridge rotates through 180 degrees as if the object behind it weren’t there. There was much more attention to the second condition.

 

All of this suggests that very young infants already perceive a world of objects, and that learning may have very little to do with it. Some of these experiments have been replicated with 1- or 2-day-old infants. The processes by which infants perceive objects operate before those for recognising and categorising objects; the ability to recognise an object develops late compared with the ability to follow objects across time. This implies that object persistence must be there from the outset.

Gravity and moments: U-shaped development

Which object will fall? (Figure omitted.)

7-9-month-olds look significantly longer at the display in which an unsupported object fails to fall, suggesting surprise. They have some inbuilt knowledge about the laws of gravity.

 

They weren’t surprised by the contrasting display (figure omitted).

They only understand things that are symmetrical. They prefer symmetrical faces to non-symmetrical ones, and symmetrical representations of faces to non-symmetrical ones. Between the ages of 4 and 5, children can balance blocks with an uneven distribution of weight on each other; at 4-5 years they solve the problem by trial and error. Then they can’t do it again until they are 9 years old: between 5 and 9 they think the centre of gravity is always at the geometrical centre. The information becomes explicit. This is a U-shaped curve of development. By 9 years they solve it using symmetry and balance; the representation has been redescribed. They don’t need knowledge from blocks to learn this, because there’s lots of experience from the environment.

Figure-ground organisation

Adults perceive objects as separate from the background and easily see the object’s bounds: the border between an object and the background is perceived as bounding the object, not the background.

 

Infants’ perception of figure-ground relationships is studied by observing their patterns of reaching for and looking at displays of surfaces that do not pass through one another or change path as they move. Reaching requires that the object is perceived as distinct; it starts at 4½ months of age, and infants reach for object boundaries, not the inside. This is evidence of object-background discrimination. In one study, 6-month-olds were presented with a large and a small ring; they reached for each object by directing their hands to its borders. Their representations of object boundaries seemed to persist in the absence of visual information.

 

Infants use the relative motion patterns of surfaces and edges to perceive object boundaries. They see boundaries via motion: if something moves with the edge, it’s perceived as bounded; if the inside moves differently from the edge, they don’t perceive it as bounded.

 

Infants perceive figure-ground relations by analysing the 3-D spatial arrangements (and motions) of surfaces. Infants were habituated to 2 objects; either both objects or just one was then shown in a new location. If the objects are perceived as separate, infants should find the two objects displaced together more novel; if they’re perceived as a single object, infants should find one object moved separately more novel. If the objects are separated visibly or in depth, they are perceived as separate units. Note that 4-month-old infants can segregate side-by-side, spatially contiguous objects into two separate units, but it is not until after 8 months of age that infants regard stationary, adjacent objects stacked one on top of the other as composed of two separate pieces. Until then, the 2 objects are perceived as a single unit, regardless of colour, texture or edge alignment. Even older children say, “It’s like a lamp” instead of “It’s a triangle on top of a rectangle”, evidencing gestalt (whole-object) theories.

E.g. this surprises an infant (figure omitted).

Infants don’t group things by colour or texture: they don’t perceive object boundaries by organising visual scenes into units that are homogeneous in colour and texture, nor do they make object boundaries maximally smooth and regular in shape; infants perceive complex objects as single units just as readily as simple objects. Their innately specified principles must lead to these theories: we innately see objects, and from seeing these objects we develop knowledge.

Animacy

Consider the microdomain of knowledge concerning animacy: autonomy, purposefulness and reactivity. An agent is something (like an animal or a human) that can move under its own power. Infants are sensitive to texture, context and motion in determining agency. Natural kinds tend to have complex surface gradients. The ability to classify things into animate and inanimate categories develops very early, and so is likely innate. Infants become upset and surprised when a face stops moving, but not when a ball stops moving. An example of an externally caused movement: A stops and B immediately begins moving, i.e. a collision. An example of an autonomous movement: A chasing B round the screen. Six-month-old infants were shown a sequence of presentations consisting of externally caused movements with one interposed autonomous movement, or the inverse; in either case the infants showed a startle response to the interposed presentation. The perceptual system is attuned to goal-directed behaviour.
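To see how little information the caused/autonomous cut requires, here is a toy heuristic (entirely illustrative; the studies measured looking behaviour and report no such algorithm):

```python
# Classify a motion-onset event as externally caused vs. autonomous by asking
# whether the object started moving just after something else touched it.
def classify_motion(onset_time, contact_times, tolerance=0.1):
    """Externally caused if onset follows a contact within `tolerance` seconds."""
    for t in contact_times:
        if 0 <= onset_time - t <= tolerance:
            return "externally caused"   # e.g. a collision launches B
    return "autonomous"                  # e.g. B starts chasing on its own

print(classify_motion(onset_time=2.05, contact_times=[2.0]))   # collision
print(classify_motion(onset_time=5.0,  contact_times=[]))      # self-propelled
```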

 

Do even six-month-old infants construct causal models of their world and distinguish between external and internal causes? Either there are at least two levels of processing at work here (one of basic perceptual processing, and another of interpretation and explanation involving more conceptual thought), or perceptual mechanisms are redescribed to become conceptual thought. The perceptual mechanisms may provide the basic ability to distinguish between caused and uncaused motions, while the conceptual mechanisms, which develop later, have the task of fitting these perceptions into categories and constructing explanations for them.

 

Piaget’s studies of children’s thinking revealed a phenomenon he called childhood animism: the tendency of children to attribute properties normally associated with living things, namely aliveness and consciousness, to the non-living.

Stage 0 (No concept): random judgments
Stage 1 (Activity): anything active is alive
Stage 2 (Movement): only things that move are alive
Stage 3 (Autonomous movement; reached at age 7 or 8): only things that move by themselves are alive
Stage 4 (Adult concept): only animals (and plants) are alive

This may appear to conflict with the infant studies mentioned above, but the Piagetian studies operate purely in the conceptual realm, whereas the other researchers are studying the perceptual ability to detect distinctions. Children may have an innate ability to distinguish types of observed motion, but lack the ability to construct coherent explanations for such motion. The latter ability must be learned, and in some sense the Piagetian developmental sequence is a case of the conceptual mind lagging behind and eventually catching up to the distinctions generated innately by the perceptual system.

 

Considering the development of a child’s concept of animacy: young children think monkeys and dogs are alive because they are visually similar to people, while trees and shrubs are not alive because they are not visually similar to people. They use similarity-based reasoning. Older children (and adults) have a richer knowledge base, so they use category-based reasoning, a more exact and powerful mental tool. By the time children reach the age of 10, they have developed a deeper conceptual understanding of animacy, i.e. that to be alive you must be some kind of biological entity. This conceptual understanding is rooted in years of experience and hence a richer knowledge base. What is most important to note here is that a definition-based understanding of a concept results from the enrichment of the knowledge base in that domain (e.g. animacy) and that domain only. The transition from similarity- to definition-based understanding therefore occurs at different times for different concepts, and even adults use similarity-based reasoning in contexts where they have a poor knowledge base.

Conceptual development

Basic Level Categories

Children learn basic-level categories first; adults define superordinates in terms of their basic-category members. In a Rosch experiment, children were presented with 3 objects: either two from the same basic category plus one unrelated object, or one superordinate match, one basic-level object and one unrelated object (e.g. two airplanes and a toy dog, or an airplane, a car, and a toy dog). 3-year-olds put the basic-level pair together 99% of the time, but chose the superordinate and basic objects as going together only 55% of the time.

The role of perceptual similarity in the formation of infant categories

The debate is over whether the development of concepts has a perceptual or a conceptual base.

 

Infants from 6-13 months were familiarised with either a set of similar black-and-white land animals or a variable set of coloured land animals in an object-examining task. Familiarisation studies are stronger than habituation studies because many different pictures are shown. Infants who were familiarised with the black-and-white set of animals dishabituated to all novel items that were not similar in colouring. However, infants who were familiarised with the variable set of animals dishabituated to, e.g., a truck, but not to novel animal exemplars.
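Looking-time results of this kind are typically summarised with a novelty-preference score and a recovery criterion; here is a minimal sketch (the 1.5 criterion and the example numbers are illustrative assumptions, not values from the study):

```python
def novelty_preference(look_novel, look_familiar):
    """Proportion of test looking directed at the novel item."""
    return look_novel / (look_novel + look_familiar)

def dishabituated(test_look, end_familiarisation_look, criterion=1.5):
    """Recovery of attention: looking rebounds past criterion x baseline."""
    return test_look > criterion * end_familiarisation_look

# Familiarised to black-and-white animals, a coloured item draws renewed looking.
print(dishabituated(test_look=9.0, end_familiarisation_look=4.0))  # True
print(novelty_preference(look_novel=9.0, look_familiar=5.0))       # ~0.64
```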

 

In another study, half the 15-month-old infants saw a series of dogs followed by a dog and a bird, and half saw a series of cats followed by a cat and a bird. If you familiarise infants to pictures of dogs, they do not dishabituate to a picture of a cat. If you control the pictures of the dogs so they show as little variability as the cats do, then infants can form a conceptual representation of dogs that excludes cats. Cats and dogs are pretty similar, so how could babies be doing this? It could be the overall gestalt, or certain features, or maybe the face. So three conditions were presented: the full animal, just the head, or just the body. Infants could discriminate between cats and dogs based on the full animal or the head alone, but not on the body alone. To answer the headless-body critics, a final set switched the heads, and infants preferred the animal with the new head.

 

This suggests that perceptual similarity plays an important role in determining the level of exclusivity of categories formed by infants.

Role of category level (superordinate versus basic) and category type (natural kind versus artefact)

 

According to the sequential-touching task and the familiarisation-novelty-preference procedure, 3-4-month-olds can distinguish between basic-level categories, e.g. cats/dogs, horses/zebras, which are perceptually different. Objects in a basic-level category are perceptually similar, or functionally similar in the case of artefacts. Sequential touching means giving the infant a number of stimuli and looking at the sequence in which they touch them; if they touch a number of stimuli from the same category in succession, we infer they have categories. Infants were given, e.g., animal vs. vehicle and dog vs. fish contrasts. 4-month-olds can tell dogs from fish and cars from trucks: they sequentially touched objects at the basic level, but the sequential touching wasn’t so clear at the superordinate level. Basic-level descriptions are based on similarity. But by 9 months they no longer distinguish between basic-level categories. In later years they’ll learn superordinate and subordinate categories.
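One way such touch sequences might be scored is by run length, i.e. how many same-category items are touched in a row; a toy sketch (the summary statistic and the example sequence are my assumptions, not the study’s analysis):

```python
from itertools import groupby

def mean_run_length(touch_sequence):
    """Average length of consecutive same-category touches."""
    runs = [len(list(g)) for _, g in groupby(touch_sequence)]
    return sum(runs) / len(runs)

touches = ["dog", "dog", "dog", "fish", "dog", "fish", "fish"]
print(mean_run_length(touches))   # well above 1 suggests category-based touching
```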

 

Sequential touching shows that 7-9-month-olds distinguish planes/birds, which are perceptually similar, but treat dogs/fish the same even though they are perceptually different. 9-month-old infants distinguish between superordinate categories such as animals vs. vehicles, but not basic-level categories. The sequential-touching task given was animals vs. vehicles and birds vs. aeroplanes, because birds and planes look similar. The young toddlers put the planes with the cars and the birds with the animals. Perceptual similarity is no longer the basis of categorisation; the superordinate category takes over. By 9 months they can disregard perceptual properties and have formed a knowledge base. Concepts involve meaningful higher-level representations, so similarity can’t be the decider for categorisation: by 9 months, they’ll put planes and birds in different categories.

 

In an experiment with 14-month-olds, infant imitation of adult actions was used to see whether imitation depends on perceptual similarity or on conceptual domain. Concepts are meaningful representations which support inference, not solely mediated by perceptual information. Children were given figurines to play with. They don’t mix up superordinate categories: they might put a bear or a bird to bed, but not a plane.

1. The adult gives the child (for example) a dog, a cup, and a cat/bike to see what it does spontaneously.

2. The adult models the action three times (the dog drinks from the cup), making a sipping sound.

3. The adult gives the child the dog and a cat/plane, then gives the cup and makes the sipping sound to see if the child imitates; the adult also gives the child the dog and a bird/bus, then makes the sipping sound.

MAIN RESULT: children imitate the action for the appropriate general category regardless of how similar the generalisation items are to the modelled item (dog-cat = dog-fish), and do NOT generalise to the inanimate items.

 

Later, young children know that changing the outward appearance of an animal does not change its expected biological properties: painting a piglet to look like a cow will not cause it to grow up into something that says "Moo". They also know that changing the animal’s "insides" does change what one should expect of its biological properties. Very young children understand that artefacts (even cars) are different from animals and other objects, and they fail to use essentialist reasoning when it comes to artefacts: changing a teapot to look like a birdhouse makes it into a birdhouse. Children also understand that artefacts satisfy basic goals; they view the artefact in terms of what it is for. So regardless of appearance, they classify something as a chair as long as it satisfies the purpose of a chair: to provide a surface for sitting.

 

This suggests that children make global distinctions in their early conceptual categories (3-4-month-olds can distinguish natural kinds vs. artefacts) and slowly differentiate them over time. Early in infancy, conceptual representations are based on animacy vs. inanimacy, e.g. a dog vs. a chair, together with perceptual information. This is combined with knowledge of the objects in a usefulness-oriented way: perception guides the detection of cognitively significant information (Gibsonian affordances). However, objections to this reading of the data include the fact that we don’t know whether there is more or less variability at the superordinate or the basic level; perceptual similarity could be the basis of superordinate category discrimination. Infants attend to basic-level categories when given dissimilar stimuli, and attend to superordinate categories if familiarised to similar stimuli.

 

There may be a double dissociation between perceptual categories (tapped by the preference procedure) and global conceptual categories (tapped by the object-exploration procedures), such that the formation of concepts occurs independently of, and in parallel with, perceptual knowledge; there could be a different system for extracting meaningful information. The category distinctions these 3-4-month-olds are making are the same as those adults make when given a short looking time; we judge based on perception. With a short looking time we look at the head; given more time, the body; given more time still, the attached propositions. The Theory Theory says this too. Perception is fast and informationally encapsulated, while concepts are slow and non-mandatory. But consider cognitive economy: why have two different representations for each object category? And are object examining and familiarisation-novelty preference really that different?

 

Karmiloff-Smith would say, “She’s redescribing her small knowledge base to a more concrete theory”.

 

Let’s consider form and motion processing in autism.

 

This is an investigation of modular processes: how does one module affect behaviour? In this study the aim is to identify the processes underlying a behaviour, show that they are modular, and show that one module’s effect doesn’t influence the other’s. Vision is quite modular. Individuals with autism have perceptuo-motor deficits as well as communication and imaginative deficits. These deficits are normally interpreted in terms of cognitive processing, but the abnormalities may be at a lower, perceptual level. Individuals with autism find it difficult to walk up and down stairs. Research looking at body sway on a moving platform indicates that individuals with autism have movement-perception deficits. This experiment tries not to use cognitive tasks.

 

Beyond the visual cortex there’s an occipitotemporal (semantic, ventral) stream for object and place recognition and an occipitoparietal (schema, dorsal) pathway for motion recognition. Evidence for this comes from brain scans during tasks. This part of the visual system is very modular. The psychophysical measurements used to test these 2 extrastriate streams were form and motion tasks: children with autism and age-matched mainstream children were tested on form- and motion-processing tasks. Both tasks required the integration of local visual signals to extract a global pattern.

Motion coherence thresholds were determined

E.g. some dots moved horizontally and reversed direction every third of a second, while noise dots were repositioned randomly every sixtieth of a second. Participants had to locate the target region where the signal was in antiphase with the surrounding region. The motion coherence threshold is the ratio of signal to noise dots at which the target can just be located.


 

Participants have to extract a global pattern rather than track any particular dot, because no individual dot moves consistently. The task doesn’t get cognitively harder when you reduce the number of dots moving coherently. A 2-up, 1-down method was used to determine thresholds: if participants get it right it gets harder; if they get it wrong it gets easier (see the sketch below).
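A minimal simulation of that 2-up, 1-down rule (a sketch: the step size, starting coherence, trial count and the simulated observer’s accuracy rule are all assumptions of mine, not the study’s):

```python
import random

def staircase(respond, start=0.8, step=0.05, trials=60):
    """2-up-1-down: two successive correct responses -> harder (less coherence);
    one error -> easier. Threshold is estimated from the last few reversals."""
    coherence, correct_run, reversals = start, 0, []
    direction = -1  # the first change is assumed to make the task harder
    for _ in range(trials):
        if respond(coherence):        # simulated observer got this trial right
            correct_run += 1
            if correct_run < 2:
                continue              # wait for the second correct response
            new_dir, correct_run = -1, 0
        else:
            new_dir, correct_run = +1, 0
        if new_dir != direction:      # a reversal of direction
            reversals.append(coherence)
            direction = new_dir
        coherence = min(1.0, max(0.01, coherence + new_dir * step))
    tail = reversals[-6:] or [coherence]
    return sum(tail) / len(tail)

# Toy observer whose accuracy grows with coherence (purely an assumption).
observer = lambda c: random.random() < 0.5 + 0.5 * min(1.0, c / 0.4)
print(f"estimated motion coherence threshold: {staircase(observer):.2f}")
```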

Form coherence

Lines were jittered randomly. Within a circular region, a proportion of lines were tangential to concentric circles. Participants had to locate this target. Again a 2-up, 1-down method was used.

 


Children with autism needed more dots moving coherently to recognise motion, but their form coherence was fine: only one part of their visual system seems to have a deficit. They had higher thresholds for motion coherence, indicating a dorsal-stream deficit. The dorsal pathway is one that develops over time (figure omitted).

 

 


Autism thus has something in common with other developmental disorders (children with Williams syndrome show the same deficit) and with early stages of normal development, which also show a deficit in dorsal relative to ventral function. This implies a greater vulnerability of the dorsal system compared with the ventral.

 

With imaging techniques we might be able to watch this development over time.

 


Individuals with autism are good at spotting a whole object embedded among other shapes: they can see a house embedded in a lot of other shapes (figure omitted). Normal children and adults find this hard; children with autism find it easy.

 

Dyspraxic (clumsy) children with coordination problems show the opposite pattern: they are good at motion processing and bad at form processing. Having said that, maybe dyspraxia isn’t a cognitive developmental disorder.

Conclusion

So we do have innate capabilities that develop over time. Fodor’s strict modularity doesn’t explain development: it explains what, but not how.

 
