Visual Cognition II: Object Perception

Page 1: Visual Cognition II Object Perception

Visual Cognition II

Object Perception

Page 2: Visual Cognition II Object Perception

Theories of Object Recognition

• Template matching models
• Feature matching models
• Recognition-by-components
• Configural models

Page 3: Visual Cognition II Object Perception

Template matching

Detect patterns by matching the visual input against a set of templates stored in memory, checking whether any template matches.

[Figure: a test instance compared against a stored “J” template and “T” template]
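A minimal sketch of the idea in code (the tiny 3×3 binary “templates” below are hypothetical stand-ins for stored letter images): compute a normalized correlation between the input and each stored template, and report whichever template matches best.

```python
import numpy as np

def match_score(image, template):
    """Normalized correlation between two same-sized pixel arrays."""
    image = image - image.mean()
    template = template - template.mean()
    denom = np.linalg.norm(image) * np.linalg.norm(template)
    return float((image * template).sum() / denom) if denom else 0.0

def recognize(image, templates):
    """Return the label of the best-matching stored template."""
    return max(templates, key=lambda label: match_score(image, templates[label]))

# Hypothetical 3x3 binary "templates" for T and J.
templates = {
    "T": np.array([[1, 1, 1],
                   [0, 1, 0],
                   [0, 1, 0]]),
    "J": np.array([[0, 0, 1],
                   [0, 0, 1],
                   [1, 1, 1]]),
}
test_instance = np.array([[1, 1, 1],
                          [0, 1, 0],
                          [0, 1, 0]])
print(recognize(test_instance, templates))  # -> "T"
```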

Page 4: Visual Cognition II Object Perception

Problem: what if the object differs slightly from the template, e.g., because it is rotated or scaled differently?

Solution: apply a set of transformations (translation, rotation, scaling) to bring the object into best alignment with each template before matching.

[Figure: a rotated test instance brought into alignment (via rotation) before comparison with the “J” and “T” templates]
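A sketch of the alignment step, reusing match_score from the sketch above: try a small set of candidate rotations and keep the best-aligned score. Translation and scaling searches would follow the same pattern and are omitted here.

```python
from scipy.ndimage import rotate

def aligned_score(image, template, angles=range(-45, 50, 5)):
    """Best match score over a set of candidate rotations of the input."""
    best = -1.0
    for angle in angles:
        # Rotate the input, keeping the original array shape so the
        # pixel-wise comparison with the template stays well defined.
        candidate = rotate(image, angle, reshape=False, order=0)
        best = max(best, match_score(candidate, template))
    return best
```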

Page 5: Visual Cognition II Object Perception

Template matching works well in constrained environments

Page 6: Visual Cognition II Object Perception

Figure 2-15 (p. 58): Examples of the letter M.

Problem: template matching is not powerful enough for general object recognition

Page 7: Visual Cognition II Object Perception

Feature Theories

• Detect objects by the presence of features
• Each object is broken down into features
• E.g.:

[Figure: the letter A decomposed into component features, such as two oblique lines and a horizontal bar]
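A sketch of feature matching under the same assumptions (the feature names are hypothetical): store each letter as a set of features and pick the stored letter whose set best overlaps the detected features.

```python
# Hypothetical feature inventories for a few letters.
FEATURES = {
    "A": {"oblique_left", "oblique_right", "horizontal_bar"},
    "T": {"vertical_bar", "horizontal_bar"},
    "L": {"vertical_bar", "horizontal_bar"},
}

def recognize_by_features(detected):
    """Pick the letter whose stored feature set has the highest
    Jaccard overlap with the set of detected features."""
    def jaccard(stored):
        return len(detected & stored) / len(detected | stored)
    return max(FEATURES, key=lambda letter: jaccard(FEATURES[letter]))

print(recognize_by_features({"oblique_left", "oblique_right", "horizontal_bar"}))  # "A"
```

Note that “T” and “L” are given identical feature sets here, which previews the problem on the next slide: features alone cannot separate objects that differ only in how the features are arranged.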

Page 8: Visual Cognition II Object Perception

Problem

• Many objects consist of the same collection of features

• We also need to know how the features relate to each other → structural theories

• One such theory is recognition by components

[Figure: different objects, similar sets of features]

Page 9: Visual Cognition II Object Perception

Recognition by Components

• Biederman (1987): Complex objects are made up of arrangements of basic, component parts: geons.

• “Alphabet” of 24 geons

• Recognition involves recognizing object elements (geons) and their configuration
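A sketch of what such a structural representation might look like, using hypothetical part and relation labels in the spirit of Biederman’s mug/pail contrast: an object is a set of geons plus the relations among them, and recognition must match both.

```python
# Objects as geons plus relations among them (hypothetical labels).
MUG = {"geons": {"cylinder", "curved_handle"},
       "relations": {("curved_handle", "attached_to_side_of", "cylinder")}}
PAIL = {"geons": {"cylinder", "curved_handle"},
        "relations": {("curved_handle", "attached_to_top_of", "cylinder")}}

def same_object(a, b):
    """Matching requires both the same parts and the same configuration."""
    return a["geons"] == b["geons"] and a["relations"] == b["relations"]

print(MUG["geons"] == PAIL["geons"])  # True: identical component geons
print(same_object(MUG, PAIL))         # False: different arrangement
```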

Page 10: Visual Cognition II Object Perception
Page 11: Visual Cognition II Object Perception

Why these geons?

• Choice of shape vocabulary seems a bit arbitrary
• However, the choice of geons was based on non-accidental properties: the same geon can be recognized across a variety of different perspectives,

except for a few “accidental” views:

Page 12: Visual Cognition II Object Perception

Viewpoint Invariance

Page 13: Visual Cognition II Object Perception

• Viewpoint invariance is possible except for a few accidental viewpoints, where geons cannot be uniquely identified

Page 14: Visual Cognition II Object Perception

Prediction

• Recognition is easier when geons can be recovered

• Disrupting vertices disrupts geon processing more than just deleting parts of lines

[Figure: example objects shown intact, with line segments deleted, and with vertices deleted]

Page 15: Visual Cognition II Object Perception

Evidence from priming experiments

Page 16: Visual Cognition II Object Perception

Problem

• The theory does not say how color, texture, and small details are processed, yet these are often important for telling similar objects apart. E.g.:

Page 17: Visual Cognition II Object Perception

Configural models of recognition

• Individual instances are not stored; what is stored is a prototype, a representative element of a category

• Recognition is based on the “distance” between the perceived item and the prototype

[Figure: “face space” with a stored prototype; a nearby item matches, a distant item does not]
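A minimal numeric sketch with made-up coordinates: each stored prototype is a point in a feature space, and a perceived item “matches” only if its distance to the nearest prototype falls below some threshold.

```python
import numpy as np

# Hypothetical prototypes as points in a 3-dimensional "face space".
PROTOTYPES = {"face_A": np.array([0.2, 0.8, 0.5]),
              "face_B": np.array([0.9, 0.1, 0.4])}

def recognize_prototype(item, threshold=0.3):
    """Return the nearest prototype's label, or None if nothing is close."""
    label, proto = min(PROTOTYPES.items(),
                       key=lambda kv: np.linalg.norm(item - kv[1]))
    if np.linalg.norm(item - proto) <= threshold:
        return label
    return None

print(recognize_prototype(np.array([0.25, 0.75, 0.5])))  # "face_A": match
print(recognize_prototype(np.array([0.60, 0.50, 0.9])))  # None: no match
```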

Page 18: Visual Cognition II Object Perception

Configural Models

Page 19: Visual Cognition II Object Perception

Configural effects in face processing

Do these faces have anything in common?

How about these ones?

By disrupting holistic processing, it becomes easier to process the individual parts

Page 20: Visual Cognition II Object Perception

Face superiority effect

Farah (1994)

Page 21: Visual Cognition II Object Perception

Results

Page 22: Visual Cognition II Object Perception

Face superiority effect

• Parts of faces are not processed independently: the context of other face parts (e.g., the mouth) influences recognition of a particular part (e.g., the nose)

• Face superiority effect disappears when face is inverted

Page 23: Visual Cognition II Object Perception

Context and Object Recognition

Page 24: Visual Cognition II Object Perception

What do you see?

Page 25: Visual Cognition II Object Perception

Top-down vs. bottom-up

[Diagram: Visual Input → Low-Level Vision → High-Level Vision, with Knowledge above. Bottom-up processing: stimulus driven.]

Page 26: Visual Cognition II Object Perception

Top-down vs. bottom-up

[Diagram: the same hierarchy, with arrows from Knowledge back down to High-Level and Low-Level Vision. Top-down processing: knowledge driven (context effects).]

Page 27: Visual Cognition II Object Perception

Top-down effects: knowledge influences perception

Page 28: Visual Cognition II Object Perception

Slide from Rob Goldstone

Page 29: Visual Cognition II Object Perception

A problem for many object recognition theories: how do we model the role of context? Context can often help in the identification of an object.

Later identification of objects is more accurate when the object is embedded in a coherent context.

Page 30: Visual Cognition II Object Perception

Context can alter the interpretation of an object

Page 31: Visual Cognition II Object Perception

Context Effects in Letter Perception

The word superiority effect: discriminating between letters is easier in the context of a word than as letters alone or in the context of a nonword string.

DEMO: http://psiexp.ss.uci.edu/research/teachingP140C/demos/demo_wordsuperiorityeffect.ppt (Reicher, 1969)

Page 32: Visual Cognition II Object Perception

• Word superiority effect suggests that information at the word level might affect interpretation at the letter level

• Interactive activation model: neural network model for how different information processing levels interact

• Levels interact
– bottom-up: how letters combine to form words
– top-down: how words affect the detectability of letters

Page 33: Visual Cognition II Object Perception

The Interactive Activation Model

• Three levels: feature, letter, and word level

• Nodes represent features, letters and words; each has an activation level

• Connections between nodes are excitatory or inhibitory

• Activation flows from feature to letter to word level and back to letter level

(McClelland & Rumelhart, 1981)

Page 34: Visual Cognition II Object Perception

The Interactive Activation Model

• PDP: parallel distributed processing

• Bottom-up: feature to word level

• Top-down: word back to letter level

• The model predicts the word superiority effect because of top-down processing

(McClelland & Rumelhart, 1981)

Page 35: Visual Cognition II Object Perception

Predictions of the IA model – stimulus is “WORK”

• At word level, evidence for “WORK” accumulates over time
• Small initial increase for “WORD”

[Figure: word-level activation curves over time for WORK, WORD, and WEAR]

Page 36: Visual Cognition II Object Perception

Predictions of the IA model – stimulus is “WORK”

• At letter level, evidence for “K” accumulates over time – boost from word level

• “D” is never activated because of inhibitory influence from feature level

[Figure: letter-level activation curves over time for K, R, and D]
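The qualitative behavior on these two slides can be reproduced with a very small simulation. The sketch below is an illustrative toy, not the published McClelland & Rumelhart model: a three-word lexicon, one letter node per position, a collapsed feature level (a constant bottom-up drive to the letters actually shown), letter-to-word excitation and inhibition, word–word competition, and word-to-letter top-down feedback. All parameter values are assumptions chosen to make the effect visible.

```python
import numpy as np

WORDS = ["WORK", "WORD", "WEAR"]        # toy lexicon
ALPHABET = sorted(set("".join(WORDS)))  # only the letters the lexicon uses
POS = 4                                 # four letter positions

# Illustrative parameters (assumptions, not the published values).
EXC_LW, INH_LW = 0.07, 0.04   # letter -> word excitation / inhibition
EXC_WL = 0.30                 # word -> letter top-down excitation
INH_WW = 0.21                 # competition among word nodes
DECAY, REST = 0.07, 0.0
MIN_A, MAX_A = -0.2, 1.0

def run(stimulus, steps=40):
    """Simulate letter- and word-level activations for a 4-letter stimulus."""
    letters = np.full((POS, len(ALPHABET)), REST)
    words = np.full(len(WORDS), REST)
    for _ in range(steps):
        pos_l = np.maximum(letters, 0.0)  # only active nodes send signal
        pos_w = np.maximum(words, 0.0)

        # Collapsed feature level: constant bottom-up drive to shown letters.
        l_net = np.zeros_like(letters)
        for p, ch in enumerate(stimulus):
            l_net[p, ALPHABET.index(ch)] += 0.2

        # Letter -> word: matching letters excite, mismatching ones inhibit.
        w_net = np.zeros(len(WORDS))
        for w, word in enumerate(WORDS):
            for p in range(POS):
                m = ALPHABET.index(word[p])
                w_net[w] += EXC_LW * pos_l[p, m]
                w_net[w] -= INH_LW * (pos_l[p].sum() - pos_l[p, m])

        # Word-word competition: each word is inhibited by the others.
        w_net -= INH_WW * (pos_w.sum() - pos_w)

        # Word -> letter top-down feedback boosts each word's own letters.
        for w, word in enumerate(WORDS):
            for p in range(POS):
                l_net[p, ALPHABET.index(word[p])] += EXC_WL * pos_w[w]

        letters = np.clip(letters + l_net - DECAY * (letters - REST), MIN_A, MAX_A)
        words = np.clip(words + w_net - DECAY * (words - REST), MIN_A, MAX_A)
    return letters, words

letters, words = run("WORK")
print(dict(zip(WORDS, words.round(2))))              # WORK should dominate
k = letters[3, ALPHABET.index("K")]
print(f'activation of "K" in position 4: {k:.2f}')   # boosted top-down
```

Run on the stimulus “WORK”, this toy network should show “WORK” winning the word-level competition while “WORD” rises briefly before being suppressed, with top-down feedback keeping “K” in position 4 strongly active, mirroring the curves sketched above.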

Page 37: Visual Cognition II Object Perception

For a demo of the IA model, see:

http://www.itee.uq.edu.au/~cogs2010/cmc/chapters/LetterPerception/

Page 38: Visual Cognition II Object Perception

Take-home message

• What you see is not what is out there in the world (i.e., perception is not like “taking a picture”) but the result of visual computation, shaped by evolution to perform only those computations critical for survival.