
Visual Thinking: Colin Ware Lectures 2013, Lecture 8: Navigation



Do these make any sense?

Navigation

Metaphors and methods
Affordances
Ultimately about getting information
Geographic space
Non-metaphoric navigation

The affordance concept

Term coined by J. J. Gibson (a direct realist): properties of the world are perceived in terms of their potential for action (physical model, direct perception).
Physical affordances
Cognitive affordances

Navigation interface metaphors (figure panels):
(a) World-in-hand: the virtual scene is coupled to a 6 DOF handle controller.
(d) Flying vehicle control: the virtual scene is driven by a joystick controller.
(c) Walking interface: the virtual scene is driven by a treadmill controller.

Walking-on-the-spot interface: used in virtual reality systems; it is actually a head-bobbing interface.

Real walking is both more natural and gives better presence than either flying or walking on the spot.

Evaluation

Exploration and explanation; cognitive and physical affordances.
Task 1: find areas of detail in the scene.
Task 2: make the best movie.
For examples, see "Classic 3D user interaction techniques for immersive virtual reality revisited".

Non-metaphoric Focus+Context

Problem: how not to get lost. Keep focus while remaining aware of the context.

Classic paper: Furnas, G. W., Generalized fisheye views. Human Factors in Computing Systems, CHI '86 Conference Proceedings, Boston, April 13-17, 1986, 16-23.

Non-metaphoric interfaces
ZUIs (Bederson)
Focus in context
Using 3D to give 2D context
Dill, Bartram: Intelligent Zoom
Perspective Wall
www.thebrain.com

Table Lens (http://www.nass.usda.gov/research/Crop_acre97.html)

POI navigation (Mackinlay)

Point of interest: select a point of interest and move the viewpoint toward that point, plus view-direction reorientation.

The viewpoint closes in exponentially: the distance to the target falls as dist = start x C^t (C < 1), so speed is proportional to the remaining distance.
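A minimal sketch of this exponential approach (the function and parameter names are mine, not from the lecture): each frame the viewpoint covers a fixed fraction of the remaining distance to the point of interest, which produces the dist = start x C^t behaviour.

typedef struct { double x, y, z; } Vec3;

/* Advance the viewpoint a constant fraction f of the remaining distance toward
   the point of interest. Applied every frame, the remaining distance decays
   exponentially: after n frames it is d_start * (1 - f)^n. View-direction
   reorientation can be blended toward the target in the same way. */
void poi_step(Vec3 *viewpoint, Vec3 poi, double f)
{
    viewpoint->x += f * (poi.x - viewpoint->x);
    viewpoint->y += f * (poi.y - viewpoint->y);
    viewpoint->z += f * (poi.z - viewpoint->z);
}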

COW (center of workspace) navigation

Move objects to the center of the workspace and zoom about the center.
Initially object-based, it became surface-based.
Exponential scale changes, d = k^t: a factor of 4 per second (10 seconds scales by roughly a million).

Better for rotations (people like to rotate around points of interest).

COW navigation in Graph Visualizer 3D.

The concept: translate to the center of the workspace, then scale.

GeoZui3D: zooming plus 2 DOF rotations. Translate a point on the surface to the center, then scale; or translate and scale together (8x per second).
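A sketch of the exponential scale change (the rate constant and names are assumptions, using the factor-of-4-per-second figure above): the selected point is first translated to the center of the workspace, and the scale is then multiplied by rate^dt each frame, so the total time for a given zoom is log(scale change) / log(rate).

#include <math.h>

/* One animation step of COW zooming: the scale grows by a fixed factor per
   second (e.g. 4.0), i.e. scale(t) = k^t. A million-fold change at 4x per
   second therefore takes log(1e6) / log(4), roughly 10 seconds. */
double cow_zoom_step(double scale, double rate_per_second, double dt)
{
    return scale * pow(rate_per_second, dt);
}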

Navigation as a cost of knowledge: how much information can we gain per unit time?

Intra-saccade query execution (0.04 sec)
An eye movement (0.5 sec for < 10 deg; 1 sec for > 20 deg)
A hypertext click (1.5 sec, but loss of context)
A pan or scroll (3 sec, but we don't get far)
Walking (30 sec; we don't get far)
Flying (faster, but can be tuned)
Zooming: t = log(scale change)
Fisheye (max 5x); DragMag (max 30x)

How to navigate large 2½D spaces? (Matt Plumlee) Zooming vs. multiple windows.

Key problem: how can we keep focus and maintain context?

Focus is what we are attending to now; context is what we may wish to attend to.

Two solutions: zooming and multiple windows.

When is zooming better than multiple windows?

Key insight: visual working memory is a very limited resource, holding only about 3 objects.

GeoZui3D

Task: searching for target patterns that match

Cognitive model (grossly simplified):

Time = setup cost + (number of visits x time per visit)

The number of visits is a function of the number of objects (and visual complexity).

When there are more objects than visual working memory can hold, multiple visits are needed (see the sketch below).
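A sketch of this cost model (the ceiling rule and the roughly-3-object capacity are my plug-ins for "number of visits is a function of number of objects"):

/* Time = setup cost + number of visits * time per visit. If more targets must
   be compared than visual working memory can hold (about 3 objects), extra
   visits are needed: here, one visit per working-memory load of targets. */
double predicted_time(double setup_cost, int num_targets,
                      int wm_capacity, double time_per_visit)
{
    int visits = (num_targets + wm_capacity - 1) / wm_capacity;  /* ceiling division */
    return setup_cost + visits * time_per_visit;
}

Comparing this time for a zooming strategy and for a multiple-window strategy, with their different setup and per-visit costs, gives the prediction that follows.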

Prediction and results: as the number of targets (and thus the visual working memory load) increases, multiple windows become more attractive.

Generalized fisheye views (George Furnas)

A distance function (based on relevance): given a target item (the focus), less relevant items are dropped from the display.
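Furnas's degree-of-interest function is usually written as a-priori importance minus distance from the current focus; a minimal sketch under that reading (the names and the threshold are mine):

/* Generalized fisheye view: an item is displayed only if its degree of interest,
   DOI = a-priori importance - distance from the focus, exceeds a threshold. */
typedef struct { double a_priori_importance; double distance_to_focus; } Item;

int fisheye_visible(Item item, double threshold)
{
    double doi = item.a_priori_importance - item.distance_to_focus;
    return doi >= threshold;
}

Items whose DOI falls below the threshold are elided, which is what produces the fisheye effect around the focus.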

/* GLUT event skeleton for mouse-driven navigation: the mouse callbacks record
   the cursor position in (rx, ry), which redraw can use to steer the view. */
#include <GL/glut.h>

static int rx = 0, ry = 0, winHeight = 480;   /* ry is flipped: GLUT reports y from the top */

void redraw(void) {
    glClear(GL_COLOR_BUFFER_BIT);
    /* draw the navigable scene here, oriented or positioned by (rx, ry) */
    glutSwapBuffers();
}
void motion(int x, int y) { rx = x; ry = winHeight - y; glutPostRedisplay(); }
void mousebutton(int button, int state, int x, int y) {
    if (button == GLUT_LEFT_BUTTON && state == GLUT_DOWN)
    { rx = x; ry = winHeight - y; glutPostRedisplay(); }
}
void keyboard(unsigned char key, int x, int y) { /* handle key presses here */ }
int main(int argc, char *argv[]) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutInitWindowSize(640, winHeight);
    glutCreateWindow("navigation");
    glutDisplayFunc(redraw); glutMotionFunc(motion);
    glutMouseFunc(mousebutton); glutKeyboardFunc(keyboard);
    glutMainLoop();
    return 0;
}

Custom navigation in TrackPlot: data-centered Magic Keys, widgets, time bar, play mode.

Map: ahead-up versus track-up


North-up for shared environment

Ahead-up for novices

View marker gives best of both
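A sketch of the two orientations (my own formulation, not from the lecture): a north-up map keeps a fixed rotation, while an ahead-up (track-up) map is counter-rotated about the viewpoint by the current heading so that the direction of travel always points up.

#include <math.h>

/* Transform a map point into display coordinates centered on the viewpoint.
   heading is the travel direction in radians, measured clockwise from north;
   for ahead-up the map is counter-rotated by the heading so "ahead" is up. */
void map_to_screen(double map_x, double map_y, double view_x, double view_y,
                   double heading, int ahead_up,
                   double *screen_x, double *screen_y)
{
    double dx = map_x - view_x, dy = map_y - view_y;
    double a = ahead_up ? heading : 0.0;        /* north-up: no rotation */
    *screen_x = dx * cos(a) - dy * sin(a);
    *screen_y = dx * sin(a) + dy * cos(a);
}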

Mental maps

How do we encode space?

Siegel and White

Three kinds of spatial knowledge

1) Categorical (declarative) knowledge of landmarks.

2) Topological (procedural) knowledge of links between landmarks

3) Spatial (a cognitive spatial map).

Acquired in the above order

Colle and Reid’s study

An environment with rooms and objects; subjects were tested on the relative locations of objects. The results show that relative direction was encoded for objects seen simultaneously, but not for objects in different rooms.

Implication: mental maps can be built quickly, so we should provide overviews (ZUIs are a good idea).

Lynch: The Image of the City. Lynch's types:

Type        Examples                       Function
Path        Street, canal, transit line    Channel for movement
Edges       Fence, riverbank               District limits
Districts   Neighborhood                   Reference region
Nodes       Town square, public building   Focal point for travel
Landmarks   Statue                         Reference point

Vinson’s design guidelines

There should be enough landmarks so that a small number are visible at any time.

Each landmark should be visually distinct from the others.

Landmarks should be visible at all navigable scales.

Landmarks should be placed on major paths and at intersections of paths.

A tight loop between user and data: rapid interaction methods

Brushing: all representations of the same object are highlighted simultaneously; allows rapid selection.

Dynamic queries: select a range in a multi-dimensional data space using multiple sliders (FilmFinder: Shneiderman); see the sketch after this list.

Interactive range queries: Munzner, Ware.

Magic lenses: transform or reveal the data within a spatial area of the display.

Drilling down: click to reveal more about some aspect of the data.
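A sketch of the dynamic-query idea (the record fields and three-dimension layout are illustrative, not FilmFinder's actual schema): each slider constrains one dimension, and after any slider moves, only records inside every range remain visible.

#define NUM_DIMS 3   /* e.g. year, length, rating: purely illustrative */

typedef struct { double v[NUM_DIMS]; int visible; } Record;
typedef struct { double lo, hi; } Range;         /* one range per slider */

/* Re-filter the data set whenever a slider moves: a record stays visible only
   if every one of its dimensions lies inside the corresponding slider range. */
void dynamic_query(Record *records, int n, const Range ranges[NUM_DIMS])
{
    for (int i = 0; i < n; i++) {
        records[i].visible = 1;
        for (int d = 0; d < NUM_DIMS; d++)
            if (records[i].v[d] < ranges[d].lo || records[i].v[d] > ranges[d].hi)
                records[i].visible = 0;
    }
}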

Parallel coordinates (Inselberg): for multi-dimensional discrete data. Each record is drawn as a polyline crossing a set of parallel axes (sketch below).
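A sketch of the layout step (the axis spacing and normalization are my assumptions): each dimension gets a vertical axis, and one record becomes a polyline joining its normalized value on each axis.

#define DIMS 4   /* number of data dimensions; illustrative */

/* Compute the polyline vertices for one record: dimension d is drawn on the
   vertical axis at x = d * axis_spacing, with its value normalized to [0, height]. */
void parallel_coords_polyline(const double value[DIMS],
                              const double dim_min[DIMS], const double dim_max[DIMS],
                              double axis_spacing, double height,
                              double out_x[DIMS], double out_y[DIMS])
{
    for (int d = 0; d < DIMS; d++) {
        double t = (value[d] - dim_min[d]) / (dim_max[d] - dim_min[d]);
        out_x[d] = d * axis_spacing;
        out_y[d] = t * height;
    }
}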

Event brushing: linked kinetic displays (security events in Afghanistan).

Linked views: a scatterplot (victim vs. city), the event distribution in space, and an active timeline histogram.

Highlighted events move in all displays.

Motion helps analysts see relations between patterns in time and space.

Worldlets: 3D navigation aids (Elvins et al.). Worldlets can be rotated to facilitate recognition; subjects performed significantly better.

World-in-hand (virtual scene coupled to a 6 DOF handle controller, panel a)

Good for discrete objects.

Poor affordances for looking around, for scale changes, and for detail.

Problems with the center of rotation in extended scenes.

Flying vehicle control (virtual scene driven by a joystick controller, panel d)

Hardest to learn but most flexible.

Non-linear velocity control (a sketch follows).

Spontaneous switches in mental model; the predictor as a solution.
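One common form of non-linear velocity control, as a sketch (not necessarily the mapping used in the lecture's flying interface): speed grows as a power of joystick deflection, so small deflections give fine positioning while full deflection still allows rapid travel.

/* Map a joystick deflection in [-1, 1] to forward velocity. A cubic mapping
   keeps small deflections slow for precise control; the sign is preserved so
   pulling back flies backward. */
double flying_velocity(double deflection, double max_speed)
{
    return max_speed * deflection * deflection * deflection;
}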

Eyeball-in-hand (virtual scene viewed via a 6 DOF handle controller, panel b)

Easiest under some circumstances.

Poor physical affordances for many views.

Subjects sometimes acted as if the model were actually present.