
SPECULATIVE COMPUTING: Aesthetic provocations in humanities computing

With roots in computational linguistics, stylometrics, and other quantitative statistical

methods for analyzing features of textual documents, humanities computing has had very

little use for analytic tools with foundations in visual epistemology. In this respect

humanities computing follows the text-based, (dare I still say -- logocentric?) approach

typical of traditional humanities. "Digital" humanities are distinguished by the use of

computational methods, of course, but they also make frequent use of visual means of

information display (tables, graphs, and other forms of data presentation) that have

become common in desktop and Web environments. Two significant challenges arise as a

result of these developments. The first is to meet requirements that humanistic thought

conform to the logical systematicity required by computational methods. The second is to

overcome humanists' long-standing resistance (ranging from passively ignorant to

actively hostile) to visual forms of knowledge production. Either by itself would raise a

host of issues. But the addition of a third challenge -- to engage computing to produce useful aesthetic provocations -- pushes mathesis and graphesis into even more unfamiliar

collaborations. Speculative approaches to digital humanities engage subjective and

intuitive tools, including visual interfaces, as primary means of interpretation in

computational environments. Most importantly, the speculative approach is premised on

the idea that a work is constituted in an interpretation enacted by an interpreter. The

computational processes that serve speculative inquiry must be dynamic and constitutive

in their operation, not merely procedural and mechanistic.


To some extent these notions are a radical departure from established practices in

digital humanities. As the articles in this volume attest, many of the practices in digital

humanities are becoming standardized. Technical and practical environments have

become more stable. So have procedures for developing meta-data, for content

modelling, for document classification and organization, search instruments, and the

various protocols of ordering, sorting, and accessing information in digital formats. In its

broadest conceptualization, speculative computing is premised on the conviction that

logical, systematic knowledge representation, adequate though it may be for many fields

of inquiry, including many aspects of the humanities, is not sufficient for the

interpretation of imaginative artifacts.

Intellectual debates, collateral but substantive, have arisen as digital humanists

engage long-standing critical discussions of the "textual condition" in its material,

graphical, bibliographical, semantic, and social dimensions. No task of information

management is without its theoretical subtext, just as no act of instrumental application is

without its ideological aspects. We know that the "technical" tasks we perform are

themselves acts of interpretation. Intellectual decisions that enable even such

fundamental activities as keyword searching are fraught with interpretive baggage. We

know this -- just as surely as we understand that the front page of any search engine (the

world according to "Yahoo!") is as idiosyncratic as the Chinese Emperor's Encyclopedia

famously conjured by Borges and commented upon by Foucault. Any "order of things" is

always an expression of human foibles and quirks, however naturalized it appears at a

particular cultural moment. We often pretend otherwise in order to enact the necessary

day-to-day "job" in front of us, bracketing out the (sometimes egregious) assumptions


that allow computational methods (such as mark-up or data models) to operate

effectively.

Still, we didn't arrive at digital humanities naively. Though work in digital

humanities has turned some relativists into pragmatists under pressure of technical

exigencies, it has also reinvigorated our collective attention to the heart of our intellectual

undertaking. As the applied knowledge of digital humanities becomes integrated into

libraries and archives, providing the foundation for collection management and delivery

systems, the ecological niche occupied by theory is called on to foster new self-reflective

activity. We are not only able to use digital instruments to extend humanities research,

but to reflect on the methods and premises that shape our approach to knowledge and our

understanding of how interpretation is framed. Digital humanities projects are not simply

mechanistic applications of technical knowledge, but occasions for critical self-

consciousness.

Such assertions beg for substantiation. Can we demonstrate that humanities

computing isn't "just" or "merely" a technical innovation, but a critical watershed as

important as deconstruction, cultural studies, feminist thinking? To do so, we have to

show that digital approaches don't simply provide objects of study in new formats, but

shift the critical ground on which we conceptualize our activity. The challenge is to

structure instruments that engage and enable these investigations, not only those that

allow theoretically glossed discussion of them. From a distance, even a middle distance

of practical engagement, much of what is currently done in digital humanities has the

look of automation. Distinguished from augmentation by Douglas Engelbart, one of the

pioneering figures of graphical interface design, automation suggests mechanistic


application of technical knowledge according to invariant principles. Once put into

motion, an automatic system operates, and its success or benefit depends on the original

design. By contrast, Engelbart suggested that augmentation extends our intellectual and

cognitive -- even imaginative -- capabilities through prosthetic means, enhancing the very

capabilities according to which the operations we program into a computer can be

conceived. Creating programs that have emergent properties, or that bootstrap their

capabilities through feedback loops or other recursive structures, is one stream of

research work. Creating digital environments that engage human capacities for subjective

interpretation, interpellating the subjective into computational activity, is another.

Prevailing approaches to humanities computing tend to lock users into

procedural strictures. Once determined, a data structure or content model becomes a

template restricting interpretation. Not in your tag set? Not subject to hierarchical

ordering? Too bad. Current methods don't allow much flexibility -- a little like learning to

dance by fitting your feet to footsteps molded into concrete. Speculative computing

suggests that the concrete be replaced by plasticine that remains malleable, receptive to

the trace of interpretive moves. Computational management of humanities documents

requires that "content" has be subjected to analysis and then put into conformity with

formal principles.

Much of the intellectual charge of digital humanities has come from the

confrontation between the seemingly ambiguous nature of imaginative artifacts and the

requirements for formal dis-ambiguation essential for data structures and schema. The

requirement that a work of fiction or poetry be understood as an "ordered hierarchy of content objects" (Allen Renear's oft-cited phrase for the principles underlying the Text Encoding Initiative) raises issues, as Jerome McGann has pointed out. Productive as these

exchanges have been, they haven't turned the resigned shrug with which such procedures are accepted and pressed into practice into anything more than a futile protest against the Leviathan of standardization. Alternatives are clearly needed,

not merely objections. The problems are not just with ordered hierarchies, but with the

assumption that an artifact is a stable, constant object for empirical observation, rather

than a work produced through interpretation.

Speculative computing is an experiment designed to explore alternative

approaches. On a technical level, the challenge is to change the sequence of events

through which the process of "dis-ambiguation" occurs. Subjective interpretive activity can be formalized concurrently with its production -- at least, that is the design

principle we have used as the basis of Temporal Modelling.

By creating a constrained visual interface, Temporal Modelling puts subjective

interpretation within the system, rather than outside it. The subjective, intuitive

interpretation is captured and then formalized into a structured data scheme, rather than

the other way around. The interface gives rise to XML that can be used to design a DTD or be transformed through XSLT and other manipulations.

The technical parameters that realize these conceptual premises are described in the case study below. The project is grounded on the conviction that subjective

approaches to knowledge representation can function with an intellectual rigor

comparable to that usually claimed by more overtly formal systems of thought. This

experimental approach has potential to expand humanities computing in theoretical scope

and practical application.
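As a rough sketch of the sequence just described -- capture first, formalize concurrently -- consider the Python fragment below. The element and attribute names are hypothetical, invented for illustration; they are not the project's actual schema. Each interpretive gesture is appended to an XML tree at the moment it is made, yielding a document that could then be transformed with XSLT or used to draft a DTD:

```python
# A minimal sketch, assuming hypothetical element and attribute names;
# not the actual Temporal Modelling schema.
import xml.etree.ElementTree as ET

class PlaySpaceSketch:
    def __init__(self):
        self.model = ET.Element("model")  # root of the exported data scheme

    def place(self, kind, position, **attrs):
        """Formalize one interpretive gesture at the moment it is made."""
        obj = ET.SubElement(self.model, kind, position=str(position))
        for name, value in attrs.items():  # subjective labels: mood, certainty...
            obj.set(name, str(value))
        return obj

    def export(self):
        """XML ready for XSLT transformation or DTD design."""
        return ET.tostring(self.model, encoding="unicode")

space = PlaySpaceSketch()
space.place("event", 1898, label="Faustroll written", certainty="speculative")
space.place("interval", 1890, end=1907, label="Jarry's working life")
print(space.export())
```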


Our path into the "speculative" has been charted by means of aesthetic

exploration, emphasizing visual means of interpretation. These are informed by the

history of aesthetics in descriptive and generative approaches, as well as by the

anomalous principles of 'pataphysics, that invention of late 19th century French poet-

philosopher Alfred Jarry. An outline of these aesthetic, literary, and critical traditions,

and their role in the ongoing development of digital humanities, forms the first part of

this paper. This is followed by a discussion of the project that demonstrates the working

viability of the precepts of speculative computing, Temporal Modelling. I invited

Bethany Nowviskie to author this second section, since the conceptual and technical

development of this research project has proceeded largely under her intellectual

guidance.

Aesthetics and digital humanities

The history of aesthetics is populated chiefly by descriptive approaches. These are

concerned with truth value, the specificity of individual media and activity “proper” to

their form, the development of taste and knowledge, and the capacity of aesthetics to

contribute to moral improvement -- and, of course, notions of beauty and the aesthetic

experience. These concerns are useful in assessing the aesthetic capabilities of digital

media -- as well as visual forms of knowledge production – even if only because of the

peculiar prejudices such traditional approaches have instilled in our common

understanding. For instance, long-standing tensions between images and text-based forms

of knowledge production still plague humanist inquiry. A disposition against visual


epistemology is deeply rooted in conceptions of image and word within their morally and

theoretically charged history in western philosophy. A schematic review of such key

traditional issues provides a useful preface to understanding current concerns, particularly

as visual tools become integrated into digital contexts as primary instruments of

humanities activity.

Fundamental distinctions differentiate descriptive modes from the intellectual

traditions that inform our project: generative aesthetics, 'pataphysics, speculative thought

and quantum poetics. Generative approaches are concerned with the creation of form,

rather than its assessment on grounds of truth, purity, epistemological, cognitive, or

formal value. Speculative aesthetics is a rubric hatched for our specific purposes, and

incorporates emergent and dynamic principles into interface design while also making a

place for subjectivity within the computational environment. 'Pataphysics inverts the scientific method, proceeding from and sustaining exceptions and unique cases, while quantum methods insist on conditions of indeterminacy that any interpretive act intervenes in. Dynamic and productive with respect to the subject-object dialectic of

perception and cognition, the quantum extensions of speculative aesthetics have

implications for applied and theoretical dimensions of computational humanities.

Before plunging into the vertiginous world of speculative and 'pataphysical

endeavors, some frameworks of traditional aesthetics provide useful points of departure

for understanding the difficulties of introducing visual means of knowledge

representation into digital humanities contexts. To reiterate, the themes of descriptive

aesthetics that are most potently brought to bear on digital images are: truth value,

"purity" or capabilities of a medium, the cognitive values of aesthetics, and the moral


improvement aesthetic experience supposedly fosters. Debates about beauty I shall leave

aside, except in so far as they touch on questions of utility, and the commonplace

distinction between applied and artistic activity.

The emphasis placed on the distinction between truth-value and imitation in

classical philosophy persists in contemporary suspicions of digital images. The

simulacral representations that circulate in cyberspace (including digital displays of

information in visual form) are so many removes from "truth" that they would be charged

with multiple counts of aesthetic violation in any Socratic court. Platonic hierarchies, and

their negative stigmatization of images as mere imitations of illusions, are famously

entrenched in western thought. Whether we consider the virtual image to be a thing in-

itself, with ontological status as a first-order imitation, or as a mimetic form and thus yet

another remove from those Ideas whose truth we attempt to ascertain, hardly matters. The

fixed hierarchy assesses aesthetic judgment against a well-marked scale of authenticity.

From a theological perspective, images are subject to negative judgment except

when they serve as instruments of meditation, as material forms whose properties

function as a first rung on the long ladder towards enlightenment. Such attitudes are

characterized by a disregard for embodied intelligence and of the positive capacities of

sensory perception. Denigrating the epistemological capacity of visualisation, they

assume that art and artifice are debased from the outset -- as deceptive, indulgent acts of

hubris -- or worse, temptations to sinful sensuality. But if images are necessarily inferior

to some "Idea" whose pale shadow they represent, digital images are redeemed only

when they bring the ideal form of data into presentation. The difficulty of such reasoning,


however, is that it collapses into questions about what form data has in a disembodied

condition.

Aristotle’s concern with how things are made, not just how "truthful" they are,

suggested that it was necessary to pay attention to the properties particular to each

medium. The idea of a "proper" character for poetry was opposed to -- or at least distinct

from -- that of visual forms. Likewise, sculpture was distinguished from painting, and so

on, in an approach dependent on specificity of media and their identifying properties.

This notion of “propriety” led to differentiation among aesthetic forms on the basis of

media, providing a philosophical foundation for distinctions that resonate throughout

literary and visual studies. Investigation of the distinct properties of media was

formulated most famously in the modern era by Gotthold Lessing (Laocoön, 1766). The

value judgements that lurk in his distinctions continue to surface in disciplinary and

critical activity to the present day. Such boundaries are well policed. But "new media"

challenge these distinctions through the use of meta-technologies and inter-media

sensibility. In addition, artistic practices forged from conceptual, procedural, and

computational realms can't be well-accommodated by aesthetic structures with "purist"

underpinnings. If a data file can be input from a typewriter keyboard and output as

musical notes, then the idea of the "purity" of media seems irrelevant.

Critics trained in or focused on the modern tradition (in its 20th century form and

reaching back into 18th century aesthetics) have difficulty letting go of the long-standing

distinction between textual and visual forms of representation – as well as of the

hierarchy that places text above image. The disjunct between literary and visual

modernism, the very premise of an autonomous visuality freed of literary and textual


referents, continues to position these approaches to knowledge representation within

separate domains. The consequences are profound. Intellectual training in the humanities

only rarely includes skills in interpretation of images or media in any but the most

thematic or iconographic terms. The idea that visual representation has the capacity to

serve as a primary tool of knowledge production is an almost foreign notion to most

humanists. Add to this that many latter-day formalists conceive of digital objects as

"immaterial" and the complicated legacy of hierarchical aesthetics becomes a very real

obstacle to be overcome. Naivete aside (digital artifacts are highly, complexly material),

these habits of thought work against conceptualizing a visual approach to digital

humanities. Nonetheless, visualization tools have long been part of statistical analysis in digital humanities. So long as these are kept in their place, a

secondary and subservient "display" of information, their dubious character is at least

held in check. Other conceptual difficulties arise when visual interfaces are used to

create, rather than merely present, information.

In a teleologically grounded aesthetics, forms of creative expression are

understood to participate in spiritual evolution, moral improvement -- or its opposite.

Whether staged as cultural or individual improvements in character through

exposure to the "best that has been thought" embodied in the artifacts of high, fine art, the

idea lingers: the arts, visual, musical, or poetical, somehow contribute to moral

improvement. Samuel Taylor Coleridge, Matthew Arnold, and Walter Pater all reinforced

this sermon on moral uplift in the 19th century. Even now the humanities and fine arts

often find themselves justified on these grounds. The link between ideas of progress and the application of "digital" technology to the humanities continues to be plagued by pernicious notions of improvement.

Hegel wrote a fine script for the progressive history of aesthetic forms. The

cultural authority of technology is insidiously bound to such teleologies – especially

when it becomes interlinked with the place granted to instrumental rationality in modern

culture. The speculative approach, which interjects a long-repressed streak of subjective

ambiguity, threatens the idea that digital representations present a perfect match of idea

and form.

But Hegel’s dialectic provides a benefit. It reorients our understanding of

aesthetic form, pivoting it away from the classical conception of static, fixed ideal. The

interaction of thesis and antithesis in Hegelian principles provides a dynamic basis for

thinking about transformation and change -- but within a structure of progress towards an

Absolute. Hegel believed that art was concerned "with the liberation of the mind and

spirit from the content and forms of finitude" that would compensate for the "bitter labour

of knowledge". Aesthetic experience, presumably, follows this visionary path. If an

aesthetic mode could manage to manifest ideal thought, presumably in the form of "pure

data" -- and give it perfect form through technological means, then descriptive aesthetics

would have in its sights a sense of teleological completion. Mind, reason, aesthetic

expression – all would align. But evidence that humankind has reached a pinnacle of

spiritual perfection in this barely new millennium is in short supply. In this era following

poststructuralism, and influenced by a generation of deconstruction and post-colonial

theory, we can't really still imagine we are making "progress". Still, the idea that digital


technology provides a high point of human intelligence, or other characterisations of its

capabilities in superlative terms, persists.

18th century aestheticians shifted their attention to the nature of subjective

experience and away from an earlier focus on standards of harmonious perfection for

objectified forms. In discussions of taste, subjective opinion comes to the fore. Well

suited to an era of careful cultivation of elite sensibility, this discussion of taste and

refinement emphasizes the idea of expertise. Connoisseurship is the epitome of

knowledge created through systematic refining of sensation. Alexander Baumgarten

sought in aesthetic experience the perfection proper to thought. He conceived that the

object of aesthetics was "to analyse the faculty of knowledge" or "to investigate the kind

of perfection proper to perception which is a lower level of cognition but autonomous and

possessed of its own laws". The final phrase resonates profoundly, granting aesthetics a

substantive, rather than trivial role. But aesthetic sensibilities -- and objects -- were

distinguished from those of techne or utility. The class divide of laborer and intellectual

aesthete is reinforced in this distinction. The legacy of this attitude persists most

perniciously, and the idea of the aesthetic function of utilitarian objects is as bracketed in

digital environments as it is in the well-marked domains of applied and "pure" arts.

In what is arguably the most influential work in modern aesthetics, Immanuel

Kant elevated the role of aesthetics -- but at a price. The Critique of Judgment (1790),

known as the "third" critique -- since it bridged the first and second critiques of Pure

Reason (knowledge) and Practical Reason (desire) -- contained an outline for

aesthetics as the understanding of design, order, and form. But this understanding was

meant to grasp "Purposiveness without purpose". In other words, appreciation of design


outside of utility was the goal of aesthetics. Knowledge seeking must be "free",

disinterested, without end or aim, he asserted. In his system of three modes of consciousness -- knowledge, desire, and feeling (Pure Reason, Practical Reason, and Judgement) -- Kant positioned aesthetics between knowledge and desire, between pure

and practical reasons. Aesthetic judgment served as a bridge between mind and sense.

But what about the function of emergent and participatory subjectivity? Subjectivity that

affects the system of judgement? These are alien notions. For the Enlightenment thinker, the objects under observation and the mind of the observer interact from autonomous realms of stability. We cannot look to Kant for anticipation of a "quantum" aesthetics in which conditions exist in indeterminacy until intervened in by a participatory sentience.

In summary, we can see that traditional aesthetics bequeaths intellectual parameters by which we can distinguish degrees of truth, imitation, refinement, taste, and even the "special powers of each medium" -- all contributing strains to an understanding of the knowledge-production aspects of visual aesthetics in digital media. But none provides a foundation for a generative approach, let alone a quantum or speculative one.

Why?

For all their differences these approaches share a common characteristic. They are

all descriptive systems. They assume that form pre-exists the act of apprehension, that

aesthetic objects are independent of subjective perception -- and vice versa. They assume

stable, static relations between knowledge and its representation -- even if epistemes

change (e.g. Hegel's dialectical forms evolve, but they do not depend on contingent

circumstances of apperception in order to come into being.) The very foundations of

digital media, however, are procedural, generative, and iterative in ways that bring these


issues to the fore. We can transfer the insights gleaned from our understanding of digital

artifacts onto traditional documents -- and we should -- just as similar insights could have

arisen from non-digital practices. The speculative approach is not specific to digital

practices -- nor are generative methods. Both, however, are premised very differently

from that of formal, rational, empirical, or classical aesthetics.

Generative aesthetics

The Jewish ecstatic traditions of gematria (a method of interpretation of letter patterns in

sacred texts) and Kabbalah (with its inducement of trance conditions through repetitive

combinatoric meditation) provide precedents for enacting and understanding generative

practices. Secular literary and artistic traditions have also drawn on permutational,

combinatoric, and other programmed means of production. Aleatory procedures

(seemingly at odds with formal constraints of a "program" but incorporated into

instructions and procedures) have been used to generate imaginative and aesthetic works

for more than a century, in accord with the enigmatic cautions uttered by Stéphane

Mallarmé that the "throw of the dice would never abolish chance." In each of the

domains just cited, aesthetic production engages with non-rational systems of thought --

whether mystical, heretical, secular, or irreverent. Among these, 20th-century

developments in generative aesthetics have a specific place and relevance for digital

humanities.

Generative aesthetics is the phrase used by the German mathematician and visual

poet Max Bense to designate works created using algorithmic processes and


computational means for their production. "The system of generative aesthetics aims at a numerical and operational description of characteristics of aesthetic structures," Bense wrote in his prescription for a generative aesthetics in the early 1960s. Algorithmically

generated computations would give rise to data sets in turn expressed in visual or other

output displays. Bense's formalist bias is evident. He focused on the description of formal

properties of visual images, looking for a match between their appearance and the

instructions that bring them into being. They were evidence of the elegance and formal

beauty of algorithms. They also demonstrated the ability of a "machine" to produce

aesthetically harmonious images. His rational, mathematical disposition succeeded in

further distancing subjectivity from art, suggesting that form exists independent of any

viewer or artist. Bense's systematic means preclude subjectivity. But his essay marks an

important milestone in the history of aesthetics, articulating as it does a procedural

approach to form giving that is compatible with computational methods.

Whether such work has any capacity to become emergent at a level beyond the

programmed processes of its original conception is another question. Procedural

approaches are limited because they focus on calculation (manipulation of quantifiable

parameters) rather than symbolic properties of computing (manipulation at the level of

represented information), thus remaining mechanistic in conception and execution.

Reconceptualizing the mathematical premises of combinatoric and permutational processes so they work at symbolic, even semantic and expressive, levels is

crucial to the extension of generative aesthetics into speculative, 'pataphysical, or

quantum approaches.


Generative aesthetics has a different lineage than that of traditional aesthetics.

Here the key points of reference would not be Baumgarten and Kant, Hegel and Walter

Pater, Roger Fry, Clive Bell, or Theodor Adorno -- but the generative morphology of the

5th century BC Sanskrit grammarian Panini, the rational calculus of Leibniz, the

visionary work of Charles Babbage, George Boole, Alan Turing, Herbert Simon, and

Marvin Minsky. Other important contributions come from the traditions of self-

consciously procedural poetics and art such as that of Lautréamont, Duchamp, Cage,

LeWitt, Maciunas, Stockhausen, and so on. The keyword vocabulary in this approach

would not be comprised of beauty, truth, mimesis, taste, and form -- but of emergent,

autopoietic, generative, iterative, algorithmic, speculative, and so on.

The intellectual tradition of generative aesthetics inspired artists working in

conceptual and procedural approaches throughout the 20th century. Earlier precedents can

be found, but Dada strategies of composition made chance operations a crucial element of

poetic and visual art. The working methods of Marcel Duchamp provide ample testimony

to the success of this experiment. Duchamp's exemplary "unfinished" piece, The Large

Glass, records a sequence of representations created through actions put into play to

create tangible traces of abstract thought. Duchamp precipitated form from such activity

into material residue, rather than addressing the formal parameters of artistic form-giving

according to traditional notions of beauty (proportion, harmony, or truth, for instance).

He marked a radical departure from even the innovative visual means of earlier avant-

garde visual traditions (Post-Impressionism, Cubism, Futurism and so forth). For all their

conceptual invention, these were still bound up with visual styles.


In the 1960s and 1970s, many works characteristic of these approaches were

instruction based. Composer John Cage made extensive use of chance operations,

establishing his visual scores as points of departure for improvisational response, rather

than as prescriptive guidelines for replication of ideal forms of musical works. Fluxus

artists such as George Brecht, George Maciunas, Robert Whitman, or Alison Knowles

drew on some of the conceptual parameters invoked by Dada artists a generation earlier.

The decades of the 1950s and 1960s are peopled with individuals prone to such inspired imaginings: Herbert Franke and Melvin Prueitt, Jasia Reichardt, the heterogeneous research teams at Bell Labs (Kenneth Knowlton, Leon Harmon), and dozens of other artists who worked in robotics, electronics, video, visual and audio signal processing, or with new technology that engaged combinatoric or permutational

methods for production of poetry, prose, music, or other works. The legacy of this work

remains active. Digital art-making exists in all disciplines and genres, often hybridized

with traditional approaches in ways that integrate procedural methods and material

production.

One of the most sustained and significant projects in this spirit is Harold Cohen's

Aaron project. As a demonstration of artificial aesthetics, an attempt to encode artistic

creativity in several levels of instructions, Aaron is a highly developed instance of

generative work. Aaron was first conceived in the early 1970s, and not surprisingly its first iteration

corresponded to artificial vision research at the time. The conviction that perceptual

processes, if sufficiently understood, would provide a basis for computational models

predominated in research done by such pioneers as David Marr in the 1970s. Only as this

work progressed did researchers realize that perceptual processing of visual information


had to be accompanied by higher order cognitive representations. Merely understanding

"perception" was inadequate. Cognitive schema possessed of the capacity for emerging

complexity must also be factored into the explanation of the way vision worked.

Aaron reached a temporary impasse when it became clear that the methods of

generating shape and form within its programs had to be informed by such world-based

knowledge as the fact that tree trunks were thicker at the bottom than at the top. Vision,

cognition, and representation were all engaged in a dialogue of percepts and concepts.

Programming these into Aaron's operation pushed the project towards increasingly

sophisticated AI research. Aaron did not simulate sensory perceptual processing (with its

own complex mechanisms of sorting, classifying, actively seeking stimuli as well as

responding to them), but the cognitive representations of "intellectualized" knowledge

about visual forms and their production developed for Aaron made a dramatic

demonstration of generative aesthetics. Aaron was designed to create original expressive

artifacts -- new works of art. Because such projects have come into being as generative

machines before our very eyes, through well-recorded stages, they have shown us more

and more precisely just how that constitutive activity of cognitive function can be

conceived.

'Pataphysical sensibilities and quantum methods

Before returning to speculative computing, and to the case study of this essay, a note

about 'pataphysics is in order. I introduced 'pataphysics almost in passing in the

introduction above, not to diminish the impact of slipping this peculiar gorilla into the


chorus, but because I want to suggest that it offers an imaginative fillip to speculative

computing, rather than the other way around.

An invention of the late 19th century French poet-philosopher Alfred Jarry,

'pataphysics is a science of unique solutions, of exceptions. 'Pataphysics celebrates the

idiosyncratic and particular within the world of phenomena, thus providing a framework

for an aesthetics of specificity within generative practice. (This contrasts with Bense's

generative approach, which appears content with generalities of conception and formal

execution.)

Jarry declared the principles for the

new science in the fantastic pages of his novel Dr. Faustroll, 'Pataphysician: "Faustroll

defined the universe as that which is the exception to oneself". In his introduction to

Dr. Faustroll, Roger Shattuck described the three basic principles of Jarry's belief system:

clinamen, syzygy, and ethernity. Shattuck wrote: "Clinamen, an infinitesimal and

fortuitous swerve in the motion of an atom, formed the basis of Lucretius's theory of

matter and was invoked by Lord Kelvin when he proposed his 'kinetic theory of matter.'

To Jarry in 1898 it signified the very principle of creation, of reality as an exception

rather than the rule." Just as Jarry was proposing this suggestive reconceptualization of

physics, his contemporary Stéphane Mallarmé was calling the bluff on the end game to

metaphysics. Peter Bürger suggests that Mallarmé's conception of "the absolute"

coincides with a conception of aesthetic pleasure conceived of as a technological game,

driven by a non-existent mechanism. The substantive manifestation in poetic form shows

the workings of the mechanism as it enacts, unfolds. Generative and speculative

aesthetics are anticipated in the conceptualization of Mallarmé's approach.


What has any of this to do with computing?

Without 'pataphysical and speculative capabilities, instrumental reason locks

computing into an engineering, problem-solving sensibility: programs that work only within already defined parameters. The binarism between reason and its opposite,

endemic to western thought, founds scientific inquiry into truth on an empirical method.

Pledged to rational systematic consistency, this binarism finds an unambiguous

articulation in Book X of Plato's Republic. "The better part of the soul is that which trusts

to measure and calculation." The poet and visual artist "implant an evil constitution" --

indulging the "irrational nature" which is "very far removed from the true." Ancient

words, they prejudice the current condition in which the cultural authority of the

computer derives from its relation to symbolic logic at the expense of those inventions

and sensibilities that characterize imaginative thought. By contrast, speculative

approaches seek to create parameter-shifting, open-ended, inventive capabilities --

humanistic and imaginative by nature and disposition. Quantum methods extend these

principles. Simply stated, quantum interpretation notes that all situations are in a

condition of indeterminacy distributed across a range of probability until they are intervened in by observation. The goal of 'pataphysical and speculative computing is to keep

digital humanities from falling into mere technical application of standard practices

(whether administrative information management or engineering-style statistical calculation). To do so

requires finding ways to implement imaginative operations.

Speculative computing and the use of aesthetic provocation


Visual or graphic design has played almost no part in humanities computing, except for

the organized display of already structured information. Why should this be necessary? Or continue to be true? The possibility of integrating subjective perspectives into the processes of digital humanities exists. And though emergent systems for dynamic interface are not yet realizable, they are certainly conceivable. Such perspectives differentiate speculative approaches from generative ones.

The attitude that pervades information design as a field is almost entirely

subsumed by notions that data pre-exists display, and that the task of visual form-giving

is merely to turn a cognitive exercise into a perceptual one. While the value of intelligent

information design in the interpretation of statistical data can't be overestimated, and

dismissing the importance of this activity would be ridiculous, the limits of this approach

also have to be pointed out. Why? Because they circumscribe the condition of knowledge

in their apparent suggestion that information exists independently of visual presentation

and just waits for the "best" form in which it can be represented. Many of the digital

humanists I've encountered treat graphic design as a kind of accessorizing exercise, a

dressing up of information for public presentation after the real work of analysis has been

put into the content model, data structure, or processing algorithm. Arguing against this

attitude requires rethinking of the way embodiment gives rise to information in a primary

sense. It also requires recognition that embodiment is not a static or objective process, but

one that is dynamic and subjective.

Speculative computing is a technical term, fully compatible with the mechanistic

reason of techno-logical operations. It refers to the anticipation of probable outcomes

along possible forward branches in the processing of data. Speculation is used to


maximize efficient performance. By calculating the most likely next steps, it speeds up

processing. Unused paths are discarded as new possibilities are calculated. Speculation

doesn't eliminate options, but, as in any instance of gambling, the process weights the

likelihood of one path over another in advance of its occurrence. Speculation is a

mathematical operation unrelated to metaphysics or narrative theory, grounded in

probability and statistical assessments. Logic based, and quantitative, the process is pure

techné, applied knowledge, highly crafted, and utterly remote from any notion of poiesis

or aesthetic expression. Metaphorically, speculation invokes notions of possible worlds

spiralling outward from every node in the processing chain, vivid as the rings of radio

signals in the old RKO studios film logo. To a narratologist, the process suggests the

garden of forking paths, a way to read computing as a tale structured by nodes and

branches.
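In conventional terms, the mechanism might be sketched as follows. This is a simplified illustration of speculative evaluation in general, not of any particular processor's pipeline: the most probable branch is computed in advance of its occurrence, kept if the prediction holds, and discarded otherwise:

```python
# A simplified sketch of speculative evaluation; illustrative only.
# Real hardware speculates over instructions, not Python functions.
def speculate(branches, probability, actual):
    """branches maps each outcome to a deferred computation (a thunk);
    probability maps each outcome to its estimated likelihood."""
    guess = max(branches, key=probability)   # weight one path over another
    result = branches[guess]()               # computed in advance of need
    if guess == actual:
        return result                        # the gamble paid off
    return branches[actual]()                # mispredicted: discard, recompute

fast = speculate(
    branches={"hit": lambda: "fast path", "miss": lambda: "slow path"},
    probability={"hit": 0.9, "miss": 0.1}.get,
    actual="hit",
)
print(fast)  # -> fast path
```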

The phrase "speculative computing" resonates with suggestive possibilities,

conjuring images of unlikely outcomes and surprise events, imaginative leaps across the

circuits that comprise the electronic synapses of digital technology. The question that

hangs in that splendid interval is a fundamental one for many areas of computing

application: can the logic-based procedures of computational method be used to produce

an aesthetic provocation? We know, of course, that the logic of computing methods does

not in any way preclude their being used for illogical ends -- or for the processing of

information that is unsystematic, silly, trivial, or in any other way outside the bounds of

logical function. Very few fully logical or formally systematic forms of knowledge exist

in human thought beyond those few branches of mathematics or calculation grounded in

unambiguous procedures. Can speculation engage these formalized models of human

imagination at the level of computational processing? Can it include an intuitive site for processing subjective interpretation within formal means, rather than circumscribing it from

the outset? If so, what might those outcomes look like and suggest to the humanities

scholar engaged with the use of digital tools? Does the computer have the capacity to

generate a provocative aesthetic artifact?

Speculative computing extends the recognition that interpretation takes place

from inside a system, rather than from outside. Speculative approaches make it possible

for subjective interpretation to have a role in shaping the processes, not just the

structures, of digital humanities. When this occurs, outcomes go beyond descriptive,

generative, or predictive approaches to become speculative. New knowledge can be

created.

These are big claims. Can they be substantiated?

Temporal modelling (Bethany Nowviskie)

Temporal Modelling is a time machine for humanities computing. Not only does

it take time and the temporal relations inherent in humanities data as its computational

and aesthetic domain, enabling the production and manipulation of elaborate,

subjectively-inflected timelines, but it also allows its users to intervene in and alter the

conventional interpretive sequence of visual thinking in digital humanities.

The Temporal Modelling environment, under ongoing development at SpecLab

(University of Virginia), embodies a reversal of the increasingly-familiar practice of

generating visualizations algorithmically from marked or structured data, data that has


already been modelled and made to conform to a logical system. The aesthetic

provocations Johanna Drucker describes are most typically understood to exist at the

edges or termini of humanities computing projects. These are the graphs and charts we

generate from large bodies of data according to strict, pre-defined procedures for

knowledge representation, and which often do enchant us with their ability to reveal

hidden patterns and augment our understanding of encoded material. They are, however,

fundamentally static and (as they depend on structured data and defined constraints)

predictable, and we are hard-pressed to argue that they instantiate any truly new

perspective on the data they reflect. Why, given the fresh possibilities for graphesis the

computer affords, should we be content with an after-the-fact analysis of algorithmically-

produced representations alone? Temporal Modelling suggests a new ordering of

aesthetic provocation, algorithmic process, and hermeneutic understanding in the work of

digital humanities, a methodological reversal which makes visualization a procedure

rather than a product and integrates interpretation into digitization in a concrete way.

How concrete? The Temporal Modelling tools incorporate an intuitive kind of

sketching -- within a loosely constrained but highly defined visual environment -- into the

earliest phases of content modelling, thereby letting visualization drive the intellectual

work of data organization and interpretation in the context of temporal relations.

Aesthetic provocation becomes dynamic, part of a complex dialogue in which the user is

required to respond to visualizations in kind. Response in kind, that is, in the visual

language of the Temporal Modelling toolset, opens up new ways of thinking about digital

objects, about the relation of image to information, and about the subjective position of



any interpreter within a seemingly logical or analytic system. Our chief innovation is the

translation of user gestures and image-orderings that arise from this iterative dialogue

into an accurate and expressive XML schema, which can be exported to other systems,

transformed using XSLT, and even employed as a document type definition (DTD) in

conventional data-markup practices. The sketching or composition environment in which

this rich data capture takes place (the Temporal Modelling PlaySpace) is closely wedded

to a sister-environment, the DisplaySpace. There, we provide a set of filters and

interactive tools for the manipulation and display of more familiar, algorithmically-

generated visualizations, derivative from PlaySpace schemata or the already-encoded

data structures of established humanities computing projects. Like the PlaySpace,

though, the Temporal Modelling DisplaySpace emphasizes the flux and subjectivity

common to both our human perception of time and our facility for interpretation in the

humanities. We have not rejected display in favor of the playful engagement our

composition environment fosters; instead, we hope to show that a new, procedural

understanding of graphic knowledge enhances and even transfigures visualization in the

older modes.

Our work in building the PlaySpace, with which we began the project in the

Summer of 2001 and which now nears completion, has required a constant articulation of

its distinction from the DisplaySpace -- the implementation of which forms the next

phase of Temporal Modelling. What quality of appearance or use distinguishes a display

tool from an editing tool? At their heart, the mechanisms and processes of the PlaySpace

are bound up in: the positioning of temporal objects (such as events, intervals, and points

in time) on the axis of a timeline; the labelling of those objects using text, color, size, and


quality; the relation of objects to specific temporal granularities (the standards by which

we mark hours, seasons, aeons); and, in complex interaction, the relation of objects to

each other. Each of these interpretive actions -- the specification of objects and

orderings, their explication and interrelation -- additionally involves a practice we

designate inflection. Inflection is the graphic manifestation of subjective and interpretive

positioning toward a temporal object or (in a sometimes startling display of warping and

adjustment) to a region of time. This positioning can be on the part of the prime

interpreter, the user of the PlaySpace, or inflections can be employed to represent and

theorize external subjectivities: the inferred interpretive standpoint of a character in a

work of fiction, for instance, or of an historical figure, movement, or Zeitgeist. The

energies of the PlaySpace are all bound up in enabling understanding through iterative

visual construction in an editing environment that implies infinite visual breadth and

depth. In contrast, the DisplaySpace channels energy into iterative visual reflection by

providing a responsive, richly-layered surface in which subjectivity and inflection in

temporal relations are not fashioned but may be reconfigured.
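To make this vocabulary concrete, the primitives might be rendered in code roughly as follows. The type and field names are ours, invented for illustration; the PlaySpace's internal representation differs:

```python
# Hypothetical data structures for the primitives described above;
# invented names, not the PlaySpace's actual types.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Inflection:
    """Graphic mark of a subjective position toward an object or region."""
    kind: str                  # e.g. "anticipation", "regret", "foreshadowing"
    viewpoint: str = "user"    # prime interpreter, or an external subjectivity

@dataclass
class TemporalObject:
    """An event, interval, or point positioned on a timeline axis."""
    kind: str                            # "event" | "interval" | "point"
    start: float
    end: Optional[float] = None          # intervals have extent; points do not
    label: str = ""
    granularity: str = "years"           # hours, seasons, aeons...
    inflections: list = field(default_factory=list)

treaty = TemporalObject("event", start=1919, label="Versailles")
treaty.inflections.append(Inflection("regret", viewpoint="historical figure"))
```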

I want to focus here on some specific qualities and tools of Temporal Modelling,

especially as they relate to the embeddedness of subjectivity, uncertainty, and

interpretation in every act of representation, which we take as a special province of

speculative computing. Our very process of design self-consciously embodies this

orientation toward information and software engineering. We make every effort to work

from imagery as much as from ontology, coupling our research efforts in the philosophy

and data-driven classification of temporal relations with the intuitive and experimental

work of artists of whom we asked questions such as: "What does a slow day look like?"


or "How might you paint anticipation or regret?" As our underlying architecture became

more stable and we began to assemble a preliminary notation system for temporal objects

and inflections, we made a practice of asking of each sketch we floated, "What does this

imply?" and "What relationships might it express?" No visual impulse was dismissed out

of hand; instead, we retained each evocative image, frequently finding use for it later,

when our iterative process of development had revealed more about its implications in

context.

In this way, the necessity of a special feature of the Temporal Modelling Project

was impressed on us: a capacity for expansion and adjustment. The objects, actions, and

relations defined by our schemata and programming are not married inextricably to

certain graphics and on-screen animations or display modes. Just as we have provided

tools for captioning and coloring (and the ability to regularize custom-made systems with

legends and labels), we have also made possible the upload and substitution of user-made scalable vector graphics (SVG) for the generic notation systems we've devised. This is

more than mere window-dressing. Our intense methodological emphasis on the

importance of visual understanding allows the substitution of a single set of graphics

(representing inflections for, say, mood or foreshadowing) to alter radically the

statements made possible by Temporal Modelling's loose grammar. Users are invited to

intervene in the interpretive processes enabled by our tool almost at its root level.
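A registry along the following lines suggests how such substitution might work. The API is hypothetical, but the principle -- user graphics standing in for the generic notation, with fallbacks -- is the one described above:

```python
# A minimal sketch of user-substituted notation; hypothetical API.
DEFAULT_GLYPHS = {"mood": "glyphs/mood.svg", "foreshadowing": "glyphs/fore.svg"}

class NotationRegistry:
    def __init__(self):
        self.custom = {}

    def upload(self, inflection, svg_path):
        """Substitute a user-made SVG for the generic built-in glyph."""
        self.custom[inflection] = svg_path

    def glyph(self, inflection):
        return self.custom.get(inflection, DEFAULT_GLYPHS.get(inflection))

registry = NotationRegistry()
registry.upload("mood", "uploads/storm_cloud.svg")
print(registry.glyph("mood"))           # the user's graphic
print(registry.glyph("foreshadowing"))  # the generic fallback
```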

A similar sensibility governs the output of a session in the PlaySpace

environment. PlaySpace visualizations consist of objects and inflections in relation to

each other and (optionally) to the metric of one or more temporal axes. The editing

process involves the placement and manipulation of these graphics on a series of user-


generated, transparent layers, which enable groupings and operations on groups (such as

zooms, granularity adjustments, panning and positioning, simple overlays, and changes in

intensity or alpha-value) meant to enhance experimentation and iterative information

design inside the visual field. When the user is satisfied that a particular on-screen

configuration represents an understanding of his data worth preserving, he may elect to

save his work as a model. This means that the PlaySpace will remember both the

positioning of graphic notations on-screen and the underlying data model (in the form of

an XML schema) that these positions express. This data model can then be exported and

used elsewhere or even edited outside the PlaySpace and uploaded again for visual

application. Most interesting is the way in which transparent editing layers function in

the definition of PlaySpace models. The process of saving a model requires that the user

identify those layers belonging to a particular, nameable interpretation of his material.

This means that a single PlaySpace session (which can support the creation of as many

layers as hardware limitations make feasible) might embed dozens of different

interpretive models: some of which are radical departures from a norm; some of which

differ from each other by a small, yet significant, margin; and some of which are old

friends, imported into the PlaySpace from past sessions, from collections of instructional

models representing conventional understandings of history or fiction, or from the efforts

of colleagues working in collaboration on research problems in time and temporal

relations. A model is an interpretive expression of a particular dataset. More

importantly, it is what the interpreter says it is at any given point in time. We find the

flexibility inherent in this mode of operation akin to the intuitive and analytical work of

the traditional humanities at its best.
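The layer-to-model relation might be sketched as follows. Names and structure are hypothetical (the actual PlaySpace serializes models to its XML schema), but the logic -- a model is whichever named subset of layers the interpreter says it is -- follows the description above:

```python
# A sketch of saving models from layers; hypothetical structure.
class Session:
    def __init__(self):
        self.layers = {}   # layer name -> list of placed objects
        self.models = {}   # model name -> the layers that constitute it

    def add(self, layer, obj):
        self.layers.setdefault(layer, []).append(obj)

    def save_model(self, name, layer_names):
        """Preserve a nameable interpretation: a chosen subset of layers."""
        self.models[name] = {ln: list(self.layers[ln]) for ln in layer_names}

session = Session()
session.add("base chronology", {"kind": "event", "position": 1914})
session.add("Zeitgeist reading", {"kind": "inflection", "mood": "dread"})
# One session, two co-existing interpretations of the same material:
session.save_model("conventional", ["base chronology"])
session.save_model("speculative", ["base chronology", "Zeitgeist reading"])
```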


Our policies of welcoming (and anticipating) the upload of user-designed graphic

notation and of enforcing the formalization of interpretive models in the loosest terms

possible are examples of Temporal Modelling's encouragement of hermeneutic practice

in computational contexts. In some sense, this practice is still external to the visual

environment we have built, even as it forms an integral part of the methodology

Temporal Modelling is designed to reinforce. I wish to close here with a description of a

new and exciting tool for encoding interpretation and subjectivity within the designed

Temporal Modelling environment: a mechanism we call the nowslider.

Nowsliding is a neologism for a practice all of us do constantly -- on which, in

fact, our understanding of ourselves and our lives depends. Nowsliding is the subjective

positioning of the self along a temporal axis and in relation to the points, events,

intervals, and inflections through which we classify experience and make time

meaningful. You nowslide when you picture your world at the present moment and,

some ticks of the clock later, again at another ever-evolving present. You nowslide, too,

when you imagine and project the future or interpret and recall the past. Our toolset

allows a graphic literalization of this subjective positioning and temporal imagining, in

the shape of configurable, evolving timelines whose content and form at any given

"moment" are dependent on the position of a sliding icon, representative of the subjective

viewpoint. Multiple independent or interdependent points of view are possible within the

context of a single set of data, and the visual quality of nowsliding may be specified in

the construction of a particular model.

At present, two display modes for the nowslider are in development. The first is a

catastrophic mode, in which new axial iterations (or imagined past- and future-lines)


spring in a tree structure from well-defined instances on a primary temporal axis. In this

way, PlaySpace users can express the human tendency to re-evaluate the past or make

predictions about the future in the face of sudden, perspective-altering events. New

subjective positions on the primary axis of time (and new happenings) can provoke more

iterations, which do not supplant past imaginings or interpretations, but rather co-exist

with them, attached as they are to a different temporal locus. In this way, timelines are

made to bristle with possibility, while still preserving a distinct chronology and single

path. Our nowsliders also function in a continuous mode -- distinct from catastrophism --

in which past and future iterations fade in and out, change in position or quality, appear

or disappear, all within the primary axis of the subjective viewpoint. No new lines are

spawned; instead, this mode presents time as a continuum of interpretation, in which past

and present are in constant flux and their shape and very content are dependent on the

interpretive pressure of the now.
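The two modes might be caricatured in code as follows -- a deliberately simplified sketch, with invented function names, of logic the tool renders as animated timelines:

```python
# A much-simplified sketch of the two nowslider modes; invented names.
def catastrophic(branch_points, now):
    """Tree mode: every imagined past- or future-line spawned at or before
    'now' co-exists; none supplants the others."""
    return [line for moment, line in branch_points if moment <= now]

def continuous(events, now, horizon=10):
    """Continuum mode: no new lines are spawned; each event's visual weight
    depends on its distance from the sliding 'now'."""
    return {name: max(0.0, 1 - abs(moment - now) / horizon)
            for name, moment in events}

branches = [(1914, "prewar optimism"), (1918, "postwar re-evaluation")]
print(catastrophic(branches, now=1916))  # -> ['prewar optimism']
print(continuous([("armistice", 1918), ("crash", 1929)], now=1920))
```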

Our taking of temporal subjectivity and the shaping force of interpretation as the

content and over-arching theme of the PlaySpace and DisplaySpace environments we

have built is meant to reinforce the goal of the Temporal Modelling tools, and by

extension, of speculative computing. Our goal is to place the hermeneut inside a visual

and algorithmic system, where his or her very presence alters an otherwise mechanistic

process at the quantum level. Humanists are already skilled at the abstract classification

and encoding that data modelling requires. We understand algorithmic work and can

appreciate the transformative and revelatory power of visual and structural deformance.

We at least think we know what to do with a picture or a graph. What we haven't yet

tried in a rigorous and systematic way is the injection of the subjective positioning any


act of interpretation both requires and embodies into a computational, self-consciously

visual environment. If speculative computing has a contribution to make to the methods

and outcomes of digital humanities, this is it.

Johanna Drucker

Bethany Nowviskie

See [PATACRITICISM]; see also [HOW COMPUTERS WORK].

References for Further Reading

Amelunxen, H. V., S. Iglhaut, and F. Rötzer (Eds.). (1996). Photography After

Photography. Munich: G&B Arts.

Beardsley, M. (1966). Aesthetics. Tuscaloosa: University of Alabama Press.

Bense, M. (1971). The projects of generative aesthetics. In J. Reichardt (Ed.), Cybernetics, Art and Ideas. NY: New York Graphic Society.

Bök, C. (2002). 'Pataphysics: The Poetics of an Imaginary Science. Evanston, IL:

Northwestern University Press.

Bürger, P. (1998). Mallarmé. In M. Kelly (Ed.), Encyclopedia of Aesthetics (pp. 177-178). Oxford, UK, and NY: Oxford University Press.


Dunn, D. (Ed.). (1992). Pioneers of Electronic Art. Santa Fe: Ars Electronica and The

Vasulkas.

Engelbart, D. (1963). A conceptual framework for the augmentation of man's intellect. In P. Howerton and D. Weeks (Eds.), Vistas in Information Handling, Vol. 1: The Augmentation of Man's Intellect by Machine. Washington, DC: Spartan Books.

Franke, H. and H. S. Helbig. (1993). Generative Mathematics: Mathematically described

and calculated visual art. In M. Emmer (Ed.), The Visual Mind. Cambridge: MIT Press.

Glazier, L. P. (2002). Digital Poetics: The Making of E-Poetries. Tuscaloosa, AL and

London, UK: University of Alabama Press.

Hockey, S. (2000). Electronic Texts in the Humanities. Oxford, UK and New York:

Oxford University Press.

Holtzman, S. (1994). Digital Mantras. Cambridge: MIT Press.

Jarry, A. (1996). Exploits and Opinions of Dr. Faustroll, Pataphysician. Boston: Exact

Change.


Lister, M. (Ed.). (1995). The Photographic Image in Digital Culture. London, UK and

NY: Routledge.

Marr, D. (1982). Vision. NY: W. H. Freeman and Co.

McGann, J. J. (2001). Radiant Textuality. Hampshire, UK, and New York: Palgrave.

Prueitt, M. (1984). Art and the Computer. NY: McGraw-Hill.

Renear, A. (1997). Out of Praxis: Three (Meta) Theories of Textuality. In K. Sutherland (Ed.), Electronic Text. Oxford, UK: Clarendon Press.

Ritchin, F. (1990). In Our Own Image. NY: Aperture.

Shanken, E. (n.d.). The House that Jack Built. http://www.duke.edu/~giftwrap/

http://mitpress.mit.edu/e-journals/LEA/ARTICLES/jack.html.

Sutherland, K. (Ed.). (1997). Electronic Text. Oxford, UK: Clarendon Press.

Zhang, Y., L. Rauchwerger, and J. Torrellas. (1999). Hardware for speculative reduction parallelization and optimization in DSM multiprocessors. Proceedings of the 1st Workshop on Parallel Computing for Irregular Applications, 26. http://citeseer.nj.nec.com/article/zhang99hardware.html


Temporal Modelling is freely available at http://www.speculativecomputing.org, and is

delivered to the Web using XML-enabled Macromedia Flash MX and Zope, an object-

oriented open-source application server.
