Semiotic Methods for Enterprise Design and IT Applications

R. Stamper†, K. Liu‡, L. Sun‡, S. Tan‡,

H. Shah†, B. Sharp† and D. Dong†

†Faculty of Computing, Engineering and Technology Staffordshire University, United Kingdom

[email protected]; {h.shah; B.Sharp; d.dong}@staffs.ac.uk

‡ Informatics Research Centre The University of Reading, United Kingdom

{k.liu; lily.sun; b.k.s.tan}@reading.ac.uk, www.irc.rdg.ac.uk

Abstract. Investment in information technology (IT) does not necessarily bring improvement in business performance. One important cause of this is the divorce between the design of IT systems and that of business organisations. An approach based on organisational semiotics, which understands organisations as systems of social norms, emphasises the central role of people, their responsibilities and the organisation in the analysis and design of IT applications. Business organisations are analysed as systems of meanings, intentional communication and interpretation that create knowledge, which sooner or later leads to action. Effective methods for organisational analysis and systems design (MEASUR) have been developed on the basis of this theory. However, to make these methods ready for practical, industrial use, they need further work, especially the creation of supporting aids and tools. The recently established research project, SEDITA, addresses these issues and invites collaboration from both business and research institutions.

Keywords: Organisation, semiotics, information systems, requirements engineering, systems analysis and design, MEASUR, methodology, semantics, ontology, pragmatics, knowledge.

“The method of science is the method of bold conjectures and ingenious and severe attempts to refute them” – Sir Karl Popper, Objective Knowledge, p. 81

1 Introduction: The Issue of Scientific Method

Organisational Semiotics has the potential to confer huge economic advantages upon those who adopt its approach because it places improved organisational effectiveness before the efficient application of IT. Such a provocative statement invites people to challenge it: Can any methods based on theories of organisational semiotics ensure such benefits? Do the underlying theories stand up to critical, empirical testing? This paper elaborates on that implied invitation in relation to a set of analytical methods called ‘MEASUR’ and their underlying theory of information fields. But the choice of scientific method applies broadly to the whole organisational semiotics field.

Perhaps the most exacting method of conducting a scientific programme was proposed by Popper. At the time when this research1 began in the early 1970s at the London School of Economics, he was the professor of scientific method. Looking for a demarcation line between science and pseudo-science, Popper proposed that genuine scientific theories must be formulated clearly and precisely enough to risk being falsified by empirical observation, unlike pseudo-scientific theories, which are immune to falsification because their meanings are vague and adjustable to fit more or less any observations (Popper cites astrology and psychoanalysis as examples). Moreover, he advocated the formulation of ‘bold hypotheses’ that correct the earlier theories while incorporating them as approximations (the advances in physics illustrate this well). Ad hoc explanations lack generality and coherence, but bold hypotheses should exhibit generality, aim to explain a very wide range of facts and, ideally, present a novel view on the problem domain.

Of all the disciplines in the information systems area, Organisational Semiotics has the best chance of living up to the severe standard that Popper prescribed for empirical theories. The formal sciences of computing live or die by their own criteria of internal consistency; because they do not deal with any world outside themselves, they belong in quite a different category from the empirical sciences. The difficulties of using refutationism in the social sciences are acknowledged [6, 19, 5], and certainly apply to the studies of the organisational and social impacts of IT. Perhaps the same objections may also apply to our field. Nevertheless, we believe that Popper’s stringent epistemological criteria can help us develop the body of knowledge we are attempting to establish. We offer our particular approach to Organisational Semiotics as a test case, if only to provoke a debate about the scientific method our new discipline should adopt.

2 A Lengthy Research Programme

The seeds of this lengthy research programme germinated in a study of the meaning of the vague term ‘information’ [43]. The language of any theory must make reliable contact with the empirical world through well-defined operations that people can perform. ‘Information’, as that word is normally used even among the professionals, combines so many muddled meanings that it is useless for building theories unless its meaning is narrowly constrained. Unlike ‘information’, the term ‘sign’ does have a clear operational definition, and signs have numerous, very precisely definable properties that capture all the varied meanings of ‘information’ that the literature commonly jumbles together. The next step was to put the sharpened understanding of information as properties of signs to work understanding organisational information systems better. This programme to investigate organisations regarded as information systems began in 1971, when the study of information was completed. Currently it has funding2 to prepare some of its results for mainstream use in IT applications and business design, and, in doing so, to test their practical effectiveness. We are focusing on MEASUR, a set of new methods and tools for systems analysis, design and specification. This project is called Semiotic Enterprise Design for IT Applications (SEDITA). As a way of testing the theories underlying MEASUR, we wish to encourage our scientific colleagues to devise “ingenious and severe attempts to refute them”. Of course, this short paper can only provide an overview of the research programme, indicate where to find the results, and draw attention to a few key issues.

The motivation for a study of organisations regarded as information systems arose from encounters with many IT system failures that would almost certainly grow as the power of IT increased. Stamper formed a team with the aim of specifying organised behaviour with the greatest possible, appropriate rigour. They adopted the hypothesis that all organised behaviour tends to conform to social norms and began to develop a series of LEGally Oriented Languages (Legol) for representing norms, using legislation as empirical material. This research, though apparently similar to contemporary AI work on legal norms [42], gradually diverged radically from it over the issues of meanings, intentions, responsibilities and time. Working with law as empirical material forced the research into contentious philosophical areas.

1 Supported by UK Research Councils, EPSRC and ESRC; and the companies IBM and Digital.
The greatest difficulties we encountered in the research came from the metaphysical assumptions we had brought with us; the same ones lying tacit behind the AI tools our colleagues were using. The most important advances were made by discarding them – never without a struggle – in favour of disconcerting new ones. For example, in the social domain, logical inference has only limited value: unlike logical premises, social norms are not absolute; they only apply over limited ranges of relevance. Relevance, of course, depends upon human judgement and, in courts of law, that of the judge. Legal reasoning has more to do with the responsible choice of relevant norms and the adjustment of meaning to achieve the intentions of the law than – as in mathematics and logic – with the application of mechanical procedures to deduce some truths already implicit in the premises. Consequently, we needed a new theory where responsibility replaced truth as the fundamental notion; truth would then arise when responsible people agreed to adopt certain propositions as a sound basis for action in a limited domain. That considerable philosophical shift inaugurated others. Another crisis arose because we assumed initially that norms deal with an objective reality, although, working with legal norms, it could not be denied that the norms themselves also give rise to part of the reality we inhabit (most obviously in the case of the social reality). This halfway house espoused by Searle [41] brought complications that, eventually, we felt able to discard with the help of Gibson [16], who shows that human and other organisms also have to construct their perceptions of their physical reality.

The starting point in 1971 was a theory treating all organised behaviour as governed by the norms that people share. To make this theory open to refutation, we cast it as a formalism and claimed that it could represent any given set of social norms. Each such Legol formalism was pitted against samples of legal norms, quickly refuted, and replaced by the next in a series of gradually improving hypotheses that faced progressively more complex legislation. Each successful refutation (or failed theory) exposed some underlying and unnoticed structures of norms, thus prompting refinements to the theory. Making each hypothesis precise, even at the risk of being wrong, was essential to the process of discovery. Working on dozens of concrete problems drawn from diverse areas of legislation and actual organisations not only refined the theory but also led to the discovery and gradual refinement of methods for analysing complex systems of norms. These MEASUR methods have continued to evolve over the last 30 or more years [40, 49, 25, 46, 54, 58, 31]. Tribute is due to the many researchers who have contributed to MEASUR over this period; Ades deserves special mention, having applied the methods to a large, computer-based system with conspicuous success [1, 64].

2 At the Universities of Reading and Staffordshire, funded by the British Engineering and Physical Sciences Research Council, EPSRC, for the three years from mid-2002.
Numerous practical applications of MEASUR have so far addressed the difficult fragments of larger problems; many have dealt with analysis and design, and a few have gone as far as the implementation of computer-based solutions. Nevertheless, all the essential techniques have been tested individually on real problems, and the work continues in the refutationist spirit. In summary, then, we would probably not have reached the present position without attempting to adhere to the refutationist method of enquiry. Shifting from a technical focus on programming machines to the ‘programming’ of people could easily have led to a ‘soft’ sociological picture of norms and their relationships with information requirements. Instead, we have produced sharp, rigorous tools that, on first acquaintance, seem difficult to use, with the constant temptation to blunt them in the name of practicality. Continuing to be bold – even challenging – we contend that the theory and methods can be applied to many aspects of organisational semiotics on which our colleagues are working.

3 The Theory Underlying MEASUR

For details, the reader must go to the papers cited, but let us select those key hypotheses that invite ingenious and severe attempts to refute them. In that spirit, this section examines the underlying theory. Initially:

• The study of organisations and organised behaviour can be translated into the study of social norms.

Where counter examples appear to refute this statement, they should contribute to clarifying and probably qualifying the meanings of its terms. The ‘translation’ process may require several layers of analysis in the case of the more subtle aspects of organisation. If the ingenious attempts at refutation result in a better focus for the hypothesis, then it should be adopted for subsequent use. We then take the step of asserting that

• It is sufficient, for purposes of empirical observation, to examine the norms that have been expertly formulated explicitly as legal norms or as scientific ‘laws’.

But this does not lend itself readily to refutation against empirical observation; it amounts to an article of faith that such materials are sufficient and that any other norms that govern the conduct of people informally, if brought to the surface and formulated in words and numbers, would display no additional logical properties. Like an iceberg, the totality of social norms exposes only its formulated tip above the surface; we then work on the assumption that studying that empirically accessible material will meet our needs. Note: the written representation of a norm is not the norm itself; that only exists when understood by a person who is subject to the norm. The subject may need a written form to help remember and understand some complex norms.

• To have any practical value, the meaning of a written norm has to be determined by (chains of) operations relating the language in which it is expressed to appropriate aspects of the behaviour of people.

In the case of scientific laws, these operations are embedded in relevant experimental and observational procedures evolved within the expert community, and, without too much difficulty, we can locate and consult the right experts. In the case of norms of a legal character, rules of evidence guide us along the chain through observations and reports to establish facts, while definitions of responsibility and authority lead us to the agents who exercise any relevant value judgements. Where the ordinary meanings of terms come into question, then we can draw upon the judgements of a sample of ordinary people, acting as a kind of jury. This statement does not seem to be one capable of refutation so much as a guiding principle for the correct expression of a norm: if we cannot find the right people who know the necessary procedures, we don’t have clear operational meanings, so we’d better elaborate the norm until we have.

• All norms have a common structure:

if condition then norm subject adopts attitude towards something

where the subject understands the condition as the state of affairs; the attitude may govern behaviour, belief, evaluation or perception; and, depending upon which, the something may be an action, a state of affairs or anything that could be valued or perceived.

This not unfamiliar hypothesis exhibits boldness in its combination of generality and strict definition of form. It has the advantage of leading to unambiguous classifications of norms derived from the strict classifications possible for conditions, attitudes and things.

• An attitude constitutes the special case of a norm without a conditional part. This observation reveals the broad scope of the norm concept and helps to unify what might otherwise need separate sub-theories for norms and attitudes.

• Definition: knowledge = norms. Does this introduce clarity into a subject that too often is handled in a fuzzy manner, or not? We can view knowledge/norms from two sides. Information about the driving condition leads the norm subject to act upon or communicate the resulting attitude. On the other side, the information a person receives interacts with a norm to generate an attitude, which amounts to a specific item of knowledge; but some information interacts with the norms that govern learning and this can lead to the creation of new norms, which amount to knowledge of a universal character.
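As an informal illustration of this common structure and of the reading ‘knowledge = norms’, the conditional norm and the attitude it generates might be sketched in code. Everything here – the language, the class names, the librarian example – is our own illustrative choice, not part of MEASUR or its tooling.

```python
from dataclasses import dataclass
from typing import Any, Callable, Optional

# A minimal sketch of the hypothesised norm structure:
#   if <condition> then <norm subject> adopts <attitude> towards <something>

@dataclass(frozen=True)
class Attitude:
    subject: str   # the norm subject holding the attitude
    kind: str      # governs behaviour, belief, evaluation or perception
    towards: Any   # an action, a state of affairs, or anything valued/perceived

@dataclass(frozen=True)
class Norm:
    subject: str
    condition: Callable[[dict], bool]  # evaluated against incoming information
    attitude_kind: str
    towards: Any

    def apply(self, information: dict) -> Optional[Attitude]:
        # Information interacting with a norm generates a specific attitude,
        # i.e. a specific item of knowledge; if the condition is not met,
        # no attitude arises.
        if self.condition(information):
            return Attitude(self.subject, self.attitude_kind, self.towards)
        return None

# Hypothetical example: a librarian is obliged to recall a long-overdue book.
recall_norm = Norm(
    subject="librarian",
    condition=lambda info: info.get("days_overdue", 0) > 14,
    attitude_kind="behaviour",
    towards="recall the book",
)

assert recall_norm.apply({"days_overdue": 20}) is not None
assert recall_norm.apply({"days_overdue": 3}) is None
```

The sketch deliberately stops short of the second mechanism mentioned above, by which information interacting with learning norms creates new norms.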

4 The Concept of an Information Field

• Definition: A group of people who share a set of norms that enable them to collaborate for some purpose constitute an information field, where the norms serve to determine the information the subjects need in order to apply them.

Information plays a secondary role to the norms. Information produces no relevant physical effects (ignore the high tenor who can shatter a wineglass) but derives its value from its social effects. It changes people’s attitudes: their beliefs, their values, their perceptions or their sense of obligation. Attitudes dispose one towards action but do not compel it. People change their attitudes with the aid of norms that say what attitude to adopt in various situations. We define organisational knowledge as norms plus attitudes; in the information field, through cycles of autopoiesis, they continually reproduce and correct themselves while also performing their productive role. Information drives the norms; the norms determine what information is needed; and, sooner or later, some attitudes produced by the norms generate occasional actions. So:

• The norm structure of an information field defines a pattern of organisation more fundamentally and with far greater stability than any pattern of information flow.

• All organisms, including human agents, construct their perceptions of the only world they can know through their actions; they have to discover (or be taught, or inherit by instinct) what invariant repertoires of behaviour the world affords them (= the affordances); then they populate their reality with those affordances that help them to survive.

This is the metaphysical assumption of ‘actualism’ we have adopted to replace the more common belief in an objective reality. We began with the same mixed position as Searle – objectivists concerning physical reality, who admit that we construct our social world – until encountering James Gibson’s Theory of Affordances [16, 17]3. Gibson, by showing how we construct our perceptions of the physical world, helps us to unify the treatment of the physical and the social worlds through the assumption we call actualism. This extends the theory of affordances into the social sphere, where the norms play the part of affordances. To escape from some of the arbitrary distinctions adopted in the information systems community – such as entity / attribute / relationship – we have adopted the term ‘affordance’ for every thing, physical or social, that we perceive.

Actualism then led to a most fruitful set of hypotheses. The actualist ontology says that the only reality we can possibly know involves the actions of a knowing agent. These actions reveal the patterns of behaviour they ‘afford’ the agent, who selects the invariant and useful patterns (that Gibson called ‘affordances’) as the things the organism perceives. (Anyone feeling uncomfortable at adopting this position should note that actualism does not prevent one, for all practical purposes, remaining an objectivist; the only difference is that one takes responsibility for holding that position, which one should be prepared to justify in concrete situations.) This analysis suggests a formalism (Norma) with the basic sentence structure representing the agent knowing something directly:

Agent affordance (1)

Notice the radical difference between this well-formed formula and the formulas of predicate logic. The logic tacitly assumes that the world consists of independently existing, identifiable individuals. The relationship between the signs in the logical language and reality depends entirely upon the mapping from their identifiers (names) to the individuals (the real things, in an objectivist sense). This allows predicate and relation names (such as person & owns) to be defined precisely as sets (PERSON & OWNS) with a membership of individuals or ordered pairs of individuals that can be listed – the extensional definition – so the two logical statements

person (x) and owns (x, y) mean respectively x ε PERSON and (x, y) ε OWNS (2)

and the statements are true for those values of x and y, ranging over everything, past, present and future, when the individuals belong to the defining sets. In this reality, properties and relationships do not exist independently; they are constructed as named sets of listed [tuples of] individuals. A quite different ontological assumption recognises a reality in which ‘person’ and ‘owns’, for example, have – in a suitable sense – existences independent of particular individuals; this calls for an ability to provide intensional definitions of ‘person’ and ‘owns’. The notion of affordances supplies one suitable sense for the operational, intensional definition of properties and relationships.

3 On the theory of affordances or the school of ‘ecological perception’, see also Michaels and Carello (1981).
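The extensional reading of formula (2), and its contrast with an intensional definition, can be made concrete in a small sketch; the sets, rules and names below are invented purely for illustration.

```python
# Extensional definitions, as in formula (2): a predicate name is just a
# named set of individuals (or ordered pairs), and truth is set membership.
PERSON = {"x1", "x2"}                       # the listed individuals
OWNS = {("x1", "car"), ("x2", "bicycle")}   # the listed ordered pairs

def person(x):       # person(x)  holds iff  x is in PERSON
    return x in PERSON

def owns(x, y):      # owns(x, y) holds iff (x, y) is in OWNS
    return (x, y) in OWNS

assert person("x1") and owns("x2", "bicycle")
assert not owns("x1", "bicycle")

# An intensional definition instead gives a rule that does not enumerate
# individuals -- closer to the operational sense the theory attaches to
# affordances. The component tests here are placeholders, not a real
# analysis of ownership.
def owns_intensional(x, y, may_use, may_exclude_others_from):
    return may_use(x, y) and may_exclude_others_from(x, y)

assert owns_intensional("x1", "car",
                        lambda a, b: True, lambda a, b: True)
```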

Formula (1) makes no statement but serves only as a surrogate for some agent realising an affordance, i.e. having available that invariant repertoire of behaviour. Another agent could use a sign “Agent affordance” to make a statement, as shown below. Notice that, whereas predicate logic makes sentences that are true or false, with negated sentences having the opposite truth values, Norma formulas are neither true nor false because they stand for the existence of patterns of behaviour that cannot be negated. An agent realising an affordance is also an agent, but a modified one: thus a person wielding an axe is still a person, but a modified and more dangerous agent. This applies recursively:

((Agent affordance) affordance) affordance . . . (3)

Also, an agent realising two affordances simultaneously may find that the world affords her something extra:

Agent (affordance while affordance) affordance (4)

For example:

Jane (paper while pencil) draw (5)

The evolution of this formalism has some way to go but the guiding principle is clear: it expresses the metaphysical principle of actualism. Note:

• Definition: when an agent has available an affordance or invariant repertoire of behaviour, we say that it is realised.
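The recursion of formulas (3)–(5) can be sketched as nested data. This is a toy of our own making, not part of the Norma formalism, and it approximates the ‘while’ of formula (4) by successive realisation rather than genuine simultaneity.

```python
from dataclasses import dataclass
from typing import Union

# Toy sketch of formula (3): an agent realising an affordance is itself an
# agent, only modified, so realisation nests recursively.

@dataclass(frozen=True)
class Agent:
    name: str

@dataclass(frozen=True)
class Realising:
    agent: Union[Agent, "Realising"]  # the recursion of formula (3)
    affordance: str

def underlying_agent(a):
    # Unwind the modifications to recover the unmodified agent.
    while isinstance(a, Realising):
        a = a.agent
    return a

# Formula (5): Jane (paper while pencil) draw -- Jane with paper and pencil
# available is afforded drawing.
jane = Agent("Jane")
drawing = Realising(Realising(Realising(jane, "paper"), "pencil"), "draw")

assert underlying_agent(drawing) == jane
assert drawing.affordance == "draw"
```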

We shall state the set of hypotheses and illustrate them lest they seem too abstract.

• Except for one – the root affordance – each realised affordance only exists given the coexistence of others. (Definition: this is the notion of ontological dependency.)

The discovery of the new logical relationship of ontological dependency resulted, as the discussion above attempted to indicate, from the key breakthrough in the early 1980s with the rejection of objectivism reported in Stamper [53]. An important refinement of ontological dependency came with the application of the constraint:

• No affordance can be directly dependent ontologically on more than two other affordances.

The severe limitation to a maximum of two antecedents is justified for a variety of reasons too complex to present here. Suffice it to say that the constraint has always worked and, in doing so, it has always forced to the surface aspects of the semantics that laxer rules would have left invisible. The formulas (3) and (4) combined, plus the constraint on antecedents, provide the basic structures for an ontology chart with the vital canonical property discussed below. It follows from the hypothesis that the only reality we know depends upon some individual or group discovering, through their actions, significant affordances. This accumulated knowledge depends upon the responsibility of the agent making the perception, and upon the responsibility of anyone accepting it when communicated to them. Thus the formalism should associate the responsible agent with every meaningful expression. In particular, every affordance starts or finishes its existence under some authority.
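A schema respecting the two-antecedent constraint can be checked mechanically. The following is an illustrative sketch of our own, not MEASUR tooling: a schema is given simply as a mapping from each affordance to its direct ontological antecedents.

```python
# Check the constraint that no affordance may depend ontologically on more
# than two other affordances; return the offending entries, if any.
def antecedent_violations(schema):
    return {a: ants for a, ants in schema.items() if len(ants) > 2}

schema = {
    "person": ["Society"],
    "university": ["nation"],
    "enrolment": ["person", "university"],  # two antecedents: permitted
    "suspension": ["enrolment"],
}
assert antecedent_violations(schema) == {}

# Adding a third antecedent to 'enrolment' would violate the constraint:
bad = dict(schema, enrolment=["person", "university", "faculty"])
assert "enrolment" in antecedent_violations(bad)
```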

• The existence of every affordance is bounded by a start and a finish, each of which is governed by a norm (simple or complex) that places the responsibility for these events upon one or more agents.

Even when we cannot identify the exact individual with whom responsibility lies, we can often identify the precise group agent (e.g. firm) or role (e.g. finance director) or provide an approximation. Norms distribute responsibility over several agents (as a copyright starts with the publication of a literary work, so the publisher, the author and the law-makers share responsibility). Knowing where start and finish responsibilities lie tells one where to find the relevant information sources.
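The bounded existence of an affordance, with responsibility attached to its start and finish events, might be sketched as follows; the class names and the copyright example’s details are our own illustrations.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Sketch of the hypothesis that every affordance's existence is bounded by
# start and finish events, each placed under the responsibility of one or
# more agents (individuals, roles such as 'finance director', or group
# agents such as a firm).

@dataclass(frozen=True)
class Event:
    when: str                       # e.g. a date taken from a record
    responsible: Tuple[str, ...]    # the agents sharing responsibility

@dataclass
class Affordance:
    name: str
    start: Event
    finish: Optional[Event] = None  # None: still realised here-and-now

    def exists_now(self) -> bool:
        return self.finish is None

# A copyright starts with publication of the literary work; publisher,
# author and law-makers share responsibility for the start event.
copyright_ = Affordance(
    "copyright",
    start=Event("1998-03-01", ("publisher", "author", "law-makers")),
)
assert copyright_.exists_now()
assert "author" in copyright_.start.responsible
```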

• Between the start and finish, the affordance does not change; any changes are accounted for by the realisation of its own ontological dependents.

All the dynamic features find their expression in the start and finish events while their authorities serve as the motors of change. The rules governing dependencies determine the geometry of all schemas based on them. A schema will be a directed graph without cycles. We can be more exact if we introduce another constraint.

• A full schema of ontological dependencies is a semi-lattice with a single root affordance.

• Within the schema, each affordance and the root define a chain of ontological antecedents between the root and the affordance; this lattice we call the stem of the affordance.

• Society, the ultimate agent, occupies the role of root affordance.

Equating the root of all semantic schemas to Society surprises many people. Society is always the ultimate responsible agent. Every feature of the world as we know it has been the responsibility of someone or other, with the discoverer’s identity very often lost in the mists of time. But to reach us, Society has maintained that perceptual norm and passed it on to us. For purposes of operational definitions of common-sense meanings, we have access to Society through a sample of its members – just as the law relies on a jury to supply common-sense interpretations of the non-legal terms it employs. Performing the analysis of perceptual norms using these methods soon forces one to realise how utterly dependent we are upon the part of society in which we are raised for our understanding of how the world is made up of the many physical and social affordances that our ancestors have created over decades, centuries and even millennia. This idea usually provokes discussion, often a rather heated, political discussion. Of course, the various communities within human Society distinguish their own local meanings by choosing their own start and finish authorities. However, the ontological dependency schemas do appear to apply universally: not surprisingly, because they represent only the coarsest of constraints on human behaviour.

As an illustration of these ideas, see Figure 1, a fragment of the NAMAT system for university administration. This shows that a student’s suspension entails a studentship or enrolment, which entails the co-existence of a person and a university, which, in turn, entails a nation state necessary for the university’s legal existence.

[Figure 1 is an ontology chart whose nodes are Society, person, nation, university, enrolment, suspension and order.]

Fig. 1. An example to show ontological dependencies.
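The dependencies of Figure 1 can be encoded directly, and the stem of an affordance computed from them. The encoding below is our own notation, not MEASUR’s; the ‘order’ node is omitted because, as explained in the text, it depends on a sign standing for a suspension (the broken line), not on the suspension itself.

```python
# The ontological dependencies of Figure 1, as a mapping from each
# affordance to its direct ontological antecedents.
FIGURE_1 = {
    "Society": [],
    "person": ["Society"],
    "nation": ["Society"],
    "university": ["nation"],
    "enrolment": ["person", "university"],
    "suspension": ["enrolment"],
}

def stem(schema, affordance):
    """The ontological antecedents linking an affordance back to the root."""
    seen, frontier = set(), [affordance]
    while frontier:
        for antecedent in schema[frontier.pop()]:
            if antecedent not in seen:
                seen.add(antecedent)
                frontier.append(antecedent)
    return seen

# A suspension's stem runs back through enrolment, person, university and
# nation to the root affordance, Society:
assert stem(FIGURE_1, "suspension") == {
    "enrolment", "person", "university", "nation", "Society"}
```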

This illustration introduces another concept that makes explicit the roles played by signs. The schema of ontological dependencies among these universals (person or nation, leaving out particulars such as John and Mary, or Portugal and Great Britain) represents affordances currently realised – but with one exception. Any suspension that is ordered will not exist now but in some future period; what does exist now is only a sign that stands for it. The broken line in the figure indicates a dependency on a sign that stands for a suspension, not the suspension itself. In business or social domains, people have to construct their reality; for the most part they do this using information in the form of speech acts expressing their intentions. Indeed, we can view information systems as instruments for making, maintaining and operating our social reality by means of speech acts and conversational protocols (thus a student’s enrolment results from an offer and an acceptance).

• A schema built of ontological dependencies represents the here-and-now, which is the only reality accessible to us empirically. Past and future realities only exist because we can construct them using signs that do exist here and now.

This statement is a consequence of earlier ones but it shows how this method of analysis forces one to provide a detailed account of how worlds that have no present existence depend upon our use of signs (or information): an excellent discipline for the science of organisational semiotics. The Norma formalism accommodates a sign meaning

Agent affordance (6)

in the normal conventional manner, thus:

“Agent affordance” (7)

A speech act is an affordance of an agent making use of a sign at her disposal:

(Agent “Agent affordance”) speech act (8)

Only by using signs this way can the semantic schema break out of the here-and-now to arrive at models of past and future times. For example, the small schema in Figure 1, above, supports an obligation created by the dean, Dean Swift. We can express this in a formula that starts with the affordance on the right of the schema:

Order (Dean Swift, “suspension (enrolment (John Jones, Reading University (United Kingdom)))”) (9)

written, you will notice, without including Society, the Root, because it can be read from the schema. Moreover, the order could be represented by

Order (Dean Swift, “suspension (John Jones)”) (10)

by using the schema to remove any ambiguity. If the expression were ambiguous – suppose he were enrolled in two institutions – the user would be offered the alternatives and asked to choose. A semantic database can help to make a language succinct. Note the marked difference between the analysis of an order in the figure and the original Language Action Paradigm’s approach [73] based on conversational protocols. The order shown here is not a communication act but an obligation upon the organised group to give effect to the suspension. An order in this sense probably exists for quite an extended period, perhaps until rescinded by the university or until discharged. It would, of course, be brought into existence by the appropriate authority – the Dean, for example – performing an illocutionary act, confusingly also called an ‘order’, but an order in this sense would exist only at the moment when the agent expresses his or her intention with a spoken utterance or by signing a document. Then, if a document records that intention, we have yet another, separate affordance (probably also called an ‘order’), with quite a different period of existence from either the obligation or the illocutionary act! Notice also the benefit we obtain from this rigorous method of analysis. Without it, we would too easily use the term ‘order’ in any one of those three meanings, sliding unconsciously from one to another. To clarify meanings, two principles can help, both of them derived from hypotheses stated earlier.

• To discover the affordance represented by a term used in the problem domain, one must establish when it starts and finishes; thus one term may stand for more than one affordance, distinguished by their different starts and finishes.

• Similarly, the meaning of a term also depends upon the authority (norm or agent) governing the start and finish of the affordance it stands for.
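The two principles can be illustrated with a minimal sketch. The names, instants and authorities below are purely hypothetical, chosen to echo the university example; they are not part of MEASUR itself. One term, 'order', stands for three distinct affordances, told apart by their starts, finishes and governing authorities:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Affordance:
    term: str              # label users apply; several affordances may share it
    identity: int          # arbitrary code identifying the pattern of behaviour
    authority: str         # norm or agent governing its start and finish
    start: int             # instant at which it comes into existence
    finish: Optional[int]  # None while it still exists

def same_affordance(a: Affordance, b: Affordance) -> bool:
    """Two uses of a term denote the same affordance only if their periods
    of existence and governing authorities coincide."""
    return (a.start, a.finish, a.authority) == (b.start, b.finish, b.authority)

# The three homonyms of 'order' from the university example:
obligation = Affordance("order", 1, "the university", start=100, finish=None)  # until rescinded or discharged
utterance  = Affordance("order", 2, "the Dean",       start=100, finish=100)   # exists only at the moment of speaking
document   = Affordance("order", 3, "the registry",   start=101, finish=400)   # the signed record

assert not same_affordance(obligation, utterance)
assert not same_affordance(utterance, document)
```

The shared label is harmless once each use is pinned to its own start, finish and authority, which is exactly what the two principles demand.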

Careful consideration of when an order actually starts and finishes its period of existence, and of the related authorities, soon eliminates the semantic confusion hiding behind the term 'order'.

Time has a privileged position in the theory, and the semantic schema plays a key role in capturing temporal constraints. Every affordance or invariant repertoire of behaviour we perceive and name has a start and a finish, defining its period of existence; only then can one experience it, do things with it, and gain knowledge about it directly. Affordances realised in the past or planned for the future can only be known through information about them. For example, to order the suspension of a student's enrolment some time in the future, the order will not have as an antecedent a suspension currently in force but one that is intended to exist in the future. An affordance in a schema represents only the reality as it exists here-and-now.

If, in an attempt to apply the rule of linking words and numbers to empirically observable reality, you try to experience a moment in time, you will fail. Moments or intervals of time have no existence independent of some affordance that we can directly experience. As soon as its start or finish comes along, it has gone! All we can hope to do is talk about or use signs that represent starts and finishes of things. If we know something directly, such as a student's suspension, we can use its start and finish to create names for these as instants of time ("She fell ill just as her suspension finished."), or we can speak less precisely by reference to a period of existence ("She fell ill during her suspension."). Of course we make use of calendars and chronometers to construct names for all those instants or intervals of time that we need to talk about, but, strictly speaking, those names are inseparable from their timepiece.

This treatment of time, in practice, compels the analyst to make a most exacting analysis of the information being used: it forces one to clarify the meaning of a term by explaining precisely when it starts or finishes its existence. Past and future reality exists only in the present, by virtue of the signs we use to represent them; to understand them, we must know how those signs have been constructed. These ontological dependency relationships make possible the capture of very stable, canonical semantic structures [47, 54]. Gradually, the strict constraints imposed by the theory on the geometry of the ontological dependency schemas revealed their power to impose unique structures on them. This meant that two or more analysts modelling the same pattern of organised activity would produce exactly the same schema. Or, expressed as a hypothesis:

• For a given problem, there will be a unique schema, the Semantic Normal Form, provided that the rules governing the ontological dependencies are obeyed.

Should two analysts produce different schemas, then (a) one or both are in error, or (b) they are modelling different patterns of organisation. Close inspection of their work should reveal which of these causes the discrepancy.

Notice the careful way we must distinguish the three homonyms of 'order', which could all appear in the schema to represent three quite different real-world affordances. The terms are signs that we can manipulate in a computer system, should we wish, but the affordances are patterns of behaviour that involve people. Analysis of the ontological dependencies among the affordances serves to disambiguate the terminology: thus the three different positions where 'order' appears in the schema explicate its three different meanings. By extension, this observation removes the worry we often have about strict uniformity of terminology. The same affordance (identified as a pattern of behaviour by some arbitrary code) can be labelled in the vocabularies of any number of languages, or of registers of the same language. What matters operationally is that the users of the schema can all interpret the labelling uniformly, in the sense that the operations they would perform to locate the affordance would have exactly the same outcome; so another university using the same schema could name the order-as-obligation a 'plan', the communication-order a 'directive', and the order-document a 'form-SUP-1'. As with every information system, the correct interpretation of the labelling depends upon the informal understandings among the community of users. As a corollary, we deduce that the syntactic categories of the labels have little or no significance.

These hypotheses concerning the schema, especially the SNF, have profound consequences for the field of organisational semiotics. Whereas the analytical methods of information modelling currently practiced in the information systems field permit a diversity of interpretations of a given problem domain, each one justified as the "creative element" [72] in the work of an experienced professional, the SNF hypothesis asserts that there exists a unique solution for a given domain. This looks like a dangerously extreme form of authoritarianism, but quite the contrary. By all means give rein to the creative designers of systems that manipulate strings of characters (computer systems), who use conventional information analysis with all its permitted arbitrariness. However, do respect the agreement about the meanings and ways of behaving that the people in an organisation have established. The SNF registers the uniformity that people must establish in their work and their talk in order to collaborate effectively. If each person engaged in some organised activity were free to adopt quite different meanings for the words in the messages exchanged with colleagues, organisation would decay into anarchy. As explained a little later, this does not exclude individual interpretations within the constraints of the uniform structure of ontological dependencies. So we assert:

• In any single problem domain, a Semantic Normal Form governs the perceptual norms evolved by the people engaged in it.

Among the experimentalists, observers, theorists and engineers in any field of the natural sciences, collaboration depends upon a very high degree of uniformity in their perceptions of, and procedures for dealing with, their subject matter. This does not absolutely rule out the possibility of individuals introducing "creative elements" into their work. However, a scientist introducing novel ideas will have to explain to colleagues, in operational terms, how they are refining the standard categories or procedures. Colleagues will then test these refinements operationally in their observations and experiments, before accepting the implied changes in their own perceptual norms. The constraints imposed by the SNF only apply to a restricted core of the structure of affordances, and hence to the meanings of the terms that label them. The refinement of this structure and of the associated meanings comes from the norms that determine the start and finish regulating the existence of each affordance. So, in the little example above, the meaning of 'university' will vary, depending on the complex affordance it stands for, from nation to nation and from period to period, as defined in the statutes that govern its existence and the affordances granted to it. This leads to the hypothesis:

• The addition of the authorities starting and finishing the existence of the affordances in the SNF completes the specification of the pattern of organisation.

The description of the organisation in concrete detail will also entail, of course, adding details of the particular people, offices, actual obligations, etc. These fit into the same patterns already introduced. The SNF has another important consequence for the design of databases. This practical spin-off does not constitute a part of the theory, but it has theoretical implications. The points already introduced may be restated as a hypothesis about the representation of affordances:

• Intrinsic to and inseparable from every affordance are the following attributes that must be recorded in any symbolic surrogate for an invariant repertoire of behaviour: an identity, a sort, one or two ontological antecedents, a start, a finish, and the authorities for the start and finish.
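As a hedged illustration, the uniform attribute structure of such a surrogate can be sketched as a record type; the field names and sample values are ours, not a published STDB schema:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Surrogate:
    identity: int
    sort: str                       # e.g. 'agent' or 'affordance'
    antecedents: Tuple[int, ...]    # identities of the ontological antecedents
    start: int
    finish: Optional[int]           # None while still in existence
    start_authority: str
    finish_authority: Optional[str]

    def __post_init__(self):
        # the SNF rule: at most two ontological antecedents
        assert len(self.antecedents) <= 2, "at most two ontological antecedents"

# Society, the Root, has no antecedent; every other surrogate traces back to it.
society    = Surrogate(0, "agent", (), 0, None, "convention", None)
person     = Surrogate(1, "agent", (0,), 5, None, "registration of birth", None)
university = Surrogate(2, "agent", (0,), 10, None, "statute", None)
enrolment  = Surrogate(3, "affordance", (1, 2), 20, None, "the registrar", None)
```

Every affordance, whatever its sort, fills exactly the same seven attributes, which is why one uniform relation per affordance suffices.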

If we want to use a computer to handle these surrogates, we can imagine them in the form of relations in a relational database, one for each affordance but every affordance having exactly the same attribute structure. (Incidentally, the SNF constraint eliminates any need to consider the problems of normalisation that play a key role in the design of relational databases.) A glance at the structure of a norm, shown above, reveals the condition expression, which may describe any state of affairs and, hence, will have to be expressed in a language with complete manipulation powers over the population of surrogates.

Given a Semantic Temporal DataBase (STDB) of this kind, we can now consider the problem of a manipulation language in the light of the SNF hypothesis. That brings us back to the starting point for this research programme: the development of a language in which to represent legal norms. Finally we discovered that we had started the work at the wrong end! The combination of clarity and brevity that we aimed to achieve remained an illusion until we had the SNF to guarantee the integrity of the high-level temporal operators. These operators, such as a while b, a whenever b, a before b and so on, assume the consistency of the periods of existence embedded in any complex operands, such as 'order' in the university illustration.

The practical Legol language, and its theoretical sister, Norma, require much further work on their formal properties. Resources for that have not been available to date. We have had to be content with making sure Legol works in practice, for which a series of computer interpreters has been produced [45, 38, 39, 22, 8, 68, 69, 26, 27]. With an SNF-compliant schema we can begin to formulate norms using these operators. We find that non-technical users have little difficulty understanding the resulting expressions because they are not remote from natural language; SQL, on the other hand, needs a whole disconcerting page to say "a while b". Work on the latest version of the language is currently proceeding.
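The intuition behind such operators can be sketched over (start, finish) periods of existence. This is only an illustration of the periods-of-existence reading described above, not Legol's actual syntax or full semantics:

```python
from typing import Optional, Tuple

Period = Tuple[int, int]  # (start, finish) instants of a period of existence

def while_(a: Period, b: Period) -> Optional[Period]:
    """'a while b': the period during which both a and b exist, if any."""
    start, finish = max(a[0], b[0]), min(a[1], b[1])
    return (start, finish) if start <= finish else None

def before(a: Period, b: Period) -> bool:
    """'a before b': a's existence finishes before b's begins."""
    return a[1] < b[0]

enrolment  = (20, 200)   # a student's enrolment
suspension = (50, 90)    # a suspension ordered within it

assert while_(enrolment, suspension) == (50, 90)
assert before((10, 15), suspension)
```

Because the SNF fixes the start and finish of every affordance, operands like 'order' carry consistent periods and such operators compose safely.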

5 MEASUR

This collection of techniques for eliciting, analysing and specifying user requirements evolved from work on the theory outlined above. Various techniques were tentatively reported over the years [44, 49, 51, 56, 59] and then outlined as a coherent method4 in Stamper [54] and [58], offering three main groups of techniques. They have all been tested separately in both public and private sector organisations and show great promise; however, they have not yet been put into a form that makes them readily accessible to the wide range of potential users, which is one purpose of the current SEDITA Project.

4 More often called a 'methodology' in this context, but we reserve that term for its more accurate use: the comparative and critical study of methods.

MEASUR is based on the hypothesis: the organisation = the information system. Some grasp of the above theory can help when applying MEASUR; however, it is not essential, and the SEDITA project aims to minimise the need for the theory among practitioners. Most important is to see MEASUR as a method for precisely and formally specifying an organisation's requirements in terms of meanings, intentions, authorities, responsibilities, norms and knowledge. The result is an IT-independent specification that ordinary users can easily verify, as it contains only business concepts. Designs based on the SNF are stable, and so easy to maintain. Their formality and rigour make them easy to implement, with the potential for automatic translation from the business specification. All the techniques contribute to the analysis of norms, as one might guess from the account of the theory. However, 'Norm Analysis' is reserved for the part of MEASUR that deals with the precise specification of the norms and authorities determining the starts and finishes of affordances. In discussing the three main groups of techniques in the normal sequence in which they are applied, we shall draw some comparisons with other methods.

5.1 Phase I: Problem Articulation

These techniques reveal the gross architecture of a business conceived as a norm system. Especially when applied to ill-defined problems, these soft-system techniques elucidate the problem and help in conjecturing solutions that can be subjected to detailed analysis by the other techniques. Problem Articulation techniques are relatively easy for users to adopt. Experiments comparing Problem Articulation with one company's well-established methods demonstrated marked improvements in performance over a whole range of characteristics, from finding more alternative solutions and being able to judge them more thoroughly to reducing disputes among the team. They form an essential part of the final methodology but are not a major focus of attention in the SEDITA project, so we shall limit ourselves to mentioning two techniques and outlining one other that are essential preliminaries to the next phase.

The technique of Collateral Analysis reveals the position of the problem solution in its socio-technical environment, creating a full checklist of collateral problems that have to be solved. Unit Systems Analysis picks out one manageable system, analyses its key parameters and passes the work to Communication and Control Analysis. Analysing legislation taught us the third technique: the core tasks are covered by a relatively small set of substantive norms that exclude all issues of communication and control. MEASUR then works on this small kernel system. Only then are the control and communication norms expressed. The information-flow methods currently in use rush ahead to work on the flows of messages without addressing the substantive business activities, so they ignore what their messages are talking about.

5.2 Phase II: Semantic Analysis

Now begins the analysis of the terminology the users employ to get their work done. With the aid of documents, forms and interviews, the analyst helps the users to select the terms that are candidate names for affordances. Together they build the schema of ontological dependencies for the business domain under investigation. Fortunately, the users can normally make much better sense of these schemas than of the documentation produced by contemporary methods such as UML. This was demonstrated by the CONTEST system [32]. During the design of this commercially available software package for generating tests of skills and knowledge, the professional sponsors, whose knowledge was to be made available via the package, were baffled by the documentation and had lost contact with the likely product. Turning to MEASUR, the SNF cut the documentation by a factor of 20 and reversed the technology-push that threatened to kill the project, enabling the sponsors to understand and tune the design to their liking.

The analyst has a more difficult task. Without considerable experience, semantic analysis is difficult. (Ironically, trained IS personnel have more difficulty than others because they attempt to treat the schema as a flow- or sequence-model.) However, because it produces re-usable structures that will serve any similar problem domain, semantic analysis can, in most normal cases, be performed relatively easily with the support of a 'dictionary' of these re-usable patterns. Semantic analysis produces a Semantic Normal Form, the most stable core of the business system, and the key to the major economic advantages of the methods. The same SNF may apply across many systems with varying local features, which further underpins the wide relevance of the semantic dictionary. The full analysis of meanings depends upon the detailed norms associated with each affordance in the SNF. Popper's refutationism plays a key role in semantic analysis.
One must treat each attempted schema as a scientific hypothesis about the way people in the domain being studied understand it and do their work within it. Validating the SNF consists of "ingenious and severe attempts to refute" it. If it withstands many attempts, one grows more confident that it will fit into the growing corpus of schemas in the dictionary and will provide a firm foundation for business designs and robust computer applications.

We have already noted that semantic analysis has a totally different function from conventional information-, data- and object-modelling, despite a superficial resemblance. The arbitrariness of data models makes them unable to capture meanings; they define relations among character strings and rely upon intuition and informal definitions to provide whatever is known about meanings. MEASUR largely removes this dependence on intuition about meanings, intentions and responsibilities. Thus MEASUR makes possible a thoroughly rigorous treatment of the IT system at the non-technical semiotic levels.

Other claims to handle meanings appear elsewhere in the IS literature, but one finds they treat semantics in a computational or mentalistic sense. As examples: in Berners-Lee's Semantic Web, a machine passes the semantic test if it will do "the right thing" with the data that it receives; Project Cyc [11] (www.cyc.com) does not attempt to leave the world of language objects, as their "KB consists of terms--which constitute the vocabulary of CycL--and assertions which relate those terms"; Gruber [18] adopts a mentalistic strategy to solve the semantic question, "a formal, explicit specification of a shared conceptualisation"; Kowalski [23] explicitly rejects any need to concern ourselves with this 'exterior' world. But anyone running a business depends critically upon the link that meanings forge between the information they use and the real things and events for which they carry responsibility. For them, "What does 'meaning' mean?" is a question of great practical importance and must be answered in terms of the reality the business deals with.

As already noted in the theoretical discussion, this simple practical requirement raises the philosophical problems we encountered in this research. As meanings always relate signs to the real things they stand for, we need a clear ontological commitment concerning the nature of reality5. Before we could find an answer, we had to discard the simplistic notion that we live in a world of ready-made individuals: it does not work in the social domain. The solution has huge practical implications. The SNF was used to develop and implement two IT systems, NAMAT and CONTEST, with marked success. The large student administration system, NAMAT [1, 14, 2, 64], is comparable with a well-established, flexible software package that has been implemented worldwide on well over 200 sites.
At one site, after six years, NAMAT cost only a seventh as much in support and maintenance; though implementation costs could not be compared directly, it seems indicative that 80 R&D staff worked on the package, whereas NAMAT was built from scratch by a team of five and maintained by one, delivering precisely what the users needed, unlike the package. Much the same appears to be true of another, UK implementation of the package, on the basis of a short enquiry. This experience lends confidence to our research results.

5.3 Phase III: Norm Analysis

Whereas Semantic Analysis deals with perceptual norms, Norm Analysis, in the restricted sense used here, concerns the authorities for each affordance’s start and finish. These events determine where the dynamic rules of the system belong. The conventional methods certainly could benefit by adopting the technique recommended in the discussion of the theory: that of making clear precisely when things come into and go out of existence. Norm Analysis in this sense is no more than legal draftsmen have been doing over the five-and-a-half-thousand years they have been writing laws. Our Norm Analysis is quite different.

5 Note that, in this sense, an ontology is a metaphysical position, not, as in the parlance of AI, a data model.

The special feature of Norm Analysis in MEASUR arises from its exploitation of the SNF, which must be created first. When an SNF-compliant schema is available, the norms can be expressed using the Legol formalism with its temporal operators that depend upon the exact ordering of existence relationships among the affordances. From that point on, at least in principle, system generation is direct and simple.
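The general shape of a behavioural norm built on such a schema can be sketched as follows. The "whenever <condition> then <agent> is <deontic> to <action>" pattern, names and values here are illustrative, not the published Legol syntax:

```python
from typing import Callable, Optional, Tuple

State = dict  # maps affordance names to their current period of existence, or None

def norm(condition: Callable[[State], bool],
         agent: str, deontic: str, action: str):
    """whenever <condition> then <agent> is <deontic> to <action>"""
    def evaluate(state: State) -> Optional[Tuple[str, str, str]]:
        return (agent, deontic, action) if condition(state) else None
    return evaluate

# A suspension norm echoing the university example: while an order-as-obligation
# and the enrolment both exist, the registrar is obliged to suspend the enrolment.
suspend = norm(
    condition=lambda s: s.get("order") is not None and s.get("enrolment") is not None,
    agent="the registrar", deontic="obliged", action="suspend the enrolment",
)

assert suspend({"order": (100, None), "enrolment": (20, None)}) == \
       ("the registrar", "obliged", "suspend the enrolment")
```

The condition ranges over the periods of existence fixed by the SNF, which is why the schema must be created before Norm Analysis can begin.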

6 Phases IV and Beyond: Specifying More of the Unit System and More Units

No further techniques are involved in this and later Phases. So far the work has applied only to the kernel of one unit system. Now we begin to add the communication and control norms in a recursive procedure. We shall need to communicate about controls, control communications, communicate about communications, apply controls to the controls, and so on, as far as necessary. Using legislation as the empirical material makes this structure clear: in any Act of Parliament, the kernel of substantive norms needs only a few pages, while the bulk of the Act consists of the many layers of communication and control norms. This is where the precise classification of norms, mentioned in the theoretical discussion, comes into play by supporting an orderly series of analytical stages.

The kernel of the system already lays the foundations. We saw in Figure 1 the order-as-obligation that determines when a student's enrolment will be suspended. To create this obligation the Dean, we suggested, would perform an illocutionary act by issuing an order to that effect. This treats only the meaning of a message type, which UML, for example, would deal with as information that flows. Being able to work with the meanings of messages, abstracted from any form they might adopt, both simplifies the analysis and avoids the common error of pre-empting design decisions. There is no problem about introducing the messages into the analysis; indeed, one can do so in two stages. The first stage recognises the larger sign-types that may aggregate several meanings (an audit report, for example) to be handled as a unit for passing information through a channel. The second stage looks at the representation of these formats in some physical medium to create the actual sign-tokens that consume paper or electricity and receive authorising signatures.

Implementation and the Full System Life-Cycle

The unit systems arrived at by Problem Articulation serve as the elements in a project-planning network. To estimate the resources needed, the SNF schema supplies an accurate framework and checklist for gathering the relevant data: the numbers of affordances at the universal and particular levels, their periods of existence, their rates or probabilities of starting and finishing in various circumstances. These data can be handled conveniently by a meta-model built using exactly the same techniques.

MEASUR specifications can be implemented by translating them into conventional documentation. Ades planned to do this with the NAMAT system but recognised the waste of effort when, instead, direct implementation proved feasible: it worked very well. The later stages of the life-cycle cost far less than usual for a system based on an SNF-compliant schema. The canonical schema means that the system built so far needs no reprogramming when it is extended. One augments the existing system with any new affordances and their functionality; very often users need only functions already built, because the SNF has forced one to build them, so they need only instructions on how to apply them. Because of the uniform structure of every affordance, a default interface is available to serve any new function, making possible its rapid introduction followed by later refinement.

One of the most interesting developments (not within the SEDITA project) will be the direct generation of computer applications from the formal MEASUR specification. Throughout the research programme, interpreters have been built for each of the Legol versions [45, 39, 38, 22, 8, 68, 69, 21, 27, 26, 2]. The earlier NAMAT system was built by programming a small number of powerful modules that were easily assembled; a later version implemented a meta-system for building flexible applications at the object level. The ontological schemas were shown to support an innovative Semantic Temporal Data Base (STDB) with Legol serving as a manipulation language. Linking the norms to the authority attributes in the surrogates, and extending the manipulation language, turns the STDB into a NormBase, which has received some prototype development [55]. Much remains to be done in these areas.

Other Uses

Unlike conventional tools for system development, MEASUR specifies the knowledge and functions essential to the human organisation under investigation. The formally specified norms, entirely expressed in terms of human activities, need never lead to a computer system. Nevertheless, they provide, from the earliest stages of analysis, valuable information about the way the organisation functions. MEASUR is worth applying if only to obtain a better understanding of the problem. The solution may be better implemented by changing the organisational culture rather than by further IT investment. Between those extremes, measures of re-engineering can be considered, especially as MEASUR provides a systematic approach. By defining the kernel system confined to the substantive activities, one has a benchmark for the tasks essential to the business; the information flow can then be re-engineered to achieve better performance in any way that meets the kernel requirements of the business [36].

In the research programme, the opportunities to carry on right through to program specifications for those tasks deemed suitable for automation have been few. However, many dozens of cases have been investigated and reached the specification stage. That work itself has often produced insight that management could implement with significant immediate benefit. This contrasts with the conventional methods for IS systems development, which tend to leave the organisation waiting a long time until the automation begins to deliver some savings; with COTS and ERP solutions the up-front costs can be even higher. The design of complex rules or legal norms such as legislation or contracts has received attention [8, 10, 51, 56, 60]. Many systems of legal norms – legislation and insurance policies, for example – also need to be implemented through systems of routine administration: MEASUR allows these tasks to proceed in parallel.

7 An Agenda for Further Research

The ambition of SEDITA is to integrate the MEASUR methods in a form that makes them ready for industrial use. Much more effort will be required if this ambition is to be fully achieved, because the scope and resources of the project are limited. By way of a conclusion, consider how this new scientific paradigm opens up a huge range of research topics, some of which have been difficult or impossible to address rigorously before. Here is a speculative list of possibilities, arranged under the Semiotic Framework:

• Social level: test the hypothesis that knowledge=norms; develop a full taxonomy of norms; apply the taxonomy to the architecture of various forms of organisations; attempt to apply the information field model to complex social interactions with probability estimates and expectations of the intentions and judgements of others; develop tools to assist the analysis and design of legislation and the supporting information systems; etc.

• Pragmatic Level: detailed modelling of the semantic constraints on a wide range of speech acts; relating attitudes to the speech acts that make and change them; use the formal tools to investigate responsibilities and how they interrelate; model the provenance of data through many layers of reporting over extended time periods from first observation or expressed intention to their registration in a computer system; etc.

• Semantic Level: how radically must a world view change before an SNF has to be adjusted? (e.g. the phlogiston versus the oxygen theory of combustion, or migration between two widely separated cultures); continue to search for counter-examples to the rule allowing a maximum of two antecedents; diagnostics for SNF structures; improve the theory of higher-level sorts or categories; use semantic analysis for knowledge elicitation; empirical studies of the semantic dissonances across legacy systems; possibility of public semantic models for much improved EDI; the nature of semantic reasoning – the adjustment of meanings to achieve a desirable implication; handling data with semantic equivocation, such as international economic and social statistics; etc.

• Syntactic Level: application of category theory to the validation of ontological dependency schemas; Norma and Legol formalisms; SNF semantics-based translation; traversing the SNF to generate utterances in different languages; SNF applied to syntactic structures such as interface specifications; application of SNF and Legol to the programming of parallel computing devices; etc.

• Empirics Level: organisational metrics based on SNF; metrics for data, transaction activity, storage requirements etc.; encoding of data with varying levels of redundancy and the use of SNF to disambiguate encoding; different meanings of probability and their application to all these metric issues; patterns of errors and the effects of making amendments on the processing of data; complexity of systems problems in different socio-technical contexts (use of Problem Articulation); etc.

• Physical Level: Use of start and finish authorities to map the movements of messages; the addressing act in communications; semantics of key physical concepts such as space, time, causality etc.; modelling sign-token activities and the physical tracking of data; access and security; storage problems given that data may not be deleted; relationship between identifiers of surrogates within the system and the physical processes of identification externally; deployment of resources in systems engineering, strategies such as Just-in-Time development, system invariance during migration to a new technical platform; etc.

Collaborators are most welcome and will receive the willing assistance of the SEDITA team. Helpful materials on MEASUR, its underlying theory and its applications will be mounted on a portal (www.orgsem.net) that has been developed to support our collaboration.

Acknowledgements

The SEDITA project is supported by the UK Engineering and Physical Sciences Research Council (EPSRC) (grant number GR/S04840).

References

1. Ades, Y.M., Eid, A., Namat User Manual, Doha, University of Qatar (1989)
2. Ades, Y.M., "Semantic Normal Form: Compliance", Proc. 2nd Workshop on Organisational Semiotics, Enschede (1999) 12-14
3. Alderson, A., J. Yap, K. Liu and H.U. Shah, Relating Organisational Semiotics, Process Modelling and Stakeholder Viewpoints to Elucidate and Record Requirements, Proc. Workshop on Systems Modelling for Business Process Improvement, University of Ulster, Coleraine (1999)
4. Andersen, P.B., A Theory of Computer Semiotics, Cambridge, Cambridge University Press (1997)
5. Blaug, Mark, The Methodology of Economics, Cambridge, Cambridge University Press (1980)
6. Bloor, David, Knowledge and Social Imagery, London, Routledge & Kegan Paul (1976)
7. Cabinet Office, Successful IT: Modernising Government in Action, CSSA June 2000 Getting IT Right for Government, London (2000)
8. Cook, S., P.J. Mason, C. Tagg, LEGOL-2.1: User Guide, Working Paper, London School of Economics (1980)
9. Cook, S., R.K. Stamper, "LEGOL as a Tool for the Study of Bureaucracy", in The Information Systems Environment, Lucas et al. (eds), North Holland, Amsterdam (1980)
10. Crooks, R.M., LEGOL as an Aid for the Drafting of Legislation, MSc Project Report, London School of Economics (1981)
11. Davies, J.D., F. van Harmelen (eds), Towards the Semantic Web: Ontology-driven Knowledge Management, New York, John Wiley (2002)
12. Dowty, D.R., R.E. Wall and S. Peters, Introduction to Montague Semantics, Dordrecht, Reidel (1981)
13. Eid, A., M. Higazi, H. Shashea, and Y.M. Ades, Namat User Manual, NAMAT Project, Computer Centre, Doha, Qatar University (1989)
14. Eid, A. and R. Al-Kobesi, Namat 5 Maintenance Manual Prototype 3.0, NAMAT Project, Computer Centre, Doha, Qatar University (1991)
15. Falkenberg et al. (eds), A Framework of Information Systems Concepts: The FRISCO Report, IFIP, Geneva (Web edition: ftp://ftp.leidenuniv.nl/pub/rul/fri-full.ftp) (1998)
16. Gibson, J.J., The Senses Considered as Perceptual Systems, Allen & Unwin, London (1968)
17. Gibson, J.J., The Ecological Approach to Visual Perception, Boston, Houghton Mifflin Company (1979)
18. Gruber, T.R., Toward Principles for the Design of Ontologies Used for Knowledge Sharing, Knowledge Systems Laboratory, Stanford University (1993)
19. Hindess, Barry, Philosophy and Methodology in the Social Sciences, Hassocks, Harvester Press (1977)
20. Holmqvist, B., P.B. Andersen, H. Klein, R. Posner, Signs of Work, Berlin, De Gruyter (1996)
21. Heyun, Liu, "Implementation of Legol 2.0", Working Paper, Computer Science Research Department, Coventry Polytechnic (1987)
22. Jones, S., P.J. Mason, R.K. Stamper, "LEGOL-2.0: A Relational Specification Language for Complex Rules", Information Systems (1979) 4, 4
23. Kowalski, R., Logic of Problem Solving, North Holland, Amsterdam (1979)
24. Michaels, C.F. and C. Carello, Direct Perception, Englewood Cliffs, Prentice-Hall (1981)
25. Law, D. and R.K. Stamper, Criteria for Comparing Systems Requirements, GMD Bonn/NCC

Manchester (1984) 26. Liu, K., Boekkooi-Timminga, L. Sun, "Systems Analysis of a Computerized Test Construction System (CONTEST)", Working paper, Enschede (1990) 27. Liu, K., Semiotics Applied to Information Systems Development, PhD thesis, University of

Twente, Enschede ISBN: 90-9006076-6 (1993) 28. Liu, K., A. Alderson and Z. Qureshi. Requirements Recovery of Legacy Systems by

Analysing and Modelling Behaviour, Proc. International Conf. on Software Maintenance, IEEE, Computer Society, Los Alamitos, (1999) pp3-12.

29. Liu, K. and L. Sun, Capturing Temporality and Intentionalilty in Information Systems, Proc. of Workshop of Language Action Perspectives, Mareike Schoop (ed), Tech. U. of Aachen (2000)

30. Liu, K., L. Sun, J. Barjis and J. Dietz, Capturing Organisational Behaviour with Dynamic Modelling, Proc. IFIP Conf. on IT for Business Management, Beijing (2000) 21-25

31. Liu, K., Semiotics in Information Systems Engineering. Cambridge University Press, Cambridge (2000)

32. Liu, K., P. Klarenburg, F. van Slooten, "Basics of Legol-3.0", Working paper,

Enschede, University of Twente (1990) 33. Liu, K., R.K. Stamper, P. Anderson and R. Clarke (eds), Proceedings of 2nd International

Workshop of Organisational Semiotics, September (1999), Almolo, (also by Kluwer Academic Publishers in 2000).

34. Liu, K., R. Clarke, R.K. Stamper and P. Anderson (eds), Proceedings of 3rd International Workshop of Organisational Semiotics, (2000) Stafford, (also by Kluwer Academic Publishers in 2001).

35. Liu, Kecheng, Lily Sun, Keith Bennett, Co-Design of Business and IT Systems, J. of Information Systems Frontiers, 4(3), (2002) 251-256.

36. Liu, K., (to appear), Virtual, Distributed and Flexible Organisations - Studies in Organisational Semiotics, Kluwer Academic Publishers (2004)

37. Liu, Xiaojia, Employing MEASUR Methods for Business Process Re-engineering in China, PhD Thesis, Enschede, University of Twente (2001)

38. Mason, P.J., R.K. Stamper, "The LEGOL Implementation", IBM UK Peterlee Scientific Centre Report (1978) 39. Mason, P.J. (ed.), "Manual of the LEGOL-2.0 Language", Working Paper, London School of

Economics (1978) 40. Mason, P.J., A Systems Analysis Workbench, in Computer Bulletin 2(23), p. 26, 27, 3l

Parliamentary Select Committee on Public Accounts, 2000, Improving the Delivery of Government IT Projects (House of Commons, HC 65) (1980).

41. Searle, John, The Construction of Social Reality, Harmondsworth, Penguin (1995) 42. Sergot, M.J., F. Sadri, R. Kowalski, F. Kriwaczek, P. Hammond and H.T. Cory, The British

Nationality Act as a Logic Programme, Communications of the A.C.M. (1986) Vol 29, No 4. 43. Stamper, R.K., Information in Business and Administrative Systems, Wiley, New York and

Batsford, London (1973) 44. Stamper, R.K., "The LEGOL Project and Language", Proc. of Datafair Conf., British

Computer Society, London (1973) 45. Stamper, R.K., "The LEGOL-1 Prototype Systems and Language", in Computer Journal 20, 2

(1977) 46. Stamper, R.K., P.J. Mason and C. Tagg, Design Methodology and Control of the Systems Life

Cycle Using LEGOL, Proc. of the 5th European Conf. of the European Datamanager User Group, Paris (1979)

47. Stamper, R.K., Towards a Semantic Normal Form, in Database Architecture, G. Bracchi, G.M. Nijssen, North Holland, Amsterdam (1979)

48. Stamper, R.K., P.J. Mason, C. Tagg, "Design Methodology and Control of the Systems Life Cycle Using LEGOL", Proc. 5th Euro Conf. of the Datamanager User Group, Paris (1979) 49. Stamper, R.K., Evolutionary Development of Large Systems, in Infotech State of the Art

Report: System Design, London Pergamon Press (1981) 50. Stamper, R.K., Information Analysis in LEGOL, in Information Modelling, J.A. Bubenko (ed.), Studenlitteratur/Chartwell Bratt, Lund (1983) 51. Stamper, R.K., "Collateral Systems", "General Sub-systems" & "Comparison of Requirements

Analysis Methods", in Criteria for Comparing Systems Requirements, D. Law, & R.K. Stamper (eds.), GMD Bonn/NCC Manchester (1984)

52. Stamper, R.K., "Legal Drafting and Semantic Analysis", in Gesetzgebung und Computer, Th. Ohlinger (ed.), J. Schweitzer Verlag, München (1984) 53. Stamper, R.K., Knowledge as Action: A Logic of Social Norms and Individual

Affordances. In Gilbert, G. N. and Heath, C., (eds.), Social Action and Artificial Intelligence. Grower Press, Aldershot (1985)

54. Stamper, R.K., J. Backhouse, S. Marche and K. Althaus, Semantic Normal Form?, in Meaning: the Frontier of Informatics, K.P. Jones (ed), ASLIB London (1988)

55. Stamper, R.K., K. Liu, M. Kolkman, Y.M. Ades and C. van Slooten, From Database to Normbase, in International Journal of Information Management

(1991) 11, 3, p. 67-84 56. Stamper, R.K., Kolkman, M., "Problem Articulation: A sharp-edged soft systems approach", in Journal of Applied Systems Analysis 18, (1991) pp. 69-76 57. Stamper, R.K., The Parallel Development of Laws and Their Related Information Systems, in Wetgeving en Informatievoorziening. I. Snelling (ed.), Erasmus University Rotterdam, for the Netherlands government (1992) 58. Stamper, R.K., Social Norms in Requirements Analysis - an outline of MEASUR. In

Jirotka, and M., Goguen, J. (eds.), Requirements Engineering, Technical and social aspects. Academic Press, N.Y. (1994)

59. Stamper, R.K., K. Liu, K. Huang, Organisational Morphology in Re-engineering, Proc, 2nd ECIS, Nijenrode University, ISBN 90-73314-24-0, (1994) pp. 729-737. 60. Stamper, R.K., 1996, Ontological Dependency, Proc.12th European Conf. on AI, Budapest,

(1996) 11-16 61. Stamper, R.K. Signs, Information, Norms and Systems, in Holmqvist et al, (1996) pp.349-

397. 62. Stamper, R.K., “Information Systems as a Social Science: An Alternative to the FRISCO

Formalism” in Falkenberg et al (eds), Information System Concepts, Boston, Dordrecht, London, Kluwer Academic (2000)

63. Stamper, R.K. Relating Semantics and Communication Acts, Proc. 5th Int’l Workshop on the Language-Action Perspective on Communication Modelling, Aachen, (2000) 14-16

64. Stamper, R.K. and Y.M. Ades, Semantic Normal Form and Enhanced System Quality, Working Paper accepted for publication (2004)

65. Strassmanm, P.A., Information Payoff, Free Press, New York, (1985) 66. Strassmann, P.A. The Business Value of Information Technology, Strassmann Inc, New

Canaan, Conn (1990) 67. Strassmann, P.A. The Squandered Computer, Information Economics press, New Canaan,

Connecticut (1997) 68. Tagg, C., LEGOL-2.l: Reference Manual, Working Paper, London School of Economics (1979) 69. Tagg, C., The LEGOL-2.1 Prototype, Working paper, London School of Economics (1980) 70. Timminga, E and L. Sun, CONTEST: A Computerised Test Construction System. In J.

Hoogstraten and W.J. van der Linden (eds), Onderwijsresearchdagen'91, Stichting Centrum voor Onderwijsonderzoek, Amsterdam University, Amsterdam, (1991) pp. 69-76.

71. US Department of Commerce, Digital Economy 2000, Washington DC, http://www.itl.nist.gov/iad/highlights/DigitalEcomny/index.htm (2000)

72. Veryard, R, 1992, Information Modelling: Practical Guidance, New York, London, Prentice Hall 73. Winograd, Terry. and Carlos F. Flores Understanding Computers and Cognition, Norwood,

N.J., Ablex Publishing Co. (1986)