
The XForms 2.0 Test Suite

Steven Pemberton, CWI, Amsterdam

    Abstract

XForms 1.0 and 1.1 both had test suites that consisted largely of static XForms documents. To run the tests you had to activate them manually one by one, and then visually confirm that the output matched the description of what should have been produced. If you wanted to add more cases to a test, it involved adding to the set of documents, or editing the individual documents.

The test suite for XForms 2.0 now being constructed takes a different approach, the idea being that the tests should check for themselves that they have passed; most tests have a similar structure, so that only the data used needs to be altered to check new cases.

Of course, for a language designed for user interaction, some tests have to be based on physical interaction. But once you have confirmed that clicking on a button does indeed generate the activation event, all subsequent tests can generate the activation event without user intervention.

The introspection needed for tests to check the workings of the processor doing the testing can raise some challenging problems, such as how to test that the initial start-up event has been sent when the facilities for recording that fact have not yet been initialised.

This paper considers the techniques used to create a self-testing XForms test suite, discusses some of the problems encountered, and gives examples of how some of them were solved.



1. Introduction

XForms [1] is a W3C standard XML-based markup language which, as the name would lead you to expect, was originally designed for a new generation of forms on the Web. Indeed, version 1 was exactly that, and to a large extent mirrored what could be achieved with HTML forms, while adding extra facilities. However, after a short period of experience, it was realised that with a small amount of generalisation, XForms would be suitable for other sorts of interactive applications as well.

And so was born XForms 1.1: a declarative, Turing-complete programming language suitable for interactive applications (and forms) both on and off the web.

Since then XForms has been used by a broad, international user population, from small companies to multinationals, with more than six implementations available from around the world.

One of the surprises has been that, as experience has repeatedly shown, using XForms for applications reduces application development time by an order of magnitude, with concomitant reductions in cost. This is largely thanks to the declarative style of programming that XForms uses: much of the administrative side of regular procedural programming is taken care of automatically by the XForms system.

    Now, XForms 2.0 is in development [2], a further generalisation of the previous version.

2. Introduction to XForms

The essence of XForms is state, which means that it meshes extremely well with the REST (Representational State Transfer) architectural style [3].

    An XForms application has two parts: a model, and a user-interface.

The model contains all the data being used, stored in instances, along with descriptions of properties of, and relationships between, the data. For instance, data can be defined inline:
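For example, something along these lines (the element names are illustrative; the two date values are those from the example):

    <instance>
       <data xmlns="">
          <start>2018-01-20</start>
          <display>2018/01/20</display>
       </data>
    </instance>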

    or can be imported from an external source:
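A minimal sketch, with a hypothetical filename:

    <instance src="data.xml"/>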

Descriptions of data properties are given using bind statements that bind properties to data values:

    Controls in the user-interface then bind to the data to allow interaction:
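For instance (the node name and label text are illustrative):

    <input ref="start">
       <label>Start date</label>
    </input>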

After initialisation the system is in stasis: the data matches the descriptions, and the relationships between data are up to date. After that, events occur, either system-generated or user-initiated, causing data values to change, to which the XForms system responds in order to return the system to stasis. While in general the system responds to events in standard ways, applications can also catch events and specify actions that define how to respond to particular events in particular ways:
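A purely illustrative handler, assuming a hypothetical flag node in the instance that is set when the bound value changes:

    <action ev:event="xforms-value-changed">
       <!-- 'changed' is a hypothetical instance node -->
       <setvalue ref="changed">true</setvalue>
    </action>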


As a simple example, a map application might keep the x and y coordinates of a location and a zoom level for looking at the map:
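For instance (element names are illustrative; the values are a zoom level of 10 and tile coordinates 511 and 340):

    <instance>
       <map xmlns="">
          <zoom>10</zoom>
          <x>511</x>
          <y>340</y>
          <url/>
       </map>
    </instance>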

The URL of the relevant map tile for that location can then automatically be kept up to date whenever and however any of the values change, by using a bind:
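A sketch of such a bind; the tile server and the zoom/x/y URL scheme shown here are hypothetical:

    <bind ref="url"
          calculate="concat('https://tile.example.org/', ../zoom, '/', ../x, '/', ../y, '.png')"/>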

    See [4] for a further introduction to XForms, and [5] for a fully worked-out mapping application in XForms.

3. Test Suites

As part of the process of defining a new standard, it is recognised good practice to define a test suite to go along with the specification. The principal reason for having a test suite is of course to allow implementers to check that their implementations correctly interpret the definitions in the specification. However, users of implementations can also gain confidence in the implementation they use by seeing the results from the test suite. And finally, it is useful for the specification writers themselves: by forcing them to think of test examples, it can expose corners of the specification that have not yet been sufficiently well defined.

4. The XForms 1.* test suite

The original XForms 1.0 and 1.1 test suites [6] were defined as a large collection of files, each file testing one feature of the language. Each test consisted of some output, plus a description of what you should see if the test had passed. Problems experienced with this approach included:

    • It was tedious to run each test one by one,

    • It was concentrated work deciding if a test had passed,

• Adding tests involved authoring a complete file for each test.

5. The XForms 2.0 test suite

The problems experienced with the earlier versions of the test suite led us to approach the test suite for the new version of XForms in a totally different way.

Firstly, the test suite is now One Big XForm that loads the details of the tests into an XForms instance, and uses that to drive the test suite interface, running the tests as sub-forms. The tests are divided over the chapters of the specification, and each chapter contains a test case for each feature, each test case containing any number of tests for that feature.

The interface allows you to step through the chapters, or to select a particular chapter, and for each chapter to step through the tests, or select a particular test.

Secondly, as far as possible, the tests are all self-testing: apart from a description of what is being tested and the output of the results, tests include a big green PASS or a big red FAIL indication at the top of the output, signifying whether the output values match what was expected, with each individual failing test within that one test case also being flagged.

However, not all tests can be self-testing. Of course, for a language designed for user interaction, some tests have to be based on physical interaction: for instance, does clicking on a button generate the correct event? But once you have confirmed that it does, subsequent tests can generate the activation event without user intervention.


Similarly, some tests can only be confirmed by inspection: does the now() function indeed generate today's date and time? Once you have confirmed that, other tests that depend on the date and time can use that function without further inspection.

Some tests are particularly introspective. Are expressions evaluated in the correct evaluation context? Such tests are particularly hard to formulate, since the very act of introspection alters the context that they are being evaluated in.

    Finally, testing initialisation can be tricky, because before the system has initialised, there is little you can do.

6. The Generic Structure of the Tests

Despite the caveats above regarding tests that cannot be tested automatically, the vast majority can, and almost all use a standard template.

To demonstrate the workings of this template, let us consider an example test case for a function, in this case for the function boolean-from-string().

We want to test that a call such as boolean-from-string('true') returns the correct result.

To do this, each parameter value, such as true, is enclosed in a test element,

and we add attributes in which the required result, the actual result, and whether the test case passes or not will be stored:
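The resulting test element might look like this (the req, res and pass attributes are those described in the text; the exact layout is illustrative):

    <test req="true" res="" pass="">true</test>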

As many such test cases as necessary are then gathered together in an instance:
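A sketch of such an instance (the tests root element name is an assumption; the expected values follow the definition of boolean-from-string()):

    <instance>
       <tests xmlns="" pass="">
          <test req="true"  res="" pass="">true</test>
          <test req="true"  res="" pass="">TRUE</test>
          <test req="true"  res="" pass="">tRue</test>
          <test req="true"  res="" pass="">1</test>
          <test req="false" res="" pass="">false</test>
          <test req="false" res="" pass="">FALSE</test>
          <test req="false" res="" pass="">faLse</test>
          <test req="false" res="" pass="">0</test>
          <test req="false" res="" pass="">qwertyuiop</test>
          ...
       </tests>
    </instance>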

    A bind is then used to calculate the individual results:
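A minimal sketch of such a bind (here '..' refers to the enclosing test element, whose content is the parameter value):

    <bind ref="test/@res" calculate="boolean-from-string(..)"/>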

whose effect is to calculate the res attribute for all test elements.

Another bind, independent of which function is being tested, calculates whether the computed result matches the expected value:
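For example, using the if() function from the XForms function library (the 'yes'/'no' encoding of pass is illustrative apart from the value yes, which is used below):

    <bind ref="test/@pass" calculate="if(../@res = ../@req, 'yes', 'no')"/>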

and finally a bind for the attribute on the outermost element records whether all tests have passed:
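A sketch of that bind, assuming the pass attribute sits on the tests root element:

    <bind ref="@pass" calculate="if(../test[@pass != 'yes'], 'no', 'yes')"/>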

which says that if there is a test element whose pass attribute does not have the value yes, then the test set fails, and otherwise it passes. We may in future also add a percentage pass value that counts the number of passed tests:
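Such a bind might look like this (the percentage attribute is hypothetical):

    <bind ref="@percentage"
          calculate="round(100 * count(../test[@pass = 'yes']) div count(../test))"/>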


With this structure, every test form has an identical set of controls, which output the name of the test, an optional description (which, following XForms rules, is only displayed if present in the instance), whether all tests have passed (for quick inspection), and the list of tests, each with an indication of what was expected when it has failed:
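A sketch of these controls (the markup is illustrative; the output bound to @req is the one discussed below):

    <output ref="@pass"><label>Result: </label></output>
    <repeat ref="test">
       <output ref="."/>
       <output ref="@pass"/>
       <!-- only shown when the test failed; see the note below -->
       <output ref="@req[. != ../@res]"/>
    </repeat>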

Note that the output bound to @req[. != ../@res] (as in the sketch above) only selects the req attribute if its value does not match that of the res attribute on the same element. If they match, the req attribute is not selected, and nothing is output.

This is what it looks like when run:

    Here is an example of a fail (and in this case with a description as well):


If we want to test a function with more than one parameter, we structure the test elements slightly differently, by enclosing each parameter in an element of its own; for instance for the compare() function, which has two parameters:
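For example, for a test of compare(apple, orange) (the child element names and the expected value are illustrative; compare() returns -1, 0 or 1):

    <test req="-1" res="" pass="">
       <a>apple</a>
       <b>orange</b>
    </test>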

    and then modify the bind that calculates the results:
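A sketch, using the child element names from the example above:

    <bind ref="test/@res" calculate="compare(../a, ../b)"/>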

Note that in the general template structure, these are the only places where there are differences between test cases: in the data, and in the bind calculating the result. The rest remains identical.

7. Datatypes

To test datatypes, we want to do something similar. We can collect in the test elements a, possibly large, group of values of the datatype to be tested, and then see if the system thinks they are valid values for that type. For instance, for the date datatype:
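A sketch of such test data (recording the expected outcome as 'valid' or 'invalid' in req is an assumption; the remaining values are further edge cases around signs and extra year digits):

    <tests xmlns="" pass="">
       <test req="valid"   res="" pass="">2018-01-20</test>
       <test req="invalid" res="" pass="">2018/01/20</test>
       <!-- further edge cases: -2018-01-20, +2018-01-20, 02018-01-20, 12018-01-20, ... -->
    </tests>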

and so on, and then calculate the results with a bind:
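One way of writing this, assuming the test content is bound to the date type (with the xs prefix bound to XML Schema) and that valid() reports the validity of the node passed to it:

    <bind ref="test" type="xs:date"/>
    <bind ref="test/@res" calculate="if(valid(..), 'valid', 'invalid')"/>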


This would work fine, but for our purposes it has a difficulty. The valid() function is an XForms 2.0 addition, and we want as little as possible of the test suite infrastructure to depend on XForms 2.0 features: if anything is likely to fail, it is the new features of the language rather than the old ones; all the more so since many of the tests will still work with older versions of XForms and so can still be used on older implementations.

In XForms 1.1 (and later), if a control is bound to a value and its value changes, an event is dispatched to the control announcing its validity. The standard XForms response is to change the display of the value to make it clear that it is now newly valid or invalid, but we can catch the event and record that it has happened for that value. The only difficulty that we have to deal with is that the event is only generated when the value changes.

So what we do is initially set the test value to some arbitrary value (it doesn't matter what it is, nor whether it is a valid or invalid value for the datatype), and only once the system has initialised change all the values to the data we are actually interested in. Then, when the value changes and the event is generated, we catch it and save the result. Something along these lines:

We store the value we are actually interested in in an attribute on each test element, add an attribute to the root element to record whether the system has been initialised yet, and then use a bind to calculate the value of the test elements from these:
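Sketched together (the val attribute name is an assumption; the started attribute and the placeholder value 'xxx' are described below):

    <tests xmlns="" pass="" started="">
       <test req="valid" res="" pass="" val="2018-01-20"/>
       ...
    </tests>

    <bind ref="test" calculate="if(../@started = 'yes', @val, 'xxx')"/>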

This ensures that the test elements initially have the value 'xxx', until the started attribute is changed, which we do on initialisation by catching the xforms-ready event:
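For instance, with a handler in the model (the ev prefix is the XML Events namespace):

    <action ev:event="xforms-ready">
       <setvalue ref="@started">yes</setvalue>
    </action>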

Then, in the output section, we can catch the validity events and record them:
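A sketch of such controls with their handlers (the exact markup is illustrative):

    <repeat ref="test">
       <output ref=".">
          <action ev:event="xforms-valid">
             <setvalue ref="@res">valid</setvalue>
          </action>
          <action ev:event="xforms-invalid">
             <setvalue ref="@res">invalid</setvalue>
          </action>
       </output>
    </repeat>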

    Which might look like this:


8. Events

In the previous example, we saw some events that happen during processing: the xforms-ready event that is dispatched when the system has finished initialising, and the xforms-valid and -invalid events that are dispatched when a value bound to a control changes.

In fact, when such a value changes, several states are announced via events: whether the control is enabled or not, whether the value is optional or required, and whether the value is readonly or not, as well as the two we have already seen. The test to check that these events are sent correctly starts by assembling test values that are all zero:
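For example (the element names are illustrative):

    <tests xmlns="">
       <valid>0</valid>
       <enabled>0</enabled>
       <required>0</required>
       <readonly>0</readonly>
       ...
    </tests>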

and binding properties to the values, so that each positive property (such as valid) holds when the value is 1, and the opposite property (such as invalid) holds when the value is 0:
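A sketch of such binds, using the illustrative element names above (which property is treated as the positive one here is also illustrative):

    <bind ref="valid"    constraint=". = 1"/>
    <bind ref="enabled"  relevant=". = 1"/>
    <bind ref="required" required=". = 1"/>
    <bind ref="readonly" readonly=". = 1"/>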

When initialisation is finished, all the test values are flipped to 1:
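For instance, with a handler on the model (again using the illustrative element names from above):

    <action ev:event="xforms-ready">
       <setvalue ref="valid">1</setvalue>
       <setvalue ref="enabled">1</setvalue>
       <setvalue ref="required">1</setvalue>
       <setvalue ref="readonly">1</setvalue>
    </action>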

which causes all the properties to flip. The resultant events are then caught in the output section:

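A sketch of one such control and its handlers (the res attributes here are assumptions; note the use of concat(), discussed next):

    <output ref="valid">
       <action ev:event="xforms-valid">
          <setvalue ref="@res" value="concat(@res, ' xforms-valid')"/>
       </action>
       <action ev:event="xforms-invalid">
          <setvalue ref="@res" value="concat(@res, ' xforms-invalid')"/>
       </action>
    </output>
    <!-- similar outputs and handlers for the enabled, required and readonly values -->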

    Note that by concatenating the result, we catch the case where the event is (incorrectly) sent more than once.

    Here is an example of the output:

9. Actions

Actions often cause a change to the data. A good example of such an action is insert, which inserts new elements or attributes into a data structure.

After an insert, the system has to restore stasis, and goes through a number of steps to do that:

• rebuild: possibly updating internal data structures,

• recalculate: recalculating dependent values,

• revalidate: checking changed values for validity,

• refresh: updating the user interface.

At each step of restoring stasis an event is dispatched. Although seldom needed, these events allow applications to do extra steps if necessary.


Doing extra processing during these stages requires care because, by definition, the system is not yet up to date. In particular, changing values during these stages necessitates manually doing an extra recalculate afterwards, since the system may not be aware of the changes you have made. It also means, for instance, that we can't keep a count of how many events have been received and use that to index into a list of the events received.

So what we do is start off with the test instance containing elements for the expected events, except the very last (refresh):
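For example, using the attributes from the generic template (the exact markup is illustrative):

    <tests xmlns="" pass="">
       <test req="insert"      res="" pass=""/>
       <test req="rebuild"     res="" pass=""/>
       <test req="recalculate" res="" pass=""/>
       <test req="revalidate"  res="" pass=""/>
    </tests>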

then on xforms-ready we use insert to add the missing element. With no further parameters, an insert on a list just duplicates the last element, so we also need to update the req attribute:
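Something along these lines (an insert with just a node reference duplicates the last node of the list and inserts the copy after it):

    <action ev:event="xforms-ready">
       <insert ref="test"/>
       <setvalue ref="test[last()]/@req">refresh</setvalue>
    </action>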

Then we catch all the events we are expecting, and store them at the locations they should be in if the events come in in the right order:
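A sketch of such handlers on the model (the XPath expressions are illustrative; each one only fills in res when the preceding event has already been recorded):

    <action ev:event="xforms-insert">
       <setvalue ref="test[1]/@res[. = '']">insert</setvalue>
    </action>
    <action ev:event="xforms-rebuild">
       <setvalue ref="test[@res = 'insert']/following-sibling::test[1]/@res[. = '']">rebuild</setvalue>
    </action>
    <action ev:event="xforms-recalculate">
       <setvalue ref="test[@res = 'rebuild']/following-sibling::test[1]/@res[. = '']">recalculate</setvalue>
    </action>
    <action ev:event="xforms-revalidate">
       <setvalue ref="test[@res = 'recalculate']/following-sibling::test[1]/@res[. = '']">revalidate</setvalue>
    </action>
    <action ev:event="xforms-refresh">
       <setvalue ref="test[@res = 'revalidate']/following-sibling::test[1]/@res[. = '']">refresh</setvalue>
    </action>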

A setvalue such as the one for recalculate above selects the test element after the one whose res attribute is rebuild, and sets the value of its res attribute only if it has not already been set. Therefore, if the rebuild event has not yet been received, we won't record the recalculate.

10. Initialisation

The big obstacle to testing initialisation is that during initialisation almost nothing is available: you are unable to use instance values or calculations. The original 1.* test suite just displayed a dialogue box to announce that the initialisation events had been received:
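For example, with a message attached to the model:

    <message ev:event="xforms-model-construct">xforms-model-construct received</message>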

However, in the version 2 test suite, we have succeeded in finding a way to record that the event happened in an instance value, so that the test can self-check.

When we receive the initialisation event, all we do is dispatch a new event with a delay long enough to allow initialisation to complete (1000 milliseconds is more than enough):
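A sketch of this; the event name construct-seen and the model id m are hypothetical:

    <action ev:event="xforms-model-construct">
       <dispatch name="construct-seen" targetid="m" delay="1000"/>
    </action>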


When the new event is caught, initialisation will have finished, and we can then record that the initial event was seen:
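For example, continuing the sketch above (the ref expression and the test element it assumes are illustrative):

    <action ev:event="construct-seen">
       <setvalue ref="test[@req = 'xforms-model-construct']/@res">xforms-model-construct</setvalue>
    </action>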

11. Conclusion

We wanted to create a test suite that was easy to use, easy to create tests for, and easy to update, and we are very happy with the results so far. This is still work in progress; we estimate that about half the tests have been written to date. However, we already have good experience with the suite, and its easy modifiability helps us with constructing it.

Future work, apart from writing the remaining tests, will look at the possibilities of running the tests automatically, with minimum human intervention.

Bibliography

[1] XForms 1.1. John M. Boyer. W3C. 2009. https://www.w3.org/TR/xforms/

[2] XForms 2.0. Erik Bruchez et al. W3C. 2018. https://www.w3.org/community/xformsusers/wiki/XForms_2.0

[3] "Chapter 5: Representational State Transfer (REST)". Architectural Styles and the Design of Network-based Software Architectures (Ph.D. dissertation). Roy Thomas Fielding. University of California, Irvine. 2000. https://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm

[4] XForms: an Introduction. Steven Pemberton. CWI, Amsterdam. 2018. https://www.cwi.nl/~steven/xforms/

[5] XForms: an Unusual Worked Example. Steven Pemberton. CWI, Amsterdam. 2015. https://www.cwi.nl/~steven/Talks/2015/11-05-example/

[6] XForms Test Suites. Steven Pemberton. W3C. 2008. https://www.w3.org/MarkUp/Forms/Test/

[7] XForms 2.0 Test Suite. Steven Pemberton. CWI, Amsterdam. 2018. https://www.cwi.nl/~steven/forms/TestSuite/index.xhtml
