ICT Planning Meeting, Santiago 1
Software Phase V Testing and Improvements to Test Procedures
S. Corder and L.-A. Nyman
April 18, 2013
Software Phase V Testing - Side effects on CSV/DSO
• The situation now is that we are doing continuous testing on various incremental releases, of which only the last goes into the release candidate (on-line and off-line subsystems)
• Testing is a huge staff effort:
– Online features often require assessment of the full system
– Incremental releases result in effectively non-stop testing of new offline features
– Monthly delivery of online features with only 3 weeks for acceptance testing/verification is inconsistent with realistic scenarios: if CSV spends 2-3 weeks verifying the new system, which is realistic, then on that timescale critical resources (Sawada/Barkats/Kamazaki) spend no time doing anything else
– The line between testing new features and testing new capabilities is often lost: features enable capabilities, but they do not directly provide them
• The current testing/feature load is not maintainable. Possible mitigations:
– More testing before Phase V
– Fewer/smaller new features
– Less frequent releases
Testing Improvements / Optimizations
• On-line system and regression testing:
– Obsmode testing/regression: basic observing modes on sources of known structure, producing data to be used in pipeline/reduction testing
– Testing prepares a base set for the next cycle; regression covers the base set from the previous cycle
– The Pipeline is used to reduce the data. Some metrics for data verification will eventually migrate to the Pipeline (acknowledged to be throw-away code for now)
– Software regressions:
• Basic: a few high-level tests with clear pass/fail criteria and automated evaluation tools. Not done in the pipeline for the short term; tools/metrics are in the works that can be passed to the pipeline
• Intensive: cross-check of the most common failure modes in new feature deliveries, with similar automatic evaluation, to be used with new releases. Pipeline reduction in all possible cases
– Performance: outside the scope of this discussion
– Details on Obsmode and software regression are presented later
• What is ICT going to do to make sure that delivered software is less likely to fail before these points?
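The "clear pass/fail criteria and automated evaluation tools" described above can be illustrated with a minimal sketch: compare metrics measured from reduced data against reference values for a source of known structure, within per-metric tolerances. All metric names, reference values, and thresholds below are hypothetical illustrations, not the actual ALMA tooling.

```python
# Minimal sketch of an automated regression check with clear pass/fail
# criteria. Metric names, reference values, and tolerances are
# hypothetical, chosen only to illustrate the idea.

# Reference metrics for a source of known structure (hypothetical values).
REFERENCE = {"peak_flux_jy": 1.25, "rms_noise_jy": 0.002, "beam_maj_arcsec": 0.85}

# Per-metric relative tolerance for declaring a pass.
TOLERANCE = {"peak_flux_jy": 0.05, "rms_noise_jy": 0.20, "beam_maj_arcsec": 0.10}

def evaluate(measured):
    """Compare measured metrics to the reference; return (passed, report)."""
    report = []
    passed = True
    for name, ref in REFERENCE.items():
        got = measured.get(name)
        if got is None:
            passed = False
            report.append(f"FAIL {name}: metric missing")
            continue
        rel_err = abs(got - ref) / abs(ref)
        ok = rel_err <= TOLERANCE[name]
        passed = passed and ok
        report.append(f"{'PASS' if ok else 'FAIL'} {name}: "
                      f"got {got}, ref {ref}, rel. error {rel_err:.3f}")
    return passed, report

if __name__ == "__main__":
    passed, report = evaluate({"peak_flux_jy": 1.27,
                               "rms_noise_jy": 0.0021,
                               "beam_maj_arcsec": 0.84})
    print("OVERALL:", "PASS" if passed else "FAIL")
    for line in report:
        print(line)
```

The point of the sketch is the shape of the tool, not the numbers: a fixed reference set, explicit tolerances, and an unambiguous overall verdict that a human or a CI job can act on without re-inspecting the data.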
Low-level changes with large impact – features that may destabilize the entire system
• 32 => 64 bit systems
• Multiple concurrent arrays on the baseline correlator
• Scalability of data capturer, bulk data system, etc.
• Correlator software scalability (data rates)
• Coordinated subarrays (SD, 7m separate observations but with common calibration)
• Are we missing any?
• How can we schedule these and test in a way that mitigates impact on science time?
Testing Improvements / Optimizations
• Off-line system (including PT, Ph1M, SLT, AQUA)
• Regression testing:
– Basic: a few high-level tests with clear pass/fail criteria and automated evaluation tools (many more details to follow)
• Setting up a useful test environment: Using a commissioning Archive?