The Proceedings of the 2011 9th International Conference on Reliability, Maintainability and Safety (ICRMS 2011), Guiyang, China, 12-15 June 2011




Engineering Safety Information in Software Intensive Systems

Baiqiang Xia School of Reliability and System Engineering

Beihang University Beijing, China

[email protected]

Deming Zhong School of Reliability and System Engineering

Beihang University Beijing, China

[email protected]

Abstract—Safety is crucial for software intensive systems. Safety-related accidents have resulted in great losses of human life, systems, the environment, and missions. However, the safety concept is still far from being correctly understood and adequately engineered. Safety is more a social problem than a technical one: it addresses social risks rather than solely technical hazards. In engineering practice, most safety-related work is done under traditional reliability assumptions and approaches. This mismatch results in incomplete and inconsistent information about system risks, which accounts for more system accidents than implementation errors do. This paper investigates the overall properties of such systems, including the underlying strategy of man-made system design, the basic characteristics of software intensive systems, and component-based safety-driven system engineering for them. It provides a broader view of system safety, one that also gives specific consideration to mission safety in order to assure completeness and consistency in engineering. By injecting component-based ideas and approaches into safety-driven system engineering, a new approach is designed for engineering safety information in software intensive systems. Finally, a case study shows the process and attributes of the approach.

Keywords—Requirement engineering; safety engineering; software intensive system

I. INTRODUCTION

Software intensive systems have been introduced into many application domains, such as aircraft, nuclear plants, and medical devices. These systems differ significantly from systems with less demanding real-time requirements and lower life-cycle complexity. They are usually extremely complex, highly automated, high-tech, safety-critical, and mission-critical. These characteristics create great difficulties for system engineers in developing safe systems, and for system users in operating systems safely.

Accidents in these systems can result in great losses of human life, systems, the environment, and missions. Safety has therefore been perceived as a crucial property of software intensive systems. However, not only does most safety-related work fail to receive appropriate priority in system engineering, but the nature of system safety is also still far from being understood and addressed by system engineers. Safety-related issues are commonly engineered with traditional reliability engineering approaches (such as FTA and FMEA) under the underlying reliability assumption that component failures, rather than inadequate control constraints and control actions, are the cause of accidents. Traditional reliability engineering cannot address safety efficiently and economically. Safety information for software intensive systems is usually neither completely developed nor consistently engineered. Accidents arising from inadequate safety engineering are more common than those arising from component failures.

It is common for accidents to take place while the system is operating exactly as specified. Accidents are more likely to arise from dysfunctional component interactions, human-machine mode confusion, and environmental disturbances than from component failures. Incompleteness and inconsistency of design information account for more system accidents than incorrect implementation does. Statistics show that nearly 70% of fatal accidents involve incomplete or ambiguous design information about component interactions [2]. The early conceptual stages of the system life cycle (requirements abstraction and conceptual design) should therefore receive more emphasis, so that system safety can be addressed more effectively and economically.

Based on the discussion above, two key issues can be summarized for today's software intensive system safety engineering. First, a better definition of safety is needed to achieve completeness and consistency in safety engineering. Second, approaches are needed that help keep safety information complete and consistent during system design, especially in the conceptual stages. This paper explores the nature of system safety and the basic characteristics of software intensive systems, clarifies the essential aspects of safety engineering work, and provides a safety-driven component-based system engineering approach to address the incompleteness and inconsistency of safety information in the early conceptual stages of the system engineering process.

978-1-61284-666-8/11/$26.00 ©2011 IEEE


II. SOFTWARE INTENSIVE SYSTEMS SAFETY ENGINEERING

A. Two types of system engineering

Systems are essentially tools that satisfy human purposes. At the top level of system design, all system requirements are subjective. As system design proceeds, objective information is added gradually, until the system is finally implemented and has become totally objective. The static and dynamic properties of a system that can satisfy human needs are called system functions. Indeed, it would be more accurate to say that what we need are the functions rather than the systems themselves: engineers build functions by building systems.

Systems are the embodiment of functions. Every objective system has a huge set of properties. Some of them are useful and needed by humans, while others may have little effect, or even a harmful effect, on human purposes. In the system development process, system engineers should have an overall strategy for dealing with these different categories of system properties, as shown in Figure 1.

Figure 1. Overall strategy toward system properties

In practice, this overall strategy has been customized according to differences in system characteristics, which results in two types of system engineering.

For traditional mechanical systems, system engineering is mainly concerned with the first category of system properties. Mechanical systems usually do not result in unacceptable accidents. Even when a mechanical system can run into catastrophic incidents, the hazards can usually be controlled simply by adding physical protection to the system design, since these systems are visible and relatively low in complexity. This kind of system engineering is reliability-driven system engineering, which aims at the best functionality of the system. Little consideration is given to the second category, as its negative effects are either insignificant or easy to address.

In software intensive systems, the possible harmful effects of the system are no longer negligible. The majority of system functions are realized by invisible software components, and these systems have much higher real-time complexity than traditional mechanical systems. It is particularly difficult to identify and control the hazardous properties of these systems, which can result in unacceptable losses of human life, systems, the environment, and missions. In these systems, the harmful properties should be addressed with top priority, and safety-driven system engineering, which aims at acceptable risk, should be applied.

Reliability-driven system engineering treats systems as technical systems. It pays attention mainly to the technical aspects of the physical system, and its efforts are focused on the technical realization and maintenance of system functions. In contrast, safety-driven system engineering regards systems as social systems realized in technical ways. It considers social risks (or consequences) the top requirement of a system: all issues that could result in serious social consequences should be included with high priority in the system engineering process. That is to say, safety-driven system engineering should consider not only the immediate losses of the physical system, such as losses of human life, system, and environment, but also mission losses as an essential part, since mission losses can generate significant consequences for society, the economy, and future scientific research.

B. The characteristics of software intensive systems

No consensus has been reached on the definition of so-called software intensive systems. However, several names for this kind of system are popularly used in current engineering practice. Some typical ones are listed in Table 1.

TABLE I. TYPICAL NAMES FOR SOFTWARE INTENSIVE SYSTEMS

Number  Name
1.      Safety-critical system
2.      Mission-critical system
3.      Software intensive system
4.      Complex embedded system
5.      Real-time dynamic system
6.      Real-time embedded control system

Names are concise summaries of system characteristics, and these names reflect the best current perception of software intensive systems in engineering practice. By analyzing these names comprehensively, we can extract a basic set of characteristics of software intensive systems, such as mission-critical, safety-critical, software intensive, complex, embedded, real-time dynamic, and control-critical, which together distinguish software intensive systems from traditional mechanical ones.

Based on the analysis above, the characteristics of software intensive systems can be summarized in the following four aspects.

• Mission-Critical

Nobody would accept the great economic, technical, and safety challenges of developing a software intensive system for an ordinary mission; a software intensive system must first be a mission-critical system. The success of the mission not only immediately affects humans, the system, and the environment, but also has extensive long-run influence on related scientific research, the economy, and society. Take the Ariane 5 mishap as an example: it not only resulted in the immediate loss of the expensive launcher and four satellites, but also forced the European Space Agency to suspend all of its space mission plans for more than a year. Thus, safety-driven engineering should consider not only possible losses of human life, system, and environment; mission safety should also be included with high priority. If a system cannot provide the functions necessary to complete its mission, all the considerations and efforts devoted to humans, the system, and the environment are wasted. From this point of view, any issue related to mission success should be thoroughly addressed, and the mission process should be carefully examined in system design.

• High Complexity

Dynamic interactions among system components (including humans) are huge in number and rapid in rate. Compared with mechanical systems, the complexity of software intensive systems is much higher, which places much greater demands on the controller's cognitive and reactive abilities. The structural design and time sequences of these systems are particularly complex, and real-time performance is usually strictly defined and required. It is difficult for controllers to thoroughly understand the static and dynamic status of these systems and thus to control system operation manually in real time. The high complexity can also be perceived from the fact that software intensive systems usually deal with huge amounts of rapidly changing information. From this point of view, system design information must be deterministic and complete enough to avoid real-time control problems. Approaches that structure design information are needed to lower system complexity and to assure the completeness and consistency of design information.

• Highly Automatic

It is beyond human cognitive and reactive ability to control the behavior of these systems in real time; automatic tools must be applied to assist in their real-time control. Two types of tools are applied. The first is the system cognition tool, which assists in the identification and understanding of system structure and status to compensate for the limits of human cognitive ability, such as sensors. The second is the system operation tool, which assists in real-time decision and control execution to compensate for the limits of human reactive ability, such as the CDHC. Together these tools form the automatic controller of the system, which interacts intensively with the human controllers during system operation. From this point of view, the possible mismatches among human, system, and the actual scenario should be carefully examined.

Figure 2. Control information flow in system

As shown in Figure 2, there are three basic control routes in the system. The dashed arrow indicates that the human controller issues control instructions to the real scenario indirectly through the automatic controller. Three basic types of mismatch can arise accordingly. First, there can be mismatches between the human and automatic controllers; for example, the computer has changed the aircraft from mode A to mode B while the pilot still believes it is in mode A and issues commands inappropriate for the actual mode. Second, there can be mismatches between the automatic controller and the actual scenario; for example, the software does not know that the plane is on the ground and raises the landing gear. Third, there can be mismatches between the human controller and the real scenario; for example, the pilot does not identify an object as friendly and shoots a missile at it.

Moreover, these mismatches can be further detailed into four sub-types, as follows [1].

1) A required control action is not provided.

2) An incorrect or unsafe control action is provided.

3) A potentially correct or adequate control action is provided too late (at the wrong time).

4) A correct control action is stopped too soon.
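The four sub-types above can be applied systematically: every control action in the design is paired with each sub-type, yielding a worksheet of candidate unsafe control actions for analysts to assess. The following sketch illustrates this pairing; the example control-action names are illustrative assumptions, not items from the paper.

```python
# The four mismatch sub-types of control actions, as listed in the text.
MISMATCH_SUBTYPES = (
    "not provided",
    "incorrect or unsafe",
    "provided too late (at the wrong time)",
    "stopped too soon",
)

def unsafe_control_action_worksheet(control_actions):
    """Cross every control action with the four mismatch sub-types,
    producing one worksheet row per combination for analyst review."""
    return [
        {"action": action, "subtype": subtype, "hazardous": None}  # analyst fills in
        for action in control_actions
        for subtype in MISMATCH_SUBTYPES
    ]

# Hypothetical control actions for illustration only.
rows = unsafe_control_action_worksheet(["raise landing gear", "change flight mode"])
```

Each row is then judged hazardous or not against the system's hazard list, so no control action escapes examination under any of the four sub-types.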

• High Risk

According to risk management theory, there are typically three types of risk: economic risk, process risk, and product risk. This taxonomy also applies to software intensive systems. First, software intensive systems are usually very costly to design and operate; possible failure of system development, or loss of the system during the mission, can generate high monetary risk. Second, this kind of system usually requires high-quality materials, advanced techniques, and high-standard management and operation to cope with its complexity and real-time characteristics. Even a tiny flaw in materials, techniques, management, or operation can ultimately result in system catastrophe, and it is extremely difficult to satisfy all of these process requirements with limited engineering resources. Third, a software intensive system usually deals with toxic, contaminative, corrosive, or explosive materials, and experiences extremely high or low temperature, high pressure, high speed, high altitude, high acceleration, high stress, high noise, intense radiation or light, or vacuum in its operation. Both the intermediate and the final products thus involve high product risk. Product risk can be taken as the result of economic and process risk, and it can generate unacceptable consequences directly.

A predefined checklist can be developed here for preliminary hazard identification (PHI), as shown in Table 2. Functional failure is also identified as a basic item, to cover unacceptable mission failures generated by component failures.

TABLE II. PREDEFINED PHI CHECKLIST

1. Toxic materials          2. Contaminative materials
3. Corrosive materials      4. Explosive materials
5. High/low temperature     6. High pressure
7. High speed               8. High altitude
9. High acceleration        10. High stress
11. High noise              12. High radiation/light
13. Vacuum                  14. Functional failure
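Checklist-driven PHI can be sketched as a simple matching step: the predefined items of Table 2 are compared against the characteristics recorded for a planned system, and every match becomes a candidate preliminary hazard. The item names below follow Table 2; the flat-list format of the system description is an assumption for illustration.

```python
# The fourteen predefined PHI items of Table 2.
PHI_CHECKLIST = [
    "toxic materials", "contaminative materials", "corrosive materials",
    "explosive materials", "high/low temperature", "high pressure",
    "high speed", "high altitude", "high acceleration", "high stress",
    "high noise", "high radiation/light", "vacuum", "functional failure",
]

def preliminary_hazard_identification(system_characteristics):
    """Return the checklist items present among the recorded
    characteristics of the planned system, in checklist order."""
    present = {c.lower() for c in system_characteristics}
    return [item for item in PHI_CHECKLIST if item in present]

# Characteristics of a hypothetical spacecraft, for illustration.
spacecraft = ["high/low temperature", "high altitude", "high speed",
              "high stress", "functional failure"]
matched = preliminary_hazard_identification(spacecraft)
```

Each matched item is then elaborated into concrete hazard scenarios, as done for the example spacecraft in Section IV.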

C. Brief summary

Based on the analysis in this chapter, a brief summary can be made. In system engineering, not only should the useful properties of these systems be carefully exploited, but their harmful properties should also be carefully addressed. Software intensive systems are indeed high-consequence systems: they are mission-critical, highly complex, highly automatic, and high-risk, and the definition of this kind of system is more social than technical. System engineering for these systems should aim at lowering system complexity, improving automatic performance, enhancing mission success, and engineering system risks to an acceptable level. Traditional reliability-driven system engineering focuses on the technical aspects of a system and cannot address the harmful properties of complex systems effectively. Safety-driven system engineering, which aims at acceptable risk to mission, system, human, and environment, is applicable in this domain.

The next chapter presents an integrated methodology that injects advanced software development ideas and approaches into safety-driven system engineering to meet this set of system goals.

III. AN INTEGRATED METHODOLOGY OF SAFETY-DRIVEN SYSTEM ENGINEERING

A. Component-based system design

Object-oriented techniques have achieved great success in the software engineering domain. The object-oriented character of these techniques significantly reduces the cognitive burden on software engineers. With object-oriented techniques, software engineers can design software systems in ways similar to hardware systems: the system is decomposed into physical blocks with clear functions and interfaces. Detailed software elements are modeled as clearly defined software objects, so the complexity and ambiguity of the software system are significantly reduced.

Component-based techniques inherit the merits of object-oriented techniques and advance them to a new level, leading a new trend in software system engineering. They decompose the whole system into functional components and model them as clearly defined components. Component-based techniques are effective in defining clear component functions and interfaces. Furthermore, they support component reuse in system engineering, allowing systems to be developed at lower expense.

Component-based system design involves functional decomposition of the system into components, as shown in Figure 3. Functional decomposition allocates and encapsulates specific system-level functions into specific components. The components are then reconstructed into subsystems and the whole system [1].
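A minimal sketch of this decomposition idea follows: system-level functions are allocated to named components with explicit visible interfaces, and the system is reconstructed as a composition of those components. The dataclass layout and all component names here are illustrative assumptions, not a notation from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    functions: list    # system-level functions allocated to this component
    interfaces: list   # visible interface modes toward other components

@dataclass
class System:
    name: str
    components: list = field(default_factory=list)

    def allocated_functions(self):
        """All system-level functions realized by the components,
        used to check that decomposition covers the requirements."""
        return [f for c in self.components for f in c.functions]

# Hypothetical components, for illustration only.
camera = Component("Payload Camera", ["collect images"],
                   ["command/data link to controller"])
controller = Component("Onboard Controller",
                       ["manage tasks", "communicate with ground"],
                       ["uplink/downlink", "command link to payload"])
sat = System("Example spacecraft", [camera, controller])
```

Checking the union of allocated functions against the high-level functional requirements is one way such a model helps expose incompleteness before implementation.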

Figure 3. System decomposition example

For software intensive systems, errors in hardware manufacturing and software implementation account for few system accidents. Most accidents result from nondeterministic component functions and ambiguous interfaces caused by incomplete or inconsistent specification of requirements. Component-based techniques provide a clear picture of the system structure and thus reduce potential incompleteness and inconsistency in requirements.

B. Safety-driven system engineering

Two fundamental top-level requirements for software intensive systems can be generated from Figure 1: (1) the useful properties of the system must be effectively activated, and (2) the harmful properties must be adequately constrained. That is to say, all system properties should be controlled by the system design according to their effects. Correspondingly, there can be two types of control flaw in system design: (1) useful properties of the system are not effectively activated, so that functions necessary for the mission are incompletely or incorrectly designed; and (2) harmful properties are not adequately constrained, so that hazards are exposed to humans, the system, and the environment. Although these two types of control flaw are interdependent,


they together cover all the aspects that safety engineering is concerned with at the system level, namely loss of mission, human, system, and environment. Completeness of safety-related information is achieved at this level.

With these two types of control flaw, a preliminary hazard analysis can be triggered for a planned system. Any scenario that could result in unacceptable loss of mission, human, system, or environment is identified as a preliminary hazard. These system-level hazards are then derived into system-level safety requirements. As system development proceeds and design decisions are continuously made, the system-level safety requirements are revised synchronously with the newly available design information.
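The derivation and synchronous revision of safety requirements can be sketched as two small steps: each preliminary hazard becomes a system-level constraint, and each new design decision refines the constraints it affects. The "shall prevent" phrasing and the refinement format are simple illustrative conventions, not ones mandated by the paper.

```python
def derive_safety_requirements(hazards):
    """Turn each preliminary hazard into a numbered system-level
    safety constraint."""
    return [f"SR.{i}: The system shall prevent the scenario: {h}"
            for i, h in enumerate(hazards, start=1)]

def refine(requirements, design_decision, affected_id, refinement):
    """Revise the affected requirement synchronously with a new
    design decision, leaving the others unchanged."""
    return [f"{r} [refined under '{design_decision}': {refinement}]"
            if r.startswith(affected_id) else r
            for r in requirements]

# Hypothetical hazards and design decision, for illustration.
reqs = derive_safety_requirements(
    ["spacecraft re-enters the atmosphere and crashes",
     "client cannot obtain the desired images"])
reqs = refine(reqs, "add reaction control system", "SR.1",
              "RCS must maintain orbital attitude")
```

Keeping the hazard-to-requirement trace explicit in this way is what lets the safety information stay consistent as the design evolves.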

C. An integrated methodology for safety engineering

As described in the preceding chapters, software intensive systems are mission-critical, highly complex, highly automatic, and high-risk. Accidents resulting from incomplete or inconsistent requirements are more common than those from erroneous system implementation. Component-based system design allocates and encapsulates specific system-level functions into specific components, so system uncertainties caused by nondeterministic component functions and ambiguous interface modes can be greatly reduced. Safety-driven system engineering addresses possible losses of mission, human, system, and environment within the system engineering process. With all these aspects considered, better completeness and consistency of safety information is achieved.

Figure 4. Safety-driven component-based system engineering processes

Figure 4 shows the general process of safety-driven component-based system engineering. At the beginning of system design, the system purpose and plan are defined by project managers, and basic information about the mission, system, humans, and environment becomes available. Through task analysis, the mission processes are identified for the first time.

A PHI (preliminary hazard identification) process is then carried out. Based on the acquired information, the predefined PHI checklist (Table 2) is used to examine whether losses of mission, system, human, or environment could happen during the mission process. If so, the scenario is identified as a hazard for further investigation. For example, the high-altitude character of an airplane could possibly result in the system crashing, so a system crash is identified as a hazard. In this way, a preliminary hazard list customized to the specific system can be produced.

Component-based system design then proceeds with system functional decomposition and subsystem/system construction. In this period, high-level functional requirements are decomposed and allocated to specific components, including human components. The visible functions and interface modes of components are defined, although internal details are still unavailable. With the functional system structure defined, the control structure of the system is also defined. An SHA (system hazard analysis) process is then carried out to check how the mismatches (and their sub-types) could activate, within the system control structure, the hazards identified in the PHI process.
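The SHA step can be sketched as an enumeration: for each control link in the functional control structure, every mismatch type is checked against the hazards found in PHI, producing candidate scenarios for further investigation. The link and hazard names below are illustrative assumptions.

```python
# The three basic mismatch types between controllers and the scenario.
MISMATCH_TYPES = ("human vs. automatic controller",
                  "automatic controller vs. actual scenario",
                  "human controller vs. actual scenario")

def system_hazard_analysis(control_links, hazards):
    """Enumerate every (control link, mismatch type, hazard) combination
    as a scenario to examine in the system control structure."""
    return [(link, mismatch, hazard)
            for link in control_links
            for mismatch in MISMATCH_TYPES
            for hazard in hazards]

# One hypothetical control link and one PHI hazard, for illustration.
scenarios = system_hazard_analysis(
    ["Ground Station -> Onboard Controller"],
    ["client cannot obtain the desired images"])
```

The exhaustive cross-product is the point: it guarantees that no control link escapes examination under any mismatch type, which is how completeness of the hazard information is pursued at this stage.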

In the later steps of the component-based design process, component requirements are allocated and implemented in software and hardware, and the system safety process proceeds accordingly through further safety analysis and mitigation implementation. Note that as system information increases during the design process, the safety analysis should be refined synchronously.

By integrating safety-driven system engineering and component-based design into a single methodology, safety-driven component-based system engineering gives direct consideration to the overall characteristics of software intensive systems and can help enhance the effectiveness and efficiency of system engineering. The next chapter presents a case study to show its process and effect.

IV. CASE STUDY

The functional goal of the example spacecraft is to collect, and deliver to the Client, images of selected sites on Earth at specified times and resolutions in order to monitor deforestation. A low-Earth-orbiting spacecraft carrying a camera as payload obtains the pictures.

When the Client requires pictures, the ground segment generates a plan that specifies the location (longitude, latitude), time, and resolution desired for each picture. The plan is then sent from the ground to the onboard system. On receipt of the plan, the onboard system sets up the camera and repositions the spacecraft accordingly. Once the images are taken, the data is processed and briefly stored on board; it is then downlinked to the operators on Earth when the spacecraft comes into view of the Ground Station. Once the receipt confirmation message sent by the ground segment is received by the onboard system, the data can be cleared. The onboard system also points the satellite in the appropriate direction for data transmission and reception. In addition, a human controller in the ground segment can manually operate the spacecraft when needed, based on housekeeping information from the onboard system.
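The operational sequence described above (plan uplink, image capture, onboard storage, downlink, receipt confirmation, data clearing) can be sketched as a small state machine. The state and event names are an illustrative reading of the description, not identifiers from the actual design.

```python
# Onboard data-handling cycle as (state, event) -> next-state transitions.
TRANSITIONS = {
    ("idle", "plan_received"): "imaging",
    ("imaging", "images_taken"): "stored_onboard",
    ("stored_onboard", "ground_station_in_view"): "downlinking",
    ("downlinking", "receipt_confirmed"): "clearing",
    ("clearing", "data_cleared"): "idle",
}

def run(events, state="idle"):
    """Drive the onboard data-handling cycle through a sequence of
    events; out-of-order events leave the state unchanged."""
    for event in events:
        state = TRANSITIONS.get((state, event), state)
    return state

final = run(["plan_received", "images_taken", "ground_station_in_view",
             "receipt_confirmed", "data_cleared"])
```

Writing the sequence down this deterministically is exactly the kind of structural design information the High Complexity discussion calls for: events arriving in a hazardous order (e.g. clearing data before confirmation) simply have no transition.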

The focus in this paper is on the design of the automatic Onboard Controller, the Camera Payload, and some aspects of the Ground Segment, which is simplified to include only an Operation Center and a Ground Station with a single Controller. Moreover, the camera is the only payload onboard the spacecraft. The following subsections show the safety-driven component-based system engineering steps for this mission system.

Step 1. Develop system functional requirement

According to the description above, the system boundary and function process can be depicted as in Figure 5.

The overall system-level requirement is thus stated as follows:

The system shall be able to collect, and deliver to the Client, images of selected sites on Earth at the times and resolutions specified in the Client's request. It shall also avoid possible losses of mission, system, human, and environment in its operation.

Figure 5. System boundary and function process

Step 2. Preliminary Hazard Identification

The items listed in Table 2 are checked against the spacecraft to ensure completeness in hazard identification. The items high/low temperature, high altitude, high speed, high stress, and functional failure apply here.

The preliminary hazards identified for the example spacecraft are listed as follows.

PH.1: Spacecraft components are damaged by high/low temperature and temperature changes.

PH.2 : Spacecraft re-enters the atmosphere and crashes on the Earth from high altitude with high speed, resulting in possible losses of system, human and environment.

PH.3 : Spacecraft is damaged by the high stress involved in a collision with other objects in Earth orbit, such as another spacecraft.

PH.4 : Spacecraft fails to communicate with the Ground Station due to interference from other spacecraft.

PH.5 : Spacecraft adversely affects other spacecraft.

PH.6 : Spacecraft function fails such that desired images of the correct place, correct time and correct resolution cannot be obtained by the Client when needed.

PH.7 : Spacecraft adversely affects the Ground Station.

All of these hazards have the potential to cause unacceptable losses. For the further analysis in Step 4, we consider only PH.6, where the Client cannot obtain the desired images. The other hazards are analyzed in a similar manner.
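The preliminary hazard list above can be sketched as a small data structure that keeps the traceability back to the Table 2 checklist explicit. This is an illustrative sketch only; the record type, field names, and loss categories are assumptions drawn from the text, not part of the paper's method.

```python
from dataclasses import dataclass

# Hypothetical record for a preliminary hazard from Step 2. The
# checklist_item field ties each hazard back to the Table 2 entry
# (temperature, altitude, speed, stress, functional failure) that
# surfaced it, supporting the completeness check described above.
@dataclass
class PreliminaryHazard:
    hazard_id: str        # e.g. "PH.6"
    description: str
    checklist_item: str   # the Table 2 item that surfaced the hazard
    loss_types: list      # subset of {"mission", "system", "human", "environment"}

hazards = [
    PreliminaryHazard("PH.1", "Components damaged by temperature extremes",
                      "high/low temperature", ["system", "mission"]),
    PreliminaryHazard("PH.6", "Desired images cannot be obtained by Client",
                      "functional failure", ["mission"]),
]

# Select the hazard carried forward into the Step 4 analysis.
selected = [h for h in hazards if h.hazard_id == "PH.6"]
print(selected[0].description)
```

Keeping the checklist item on each record lets a reviewer verify that every applicable Table 2 category produced at least one hazard, which is the completeness argument this step relies on.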

Step 3. System decomposition and construction

The product of this step is shown in Figure 6. In this architecture, components are defined with allocated functions and assembled into the system. The Onboard CDHC (Command and Data Handling Computer) is the automatic controller of the spacecraft. It is in charge of communication with the Ground Station and of onboard task management. On a picture request, it issues commands to the RCS (Reaction Control System) to point the spacecraft in the right direction and instructs the Payload Camera to prepare for picture collection. Human Controllers manually control the spacecraft when necessary. A Plan Generator is introduced in the Operation Center to generate the plan. These components together form the architecture of the whole system.

Figure 6. System decomposition and construction
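The control structure implied by this architecture can be sketched as paired control and feedback channels. The link sets below are a reading of Figure 6 and the surrounding text (the names and the housekeeping feedback path are assumptions), and the closed-loop check is one illustrative consistency rule, not the paper's full mismatch taxonomy.

```python
# Control channels (controller -> controlled process), read from Figure 6.
control_links = {
    ("Operation Center", "Ground Station"),
    ("Ground Station", "Onboard CDHC"),
    ("Onboard CDHC", "RCS"),
    ("Onboard CDHC", "Payload Camera"),
    ("Human Controller", "Onboard CDHC"),
}

# Feedback channels (process -> controller); the last entry models the
# housekeeping information mentioned in the mission description.
feedback_links = {
    ("Ground Station", "Operation Center"),
    ("Onboard CDHC", "Ground Station"),
    ("RCS", "Onboard CDHC"),
    ("Payload Camera", "Onboard CDHC"),
    ("Onboard CDHC", "Human Controller"),
}

# Flag control channels that lack a matching feedback channel: each such
# gap is a candidate source of inadequate control, i.e. a system hazard.
missing = [(c, p) for (c, p) in control_links if (p, c) not in feedback_links]
print(missing)  # -> [] : every control loop in this sketch is closed
```

A check like this is mechanical, which is the point of making the control structure explicit before Step 4: open loops can be flagged before any hazard decomposition begins.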

Step 4. System Hazard Analysis

In this step, the possible mismatches and their subtypes are checked against the system control structure, which can be identified from Figure 6. System hazard information is developed down to component level based on the visible functions and interfaces of components. For PH.6, the analysis process is shown as follows.

PH.6__SH.1 : The Plan Generator generates a wrong plan.

PH.6__SH.2 : Image plans are not correctly transferred from the Plan Generator to the Ground Station.

PH.6__SH.3 : Image requests are not correctly transferred from the Ground Station to the Spacecraft.

PH.6__SH.4 : Spacecraft could not take the required images.

PH.6__SH.5 : Images are not transferred from Spacecraft to Ground Station.

PH.6__SH.6 : Images are not transferred from Ground Station to Client through Operation Center.

PH.6__SH.7 : The Human Controller issues a wrong control command during mission time.

These system hazards are further investigated in the FHA process down to component level. Taking PH.6__SH.4 and PH.6__SH.5 as examples, further analysis can be performed as follows.

PH.6__SH.4__1 : The Onboard CDHC could not issue appropriate instructions to the RCS.

PH.6__SH.4__2 : The Onboard CDHC could not issue appropriate instructions to the Payload Camera.

PH.6__SH.4__3 : The RCS could not point the spacecraft in the correct direction.

PH.6__SH.4__4 : The Payload Camera could not collect the desired picture.

PH.6__SH.5__1 : The CDHC could not command the related communication devices to transfer the collected picture to the Ground Station.

PH.6__SH.5__2 : The Ground Station could not receive the picture transferred from the communication devices of the CDHC.
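The refinement above, from preliminary hazard to system hazards to component-level hazards, can be encoded as a simple tree keyed by the PH__SH identifiers used in the text. This is a minimal sketch for illustration; only the branches analyzed above are populated.

```python
# Hazard refinement tree for PH.6, following the identifiers in Step 4.
# Each key is a hazard; each leaf list holds its component-level hazards.
hazard_tree = {
    "PH.6": {
        "PH.6__SH.4": ["PH.6__SH.4__1", "PH.6__SH.4__2",
                       "PH.6__SH.4__3", "PH.6__SH.4__4"],
        "PH.6__SH.5": ["PH.6__SH.5__1", "PH.6__SH.5__2"],
    }
}

def leaf_hazards(tree):
    """Collect the component-level hazards (the leaves of the refinement)."""
    leaves = []
    for subtree in tree.values():
        for children in subtree.values():
            leaves.extend(children)
    return leaves

print(len(leaf_hazards(hazard_tree)))  # -> 6 component-level hazards
```

Representing the refinement explicitly preserves the linkage from each component-level hazard back to the system hazard it mitigates, which is the consistency property the methodology argues for.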

Step 5. Requirements allocation and implementation

As system hazard information is developed down to component level, safety requirements for each individual component are also generated. These requirements are then allocated to software and hardware components. As implementation proceeds, new information continuously emerges. Further analysis of the components is needed during implementation, iterating with this new information to design mitigations for the component-level hazards.
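The allocation described in this step can be sketched as a mapping from component-level hazards to the components that must satisfy the derived safety requirements. The mapping below follows the Step 4 identifiers and component names and is illustrative only.

```python
# Allocation of component-level hazards to components (Step 5), using
# the identifiers and component names from the Step 4 analysis.
allocation = {
    "PH.6__SH.4__1": "Onboard CDHC",
    "PH.6__SH.4__2": "Onboard CDHC",
    "PH.6__SH.4__3": "RCS",
    "PH.6__SH.4__4": "Payload Camera",
    "PH.6__SH.5__1": "Onboard CDHC",
    "PH.6__SH.5__2": "Ground Station",
}

# Invert the mapping to obtain, per component, the set of hazards its
# safety requirements must mitigate during implementation.
by_component = {}
for hazard, component in allocation.items():
    by_component.setdefault(component, []).append(hazard)

print(sorted(by_component["Onboard CDHC"]))
```

The inverted view is what an implementation team actually works from: each component receives its hazard set, and new hazards discovered during implementation are appended and traced back through the same identifiers.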

V. CONCLUSION

All man-made systems have both positive and negative effects for human beings. Maintaining the positive effects and constraining the negative effects is the top requirement for a system. Software intensive systems are mission-critical, highly complex, highly automatic and high-risk. The negative effects of these systems are not limited to the technical system, the operator and the environment, but also extend to social aspects. Together with the losses of system, human and environment, the loss of mission should also be included in the system safety process.

Accident reports show that information incompleteness and inconsistency account for more accidents than implementation errors. Safety information should be injected into system design completely and consistently. By combining safety-driven system engineering and component-based system design, a new methodology is developed in this paper to assure completeness and consistency in system engineering. As shown in the case study, this methodology is based on a complete view of system safety and provides linkages for safety information throughout the system design process, and thus offers a possible way to assure completeness and consistency in system design.

Further research aims at methodology refinement and the development of supporting tools to ensure and validate the completeness and consistency of information in software intensive system design.

REFERENCES

[1] Kathryn Anne Weiss, Nicolas Dulac, Stephanie Chiesi, Mirna Daouk, David Zipkin, and Nancy Leveson, "Engineering Spacecraft Mission Software using a Model-Based and Safety-Driven Design Methodology," Journal of Aerospace Computing, Information, and Communication, Vol. 3, November 2006.

[2] Ricky W. Butler, Steven P. Miller, James N. Potts, and Victor A. Carreno, "A Formal Methods Approach to the Analysis of Mode Confusion," Proceedings of the 17th Digital Avionics Systems Conference (DASC), IEEE, 1998.