In my first blog, I introduced the idea that software and systems engineering projects are different when it comes to regulatory compliance, and that meeting regulations has a real impact on the project. In this blog, I want to look at the practical implications of this, drawing on my experience in defence and aviation. I have used aviation examples because the different safety levels are easier to convey there.
Unlike physical equipment or products, I have never known a software/systems product to be independently tested by a test house. In my experience, regular meetings are scheduled with the regulator throughout the project, and there are requirements both on the software development process and for specific documentation to be produced during it. These meetings and the documentation are then used to assess whether the software/systems product can be brought into service.
The impact of the different levels of safety criticality on the design, implementation and test activities of a project should not be underestimated, in terms of both time and cost. The following, which gives just a flavour by way of illustration, describes what is needed at the highest level of safety criticality; for less critical systems, the demands are fewer.
Requirements may need to be specified for the system, all sub-systems and all units, and there will need to be a mapping between the different levels of requirements to demonstrate that the system will function in the way it is specified to.
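As a minimal sketch of what such a mapping checks, the fragment below uses hypothetical requirement identifiers of my own invention; real projects manage this traceability in dedicated tools rather than hand-written tables.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical trace link: each system-level requirement must map down
 * to one or more sub-system (and ultimately unit) requirements. */
typedef struct {
    const char *system_req;     /* system-level requirement ID */
    const char *subsystem_req;  /* sub-system requirement it traces to */
} TraceLink;

static const TraceLink trace[] = {
    { "SYS-001", "NAV-010" },
    { "SYS-001", "NAV-011" },
    { "SYS-002", "DSP-020" },
};

/* Counts the sub-system requirements tracing to a system requirement;
 * a result of zero indicates a traceability gap to investigate. */
static int count_children(const char *system_req)
{
    int n = 0;
    for (size_t i = 0; i < sizeof trace / sizeof trace[0]; i++) {
        if (strcmp(trace[i].system_req, system_req) == 0) {
            n++;
        }
    }
    return n;
}
```

The same idea is applied at every level: each requirement must be traceable both downwards to its implementation and upwards to its parent, so that gaps in either direction are visible.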
The software architecture and design need to be cognizant of the level of safety criticality. For example, in a helicopter or aeroplane the Flight Management Software will be classed at the highest level of safety criticality, because a failure could cause the aircraft to crash with loss of life. At this level, the allowed constructs exclude functionality such as dynamic memory allocation. These architectural and design constraints also apply to any commercially provided functionality, such as a real-time operating system. Software assessed as less safety-critical, for example a map capability, has less rigorous restrictions placed on it and a wider range of constructs can be used, e.g. dynamic memory allocation may be permitted.
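To illustrate the dynamic memory restriction, here is a small sketch (not drawn from any real avionics codebase) of the statically allocated style typically demanded at the highest criticality level: all storage is sized at design time, so memory use is fully known up front and an allocation can never fail at run time.

```c
#include <stddef.h>

#define MAX_WAYPOINTS 128  /* worst-case bound fixed at design time */

typedef struct {
    double lat;
    double lon;
} Waypoint;

/* All storage is allocated statically; no malloc()/free() anywhere,
 * so memory usage is known and bounded at compile time. */
static Waypoint route[MAX_WAYPOINTS];
static size_t route_len = 0;

/* Adds a waypoint; returns 0 on success, -1 if the fixed-size table
 * is full. The buffer is never grown dynamically. */
int add_waypoint(double lat, double lon)
{
    if (route_len >= MAX_WAYPOINTS) {
        return -1;  /* reject the request rather than allocate more */
    }
    route[route_len].lat = lat;
    route[route_len].lon = lon;
    route_len++;
    return 0;
}
```

The price of this style is that worst-case capacities must be justified during design; the benefit is that out-of-memory failures and heap fragmentation are ruled out by construction.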
The implementation will need to be documented in the way defined by the standard for the specified level of safety criticality. This may also include coding standards that define how the code is laid out, how variables are named, the use of named constants rather than hardcoded numbers, and so on.
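A tiny invented example of the kind of rules such coding standards impose (loosely in the spirit of guidelines like MISRA C, though the specific rules here are illustrative): named constants instead of magic numbers, fixed-width integer types, and a single point of return.

```c
#include <stdint.h>

/* Named constant with units in the name: no bare "60000" in the code. */
#define ALTITUDE_LIMIT_FT 60000  /* maximum certified altitude, feet */

/* Clamps a commanded altitude to the certified envelope.
 * Single return point, fixed-width types, no magic numbers. */
static int32_t clamp_altitude_ft(int32_t altitude_ft)
{
    int32_t result = altitude_ft;

    if (result > ALTITUDE_LIMIT_FT) {
        result = ALTITUDE_LIMIT_FT;  /* never exceed the ceiling */
    }
    if (result < 0) {
        result = 0;  /* negative altitude is treated as ground level */
    }
    return result;
}
```

Rules like these exist so that reviewers, and the tools that check the code automatically, can verify conformance mechanically rather than by judgement.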
For test activities, evidence will be needed to demonstrate that all requirements specified have been tested and met. The test evidence will also need to be provided in such a way as to ensure that any safety standard requirements are addressed.
Note that RTCA DO-178B (the predecessor to the current RTCA DO-178C standard) also defined a Level E, for software with no safety requirements, to which the standard therefore did not apply. This is the only safety standard I am personally aware of that defines a classification level at which the standard does not apply.
On top of the additional time and cost incurred for the new development, the project will need to include the cost of the required documentation provided by the supplier of any commercial components used, e.g. a real-time operating system. For this privilege the supplier will charge the project, and at the highest levels of safety criticality the cost of purchasing this evidence can be substantial: in the early 2000s, one supplier quoted a figure of over $100,000 for it.
Some of the documentation requirements specified in the standard, or by the customer/regulator, may include the development of an FMEA (Failure Mode and Effects Analysis) or FMECA (Failure Mode, Effects and Criticality Analysis). These documents require fault trees to be developed to show potential failure paths through the software/system. I was once involved in an interesting debate about what failure value you place on software: software always has bugs in it (the more complex the software, the higher the probability), but those bugs are only seen if a specific path is taken through the code. So should software be assigned 1 (always going to fail), 0 (never going to fail), or something in between? I suspect the debate is ongoing!
I also know there have been proposals that, for the highest level of safety criticality, two completely standalone systems should be developed: different teams would each be paid to build the system from scratch, producing independent requirements, designs, implementations and tests. As far as I am aware, no customer has ever actually done this, due to the cost and time implications.
In the next blog I will look at the impact of having regulatory requirements on a software project.