Linked Knowledge Nuggets: "Designing for maintainability from day one"
Author: Process Fellows
Think maintenance when architecting. Clear structures, good naming, minimal duplication, and comments that explain why (not what). Maintainability isn’t added later — it’s built in.
"Examples of architectural decision criteria for choosing between an MLE and an SWE development approach"
Author: Process Fellows
Rule-based vs. data-driven decisions:
Classic software development is ideal if you can formulate clear, explicit rules (e.g. “If A, then B”).
ML is suitable when rules are difficult to define but patterns in the data can be recognized.
Availability and quality of the data:
ML requires large, high-quality data sets. If you don't have enough data or the data is very noisy, ML will probably not work well.
Classical development does not require large data sets, but rather precise logic.
Expected generalization ability:
ML is strong if you want a system to learn from examples and make predictions for new, unknown data (e.g. image recognition, speech input).
Classical development is better if you can specify a deterministic output for every possible input.
Maintenance effort and explainability:
Classical software is usually easier to debug and explain because the logic is explicitly specified.
ML models are often black boxes - it can be difficult to understand why a particular decision was made.
Frequency of rule changes:
If rules change frequently, ML can be more flexible, as it can adapt through retraining.
If the rules remain stable, classical development is more efficient.
Real-time requirements:
Classical software is often faster and more predictable, as no complex model inference is required.
ML can be computationally intensive, especially when real-time inference is required.
Cost and implementation effort:
Classical development can be cheaper and faster if a deterministic solution exists.
ML often requires expensive resources (computing power, model training, data preparation).
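The criteria above can be turned into a rough scoring aid. Everything below (the `Criteria` fields, the vote tally, and the two example problems) is a hypothetical sketch for illustration, not a formula from any standard:

```python
from dataclasses import dataclass

@dataclass
class Criteria:
    # Yes/no answers to the decision criteria above (hypothetical fields)
    rules_explicit: bool           # can you write "If A, then B" rules?
    large_clean_dataset: bool      # enough high-quality data available?
    needs_generalization: bool     # must handle new, unseen inputs?
    explainability_critical: bool  # must every decision be explainable?
    rules_change_often: bool       # do the rules change frequently?
    hard_realtime: bool            # strict real-time constraints?

def recommend(c: Criteria) -> str:
    """Tally votes for ML vs. classic software engineering; a sketch, not a rule."""
    ml_votes = sum([
        not c.rules_explicit,      # rules hard to formulate -> ML
        c.large_clean_dataset,     # data is there -> ML feasible
        c.needs_generalization,    # predictions on unknown inputs -> ML
        c.rules_change_often,      # retraining beats re-coding -> ML
    ])
    swe_votes = sum([
        c.rules_explicit,          # deterministic logic exists -> SWE
        c.explainability_critical, # explicit logic is debuggable -> SWE
        c.hard_realtime,           # predictable runtime -> SWE
        not c.large_clean_dataset, # no data -> ML will not work well
    ])
    return "MLE" if ml_votes > swe_votes else "SWE"

# Image-recognition-like problem: no explicit rules, good data, must generalize
print(recommend(Criteria(False, True, True, False, True, False)))  # -> MLE
# Deterministic tariff calculation: clear rules, explainability, real-time
print(recommend(Criteria(True, False, False, True, False, True)))  # -> SWE
```

The weighting is deliberately naive; in practice, single criteria (e.g., a safety-driven explainability demand) can veto the whole decision.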
"Software Requirements Analysis and Architectural Design"
Author: Process Fellows
SWE.1 & SWE.2 define how software requirements are derived from system input and structured into a clear, consistent software architecture. The processes ensure traceability, logical breakdown, and well-founded design decisions for scalable and maintainable software.
PF_SWE.1_SWE.2_Software Requirements Analysis and Architecture _Extract.pdf
"The role of the architect"
Author: Process Fellows
Not a bureaucrat, but a facilitator: someone who translates goals into structures, moderates conflicts, aligns toolchains, and keeps the big picture stable across sprints and handovers. Incidentally, this holds for product-related architects as well as for process architects.
# PROCESS PURPOSE
The purpose is to establish an analyzed software architecture, comprising static and dynamic aspects, consistent with the software requirements.
# PROCESS OUTCOMES
O1
A software architecture is designed including static and dynamic aspects.
O2
The software architecture is analyzed against defined criteria.
O3
Consistency and bidirectional traceability are established between software architecture and software requirements.
O4
The software architecture is agreed and communicated to all affected parties.
# BASE PRACTICES
BP1
Specify static aspects of the software architecture. (O1)
Specify and document the static aspects of the software architecture with respect to the functional and non-functional software requirements, including external interfaces and a defined set of software components with their interfaces and relationships.
Note 1: The hardware-software interface (HSI) definition puts the hardware design in context and is therefore an aspect of system design (SYS.3).
(Software component: In design- and implementation-oriented processes, the software architecture decomposes the software into software components across appropriate hierarchical levels, down to the lowest-level software components, in a conceptual model. In verification-oriented processes, the implementation of a software component under verification is represented e.g. as source code, object files, a library file, an executable, or an executable model.)
(Hardware: Assembled and interconnected electrical or electronic hardware components or parts which perform analog or digital functions or operations.)
Linked Knowledge Nuggets: "What is the difference between a model, view and a diagram in (system) architecture?"
Author: Process Fellows
In model-based development, it's essential to distinguish between the concepts of "model", "view", and "diagram", as each serves a specific purpose.
A "model" is an abstraction of reality. It represents the complete system description, but only in terms of the essential elements that are relevant to the modeling context. Irrelevant details are deliberately excluded to maintain clarity and focus. The model is considered "complete" not because it includes every possible detail, but because it captures all significant influencing factors necessary for understanding and analysis within its intended scope. Remark: This definition applies across disciplines, e.g. when a model is used to define a software architecture. If needed, different modelling techniques (e.g. SysML versus UML) can be used for different disciplines.
A "view" focuses on a particular aspect of the model, such as its structure or behavior. Views are tailored to the needs of specific stakeholders, which means that certain details may be intentionally omitted. However, a view never contains more information than the model itself—it is always a subset or projection of the model. The model remains the authoritative source of truth, while views help stakeholders concentrate on what matters most to them.
A "diagram" is a visual representation of a model with respect to a specific view. It helps communicate the model’s content in a clear and accessible way. Multiple types of diagrams can be used to illustrate different views, depending on the aspect being analyzed and the audience’s needs.
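The model/view relationship described above (a view is always a projection of the model and never contains more information) can be sketched in a few lines. The component model and the two views below are invented examples, not part of any standard:

```python
# A minimal "model": the authoritative, complete description for its scope.
model = {
    "components": {
        "Sensor": {"provides": ["IMeasure"], "requires": []},
        "Filter": {"provides": ["ISignal"], "requires": ["IMeasure"]},
    },
    "states": ["Init", "Running", "Error"],
}

def structural_view(m):
    """A view: a stakeholder-specific projection -- here, structure only."""
    return {name: c["provides"] for name, c in m["components"].items()}

def behavioral_view(m):
    """Another view of the same model -- here, behavior (states) only."""
    return list(m["states"])

# Each view is a subset of the model; neither adds information to it.
print(structural_view(model))  # {'Sensor': ['IMeasure'], 'Filter': ['ISignal']}
print(behavioral_view(model))  # ['Init', 'Running', 'Error']
```

A diagram would then be one possible rendering of such a view (e.g., a UML component diagram of `structural_view`), while the model dictionary remains the single source of truth.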
BP2
Specify dynamic aspects of the software architecture. (O1)
Specify and document the dynamic aspects of the software architecture with respect to the functional and non-functional software requirements, including the behavior of the software components and their interaction in different software modes, and concurrency aspects.
Note 2: Examples of concurrency aspects are application-relevant interrupt handling, preemptive processing, and multi-threading.
Note 3: Examples of behavioral descriptions are natural language or semi-formal notations (e.g., SysML, UML).
Linked Knowledge Nuggets: "What is the difference between a model, view and a diagram in (system) architecture?" (see BP1)
BP3
Analyze software architecture. (O2)
Analyze the software architecture regarding relevant technical design aspects, and support project management regarding project estimates. Document a rationale for the software architectural design decisions.
Note 4: See MAN.3.BP3 for project feasibility and MAN.3.BP5 for project estimates.
Note 5: The analysis may include the suitability of pre-existing software components for the current application.
Note 6: Examples of methods suitable for analyzing technical aspects are prototypes, simulations, and qualitative analyses.
Note 7: Examples of technical aspects are functionality, timing, and resource consumption (e.g., ROM, RAM, external/internal EEPROM or data flash, or CPU load).
Note 8: Design rationales can include arguments such as proven-in-use, reuse of a software framework or software product line, a make-or-buy decision, or a rationale found in an evolutionary way (e.g., set-based design).
(Project: Endeavor with defined start and finish dates undertaken to create a product or service in accordance with specified resources and requirements.)
Linked Knowledge Nuggets: "Analysis of an architectural design"
Author: Process Fellows
The analysis of an architectural design is a substantive examination of the quality of an architecture.
It should never depend on the background knowledge or experience of a single person.
For this reason, the analysis must involve an (ideally interdisciplinary) team of experts, be based on predefined criteria, and follow an analysis method that is documented in a comprehensible manner.
Examples of analysis criteria can be found in the Automotive SPICE Guidelines, but do not hesitate to define additional criteria appropriate to the task and the environment!
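Recording such an analysis as structured data keeps it independent of any single person's head and makes the result reproducible. A minimal sketch; the criteria names, limits, and measured values are all invented:

```python
# Predefined analysis criteria with pass limits (all values hypothetical),
# agreed before the analysis starts.
criteria = {
    "cpu_load_pct_max": 80,   # worst-case CPU load must stay below 80 %
    "ram_kb_max": 512,        # static + dynamic RAM budget in KiB
}

# Values obtained by, e.g., prototyping or simulation (Note 6 above).
analysis = {
    "cpu_load_pct_max": 72,
    "ram_kb_max": 530,
}

# One pass/fail verdict per predefined criterion -- the documented result.
results = {name: analysis[name] <= limit for name, limit in criteria.items()}
print(results)  # {'cpu_load_pct_max': True, 'ram_kb_max': False}
```

A failed criterion (here the RAM budget) then triggers a documented design decision: rework the architecture, or justify and agree a changed limit.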
BP4
Ensure consistency and establish bidirectional traceability. (O3)
Ensure consistency and establish bidirectional traceability between the software architecture and the software requirements.
Note 9: There may be non-functional software requirements that the software architectural design does not trace to, e.g., development process requirements. Such requirements are still subject to verification.
Note 10: Bidirectional traceability supports consistency and facilitates impact analysis of change requests as well as demonstration of verification coverage. Traceability alone, e.g., the existence of links, does not necessarily mean that the information is mutually consistent.
(Verification: Confirmation, through the provision of objective evidence, that an element fulfils the specified requirements.)
Linked Knowledge Nuggets: "Consistency vs. Traceability – What’s the Difference?"
Author: Process Fellows
Consistency ensures that related content doesn’t contradict itself – e.g., requirements align with architecture and test. Traceability, in contrast, is about links: can you follow a requirement through to implementation and verification? Both are needed – consistency builds trust, traceability enables control. Typically, traceability strongly supports consistency review.
"The role of traceability in risk control"
Author: Process Fellows
Traceability isn’t just about completeness — it’s about managing impact. When a requirement changes, trace links tell you what’s affected. That’s your early-warning system.
"The true benefit of traceability"
Author: Process Fellows
The creation of traceability is sometimes seen as an additional expense whose benefits go unrecognized.
Traceability should be set up at the same time as the derived elements are created: both work products are open in front of us, and creating the trace often takes only a few moments.
Established after the fact, the effort increases noticeably and the risk of gaps is high.
If the traceability is complete and consistent, discovering dependencies is unbeatably fast and reliable compared to searching for them at a later stage, possibly under time pressure.
It also enables proof of complete coverage of the derived elements and allows a complete consistency check.
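The benefits described in these nuggets (coverage proof, impact analysis as an early-warning system) fall out almost for free once trace links exist as data. A minimal sketch with hypothetical requirement and architecture-element IDs:

```python
# Trace links: software requirement -> architecture elements it traces to.
traces = {
    "SWREQ-1": ["ARCH-A"],
    "SWREQ-2": ["ARCH-A", "ARCH-B"],
    "SWREQ-3": [],            # gap: requirement not yet traced
}
arch_elements = {"ARCH-A", "ARCH-B", "ARCH-C"}

# Forward coverage: which requirements lack a trace into the architecture?
untraced_reqs = [req for req, links in traces.items() if not links]

# Backward coverage: which architecture elements trace to no requirement?
covered = {elem for links in traces.values() for elem in links}
orphan_elements = sorted(arch_elements - covered)

# Impact analysis: if ARCH-A changes, which requirements are affected?
impacted = [req for req, links in traces.items() if "ARCH-A" in links]

print(untraced_reqs)    # ['SWREQ-3']
print(orphan_elements)  # ['ARCH-C']
print(impacted)         # ['SWREQ-1', 'SWREQ-2']
```

Note that these checks only cover the link structure; as Note 10 above points out, semantic consistency of the linked content still needs review.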
BP5
Agree and communicate the software architecture. (O4)
Communicate the agreed software architecture to all affected parties.
# OUTPUT INFORMATION ITEMS
15-51
Analysis results (O2)
Identification of the object under analysis.
The analysis criteria used, e.g.:
selection criteria or prioritization scheme used
decision criteria
quality criteria
The analysis results, e.g.:
what was decided/selected
reason for the selection
assumptions made
potential negative impact
Aspects of the analysis may include:
correctness
understandability
verifiability
feasibility
validity
Used by these processes:
ACQ.4 Supplier Monitoring
HWE.1 Hardware Requirements Analysis
HWE.2 Hardware Design
MAN.5 Risk Management
MAN.6 Measurement
MLE.1 Machine Learning Requirements Analysis
MLE.2 Machine Learning Architecture
PIM.3 Process Improvement
SWE.1 Software Requirements Analysis
SWE.2 Software Architectural Design
SYS.1 Requirements Elicitation
SYS.2 System Requirements Analysis
SYS.3 System Architectural Design
13-52
Communication evidence (O4)
All forms of interpersonal communication, such as:
e-mails, including automatically generated ones
tool-supported workflows
meetings, verbal or documented via meeting minutes (e.g., daily stand-ups)
podcasts
blogs
videos
forums
live chat
wikis
photo protocols
Used by these processes:
ACQ.4 Supplier Monitoring
HWE.1 Hardware Requirements Analysis
HWE.2 Hardware Design
HWE.3 Verification against Hardware Design
HWE.4 Verification against Hardware Requirements
MAN.3 Project Management
MLE.1 Machine Learning Requirements Analysis
MLE.2 Machine Learning Architecture
MLE.3 Machine Learning Training
MLE.4 Machine Learning Model Testing
PIM.3 Process Improvement
REU.2 Management of Products for Reuse
SUP.1 Quality Assurance
SUP.11 Machine Learning Data Management
SWE.1 Software Requirements Analysis
SWE.2 Software Architectural Design
SWE.3 Software Detailed Design and Unit Construction
SWE.4 Software Unit Verification
SWE.5 Software Component Verification and Integration Verification
SWE.6 Software Verification
SYS.1 Requirements Elicitation
SYS.2 System Requirements Analysis
SYS.3 System Architectural Design
SYS.4 System Integration and Integration Verification
SYS.5 System Verification
VAL.1 Validation
Used by these process attributes:
PA2.1 Performance Management
13-51
Consistency Evidence (O3)
Demonstrates bidirectional traceability between artifacts, or between information within artifacts, throughout all phases of the life cycle, e.g., by:
tool links
hyperlinks
editorial references
naming conventions
Evidence that the content of the referenced or mapped information coheres semantically along the traceability chain, e.g., by:
performing pair working or group work
reviews performed by peers, e.g., spot checks
maintaining revision histories in documents
providing change comments (via, e.g., meta-information) on database or repository entries
Note: This evidence can be accompanied by e.g., Definition of Done (DoD) approaches.
Used by these processes:
HWE.1 Hardware Requirements Analysis
HWE.2 Hardware Design
HWE.3 Verification against Hardware Design
HWE.4 Verification against Hardware Requirements
MAN.3 Project Management
MLE.1 Machine Learning Requirements Analysis
MLE.2 Machine Learning Architecture
MLE.3 Machine Learning Training
MLE.4 Machine Learning Model Testing
SUP.8 Configuration Management
SUP.10 Change Request Management
SWE.1 Software Requirements Analysis
SWE.2 Software Architectural Design
SWE.3 Software Detailed Design and Unit Construction
SWE.4 Software Unit Verification
SWE.5 Software Component Verification and Integration Verification
SWE.6 Software Verification
SYS.2 System Requirements Analysis
SYS.3 System Architectural Design
SYS.4 System Integration and Integration Verification
SYS.5 System Verification
VAL.1 Validation
04-04
Software Architecture (O1)
A justifying rationale for the chosen architecture.
Individual functional and non-functional behavior of the software components.
Settings for application parameters (being a technical implementation solution for configurability-oriented requirements).
Technical characteristics of interfaces for relationships between software components, such as:
synchronization of processes and tasks
programming language calls
APIs
specifications of SW libraries
method definitions in object-oriented class definitions or UML/SysML interface classes
callback functions, "hooks"
Dynamics of software components and software states, such as:
intercommunication (processes, tasks, threads) and priority
time slices and cycle times
interrupts with their priorities
interactions between software components
Explanatory annotations, e.g., with natural language, for single elements or entire diagrams/models.
(Application parameter: A software variable containing data that can be changed at the system or software level and that influences the system’s or software’s behavior and properties. The notion is expressed in two ways: the specification (variable name, domain value range, technical data type, default value, physical unit if applicable, and the corresponding memory map), and the actual quantitative data value it receives by means of data application. Application parameters are not requirements; they are a technical implementation solution for configurability-oriented requirements.)
(Task: A definition, but not the execution, of a coherent set of atomic actions.)
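One of the interface characteristics listed above (relationships between software components via provided and required interfaces) lends itself to an automated consistency check once the static architecture exists as data. A hedged sketch; the component and interface names are invented:

```python
# Static architecture: components with provided and required interfaces.
components = {
    "SignalAcq": {"provides": {"ISignal"}, "requires": set()},
    "Diagnosis": {"provides": {"IDiag"},   "requires": {"ISignal"}},
    "ComStack":  {"provides": set(),       "requires": {"IDiag", "ICal"}},
}

def unresolved_interfaces(comps):
    """Return, per component, required interfaces no component provides."""
    provided = set().union(*(c["provides"] for c in comps.values()))
    missing = {}
    for name, comp in comps.items():
        gap = comp["requires"] - provided
        if gap:
            missing[name] = sorted(gap)
    return missing

# 'ICal' is required by ComStack but provided nowhere -> architecture gap.
print(unresolved_interfaces(components))  # {'ComStack': ['ICal']}
```

Such a check only finds structural gaps; the technical characteristics themselves (call semantics, timing, data types) still need the analysis described in BP3.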