Linked Knowledge Nuggets: "AI in Automotive Systems: Aligning with ISO/PAS 8800"
Author: Sebastian Keller
How can AI be safely integrated into the vehicles of tomorrow?
ISO/PAS 8800:2024 lays the foundation for managing artificial intelligence in safety-related automotive systems. Join our webinar and learn what the new specification means for managers, project leaders, quality specialists, and engineering teams.
We’ll demystify the key concepts behind AI safety, explain how ISO/PAS 8800 relates to ISO 26262, and show how organizations can prepare for the next generation of system validation and assurance.
You will gain an overview of how to transfer AI development activities into structured frameworks such as Automotive SPICE, define roles such as AI safety manager or data governance lead, and avoid common pitfalls such as uncontrolled data drift.
Reserve your spot today and lead your company's AI safety transformation with confidence.
Webinar recording and slides
"Examples of architectural decision criteria to decide between MLE and SWE development approach"
Author: Process Fellows
Rule-based vs. data-driven decisions:
Classic software development is ideal if you can formulate clear, explicit rules (e.g. “If A, then B”).
ML is suitable when rules are difficult to define but patterns in the data can be recognized.
Availability and quality of the data:
ML requires large, high-quality data sets. If you don't have enough data or the data is very noisy, ML will probably not work well.
Classical development does not require large data sets, but rather precise logic.
Expected generalization ability:
ML is strong if you want a system to learn from examples and make predictions for new, unknown data (e.g. image recognition, speech input).
Classical development is better if you can specify a deterministic output for every possible input.
Maintenance effort and explainability:
Classical software is usually easier to debug and explain because the logic is explicitly specified.
ML models are often black boxes - it can be difficult to understand why a particular decision was made.
Frequency of rule changes:
If rules change frequently, ML can be more flexible as it can adapt through training.
If the rules remain stable, classical development is more efficient.
Real-time requirements:
Classical software is often faster and more predictable as no complex calculations are required.
ML can be computationally intensive, especially when real-time inference is required.
Cost and implementation effort:
Classical development can be cheaper and faster if a deterministic solution exists.
ML often requires expensive resources (computing power, model training, data preparation).
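The rule-based vs. data-driven distinction above can be sketched in a few lines. This is a hypothetical, deliberately tiny illustration (function names, the temperature task, and all values are invented): the same classification is solved once with an explicit "If A, then B" rule and once by deriving the decision boundary from labeled examples.

```python
# Hypothetical illustration: the same task solved rule-based vs. data-driven.
# All names and values are invented for this sketch.

def classify_rule_based(temperature_c: float) -> str:
    # Explicit "If A, then B" logic: ideal when the rule can be stated clearly.
    return "overheat" if temperature_c > 90.0 else "ok"

def learn_threshold(samples):
    # Minimal data-driven approach: derive the boundary from labeled examples
    # (midpoint between the highest "ok" and the lowest "overheat" reading)
    # instead of specifying it by hand.
    ok = max(t for t, label in samples if label == "ok")
    hot = min(t for t, label in samples if label == "overheat")
    return (ok + hot) / 2.0

training_data = [(70.0, "ok"), (85.0, "ok"), (95.0, "overheat"), (99.0, "overheat")]
threshold = learn_threshold(training_data)

def classify_learned(temperature_c: float) -> str:
    return "overheat" if temperature_c > threshold else "ok"
```

The sketch also shows the data dependency discussed above: the learned threshold is only as good as the examples it was derived from.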
"Examples of the impact of data requirements on ML architecture"
Author: Process Fellows
Amount of data: A linear regression model requires significantly less data than a “deep neural network”
Faulty or noisy data (e.g. sensor data) require pre-processing.
Data structure in combination with the problem to be solved has an impact too:
Tabular data → Decision trees or gradient boosting (e.g. XGBoost)
Images → Convolutional Neural Networks (CNNs)
Texts → Transformer models (e.g. BERT, GPT)
Time series → Recurrent neural networks (RNNs) or Long Short-Term Memory (LSTMs) / Gated Recurrent Units (GRUs, similar to LSTM but less computational power needed)
Data availability:
Real-time applications (e.g. autonomous driving): low latency required, optimized neural networks
Batch processing (e.g. recommendation system for Netflix): More time for complex models, deeper architectures are possible.
Feature Dimensionality: Models for high-dimensional data require special techniques for reduction, e.g.
image data (millions of pixels per image) → CNNs with feature extraction.
# PROCESS PURPOSE
The purpose is to establish an ML architecture supporting training and deployment, consistent with the ML requirements, and to evaluate the ML architecture against defined criteria.
# PROCESS OUTCOMES
O1
An ML architecture is developed.
O2
Hyperparameter ranges and initial values are determined as a basis for the training.
O3
Evaluation of ML architectural elements is conducted.
O4
Interfaces of the ML architectural elements are defined.
O5
Resource consumption objectives for the ML architectural elements are defined.
O6
Consistency and bidirectional traceability are established between the ML architectural elements and the ML requirements.
O7
The ML architecture is agreed and communicated to all affected parties.
BP1
Develop ML architecture. (O1)
Develop and document the ML architecture that specifies the ML architectural elements, including details of the ML model, pre- and postprocessing, and the hyperparameters required to create, train, test, and deploy the ML model.
Note 1: Necessary details of the ML model may include layers, activation functions, and backpropagation. The level of detail of the ML model may not need to cover aspects like single neurons.
Note 2: The details of the ML model may differ between the ML model used during training and the deployed ML model.
Linked Knowledge Nuggets: "AI terms: AI systems, AI elements, AI components, AI models, AI methods, AI technologies"
Author: Process Fellows
An AI system utilizes one or more AI models (e.g. each of them could be a neural network) and can contain one or more AI components (for example two components, each containing a neural network with its own purpose, where the output of the first component may be used as an input for the second). The term AI system serves as the "top level" of the content to be developed.
The AI system can use various AI methods (e.g. deep neural network, k-nearest neighbour, support vector machine, etc.) and can utilize different AI technologies (e.g. machine learning frameworks, libraries).
An AI model is a system that uses logic, math, or both to make predictions or draw conclusions from data, without relying entirely on rules defined by humans.
An AI element can be an AI component, a subset of AI components, or the complete AI system.
An AI component can be an AI pre-processing component, an AI post-processing component, an AI model, or a conventional software component within an AI system.
"Examples of the impact of data requirements on ML architecture" (see the nugget of the same title above)
"What is a convolutional neural network (CNN)?"
Author: Process Fellows
A Convolutional Neural Network (CNN) is a type of neural network mainly used to analyze images. It works by automatically detecting patterns — such as edges, shapes, or textures — through small filters that scan the image. These patterns are combined step by step so the network can recognize complex objects like faces, cars, or animals.
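The core CNN operation described above, a small filter scanning an image, can be shown without any framework. This is a minimal sketch (the image, the filter, and all sizes are invented): a vertical-edge filter slides over a tiny grayscale image and responds where the brightness changes.

```python
# Minimal sketch of the core CNN operation: sliding a small filter (kernel)
# over an image to detect a pattern, here a vertical edge. Pure Python.

def convolve2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            # Element-wise multiply the window with the kernel and sum.
            out[i][j] = sum(
                image[i + u][j + v] * kernel[u][v]
                for u in range(kh) for v in range(kw)
            )
    return out

# A 4x5 "image": dark left part (0), bright right part (1).
image = [[0, 0, 0, 1, 1]] * 4
# Vertical-edge filter: responds where brightness changes left-to-right.
kernel = [[-1, 0, 1],
          [-1, 0, 1],
          [-1, 0, 1]]

feature_map = convolve2d(image, kernel)
# The non-zero responses sit exactly where the dark/bright edge is.
```

In a real CNN, many such filters are learned from data and stacked in layers, so later layers combine simple patterns into complex objects.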
"What is a feed forward neural network (FFNN)?"
Author: Process Fellows
A Feedforward Neural Network (FFNN) is the simplest type of neural network. In this model, information moves only in one direction — from the input layer, through one or more hidden layers, to the output layer. There are no backward or side connections between neurons, and neurons within the same layer don’t interact with each other. Usually, every neuron in one layer is connected to every neuron in the next layer, which is called a “fully connected” structure.
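The strictly forward flow described above can be sketched as one fully connected pass. All weights, biases, and layer sizes below are made up for the illustration:

```python
import math

# Minimal fully connected feedforward pass: input -> hidden -> output.
# Weights and biases are invented for this sketch.

def dense(inputs, weights, biases):
    # One fully connected layer: every input feeds every neuron.
    return [
        sum(w * x for w, x in zip(neuron_w, inputs)) + b
        for neuron_w, b in zip(weights, biases)
    ]

def sigmoid(values):
    return [1.0 / (1.0 + math.exp(-z)) for z in values]

x = [0.5, -1.0]                                            # input layer (2 features)
hidden = sigmoid(dense(x, [[0.4, -0.6], [0.3, 0.8]], [0.0, 0.1]))
output = sigmoid(dense(hidden, [[1.2, -0.7]], [0.05]))
# Information moves only forward; no neuron feeds back into an earlier layer.
```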
"What is a learning rate in Machine Learning?"
Author: Process Fellows
In machine learning, the learning rate is a hyperparameter that determines the size of the steps a model takes when optimizing its parameters to reduce the error measured by the loss function.
Background:
When training a model (e.g., neural network), the aim is to minimize the loss function. This is typically done using gradient descent or variants thereof. The gradient tells us the direction in which the loss becomes smaller. The learning rate determines how far we go in this direction.
Advantages and disadvantages of learning steps that are too large or too small:
High learning rate: The model makes large jumps.
Advantage: Learning is faster.
Risk: The minimum is “skipped” or the loss becomes unstable.
Low learning rate: The model takes small, cautious steps.
Advantage: More stable learning, more accurate approximation of the minimum.
Disadvantage: Learning takes a very long time and you can get “stuck” in local minima.
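The trade-off above can be demonstrated with gradient descent on a toy loss f(w) = (w - 3)^2, whose minimum is at w = 3. The learning-rate values are illustrative only:

```python
# Gradient descent on the toy loss f(w) = (w - 3)^2 (minimum at w = 3),
# showing how the learning rate controls the step size.

def train(learning_rate, steps=50, w=0.0):
    for _ in range(steps):
        gradient = 2.0 * (w - 3.0)      # derivative of (w - 3)^2
        w -= learning_rate * gradient   # step against the gradient
    return w

small = train(0.01)   # cautious steps: stable but still far from 3 after 50 steps
good = train(0.1)     # reaches the minimum quickly
too_big = train(1.1)  # each step overshoots the minimum: the loss diverges
```

With a rate of 1.1 the deviation from the minimum is multiplied by -1.2 every step, so the "minimum is skipped" case from the text shows up as a value that runs away instead of converging.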
"What is a long short-term memory network (LSTM)?"
Author: Process Fellows
A Long Short-Term Memory (LSTM) network is a type of recurrent neural network designed to remember information over long periods of time. Unlike regular neural networks, LSTMs can keep track of what they learned earlier in a sequence, which helps them understand patterns that unfold over time. This makes them very useful for things like speech recognition, language translation, or predicting time-based data, where past information is important for understanding what comes next.
"What is a loss function?"
Author: Process Fellows
In the context of machine learning, a loss function is a mathematical function that measures how well or poorly a model makes its predictions compared to the actual target values.
The goal when training an ML model is to adjust the model's parameters so that this error (the loss) is as small as possible.
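One common concrete loss function is the mean squared error. The sketch below (with invented prediction and target values) shows the idea: squared differences between predictions and actual targets, averaged; 0 means a perfect fit.

```python
# Mean squared error, one common loss function: the average of the squared
# differences between predictions and the actual target values.

def mse(predictions, targets):
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)

perfect = mse([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])   # 0.0: predictions match exactly
off = mse([1.5, 2.0, 2.0], [1.0, 2.0, 3.0])       # (0.25 + 0.0 + 1.0) / 3
```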
"What is a neural network?"
Author: Process Fellows
Neural networks are designed to imitate how the human brain processes information. They consist of many small processing units called neurons, which are connected in layers. Each neuron takes in several inputs, processes them, and produces one output. The connections between neurons have weights that indicate how important each input is.
During training, the network is given known examples and compares its output to the correct answer. The difference (error) is used to adjust the weights — strengthening the connections that lead to correct results and weakening those that cause mistakes. Over time, this process allows the network to learn how to analyze and make decisions for complex problems.
"What is a recurrent neural network (RNN)?"
Author: Process Fellows
Recurrent Neural Networks (RNNs) are designed to handle data that comes in a specific order — meaning the sequence of inputs is important. Such data can be continuous, like sound or video, but also structured, like text or even images.
Unlike simpler networks, RNNs don’t just process the current input; they also take into account what they have learned from previous steps. This gives them a kind of “memory” that allows them to recognize patterns over time. Because of this, RNNs are often used in applications such as speech recognition, language translation, time-series prediction, and image analysis.
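The "memory" described above can be reduced to a single recurrent update: the new hidden state mixes the previous state with the current input. All weights below are invented for the sketch:

```python
import math

# Minimal recurrent step: the hidden state h carries information from
# earlier inputs forward through the sequence. Weights are invented.

def rnn_step(h_prev, x, w_h=0.5, w_x=1.0, b=0.0):
    # New state = non-linear mix of the previous state (the "memory")
    # and the current input.
    return math.tanh(w_h * h_prev + w_x * x + b)

def run(sequence):
    h = 0.0
    for x in sequence:
        h = rnn_step(h, x)
    return h

# The same final input yields different states depending on what came before:
a = run([0.0, 0.0, 1.0])
b = run([1.0, 1.0, 1.0])
# a != b although the last input is identical: the network "remembers".
```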
"What is an activation function?"
Author: Process Fellows
An activation function is a function applied to the weighted combination of all inputs to a neuron. Activation functions allow neural networks to learn complicated features in the data. They are typically of a non-linear nature.
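Two widely used activation functions, applied to a neuron's weighted input sum, look like this (the example inputs, weights, and bias are invented):

```python
import math

# Two common activation functions applied to a neuron's weighted input sum.

def relu(z):
    # Passes positive values through, zeroes out negatives; simple and non-linear.
    return max(0.0, z)

def sigmoid(z):
    # Squashes any real number into the open interval (0, 1); non-linear.
    return 1.0 / (1.0 + math.exp(-z))

weighted_sum = 0.2 * 1.0 + (-0.5) * 0.8 + 0.3   # inputs * weights + bias
activated = relu(weighted_sum)
```

Without such non-linearities, stacked layers would collapse into a single linear mapping, which is why activation functions are what lets a network learn complicated features.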
BP2
Determine hyperparameter ranges and initial values. (O2)
Determine and document the hyperparameter ranges and the initial values as a basis for the training.
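A hypothetical sketch of how such documented ranges and initial values could look as a basis for training follows; every name and value is invented for the illustration, not prescribed by the process:

```python
# Hypothetical documentation of hyperparameter ranges and initial values
# (all names and values are invented for this sketch).

HYPERPARAMETERS = {
    "learning_rate": {"range": (1e-5, 1e-1),    "initial": 1e-3},
    "num_layers":    {"range": (2, 8),          "initial": 4},
    "dropout":       {"range": (0.0, 0.5),      "initial": 0.1},
    "loss_function": {"range": ("mse", "mae"),  "initial": "mse"},
}

def initial_config():
    # Training starts from the documented initial values; later iterations
    # may move within the documented ranges.
    return {name: spec["initial"] for name, spec in HYPERPARAMETERS.items()}
```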
BP3
Analyze ML architectural elements. (O3)
Define criteria for analysis of the ML architectural elements. Analyze the ML architectural elements according to the defined criteria.
Note 3: Trustworthiness and explainability might be criteria for the analysis of the ML architectural elements.
Linked Knowledge Nuggets: "Types of explainable AI (xAI)"
Author: Process Fellows
Post-hoc explanations (explanations after the decision): An already trained model is analyzed to explain why it made a certain decision.
Saliency maps - Shows which input features were important (e.g. for images).
LIME (Local Interpretable Model-agnostic Explanations) - Creates simplified, linear models to explain individual predictions.
SHAP (Shapley Additive Explanations) - Calculates the influence of each input feature on the prediction.
Counterfactual Explanations - Shows how the decision would change if certain inputs were different.
Intrinsically explainable models: Models are designed to be explainable from the outset, examples:
Decision trees - Clear “if-then” rules.
Linear regression - Direct correlation between input and output.
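The post-hoc idea can be illustrated with a much simpler cousin of SHAP/LIME: permutation-style importance, where one input feature at a time is knocked out and the growth of the prediction error is measured. The "trained" model and data below are invented for the sketch:

```python
# Minimal post-hoc explanation sketch (permutation-style importance):
# knock out one feature at a time and see how much the error grows.
# Features the model relies on hurt more when removed.
# The model and data are invented for this illustration.

def model(x):
    # A "trained" model that heavily uses feature 0 and barely uses feature 1.
    return 3.0 * x[0] + 0.1 * x[1]

data = [([1.0, 4.0], 3.4), ([2.0, 1.0], 6.1), ([0.5, 2.0], 1.7)]

def error_with_feature_removed(feature_idx):
    total = 0.0
    for x, target in data:
        x_masked = list(x)
        x_masked[feature_idx] = 0.0   # knock the feature out
        total += abs(model(x_masked) - target)
    return total / len(data)

importance = [error_with_feature_removed(i) for i in range(2)]
# importance[0] >> importance[1]: the explanation says feature 0 drives the output.
```

Real methods such as SHAP attribute contributions far more rigorously, but the underlying question is the same: how much does the prediction depend on each input?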
"What does trustworthiness mean?"
Author: Process Fellows
The trustworthiness of ML models encompasses criteria that enable stakeholders to assess whether the model aligns with their expectations. Trustworthiness is built, for example, from criteria such as robustness, predictability, explainability, controllability, generalization, bias, and fairness.
AI controllability means the ability of an external agent to control the AI, its output, or the behaviour of the item influenced by the AI output, in order to prevent harm.
AI explainability means the property of an AI system to express important factors influencing the AI system's outputs in a way that humans can understand.
AI predictability means the ability of the AI system to produce trusted predictions, i.e. predictions are accurate and there is statistical evidence.
AI generalization means the ability of an AI model to adapt and perform well on previously unseen data during inference.
AI robustness means the ability to maintain an acceptable level of performance although the input is "imperfect", e.g. image data that is partially corrupt or affected by significant sensor noise.
AI bias means that an AI model or dataset can be systematically prejudiced towards some kind of (potentially erroneous) assumption. This assumption stems from the inherent statistical distributions (e.g. over classes) in a dataset that can be learned by a model.
AI fairness: if the model bias is linked to a difference in treatment of certain subgroups of humans (e.g. ethnic minorities, age, or sex), the model is considered unfair. AI fairness is the reasonable absence of unfairness.
BP4
Define interfaces of the ML architectural elements. (O4)
Determine and document the internal and external interfaces of each ML architectural element, including its interfaces to related software components.
Linked Knowledge Nuggets: "What has to be considered when defining interfaces between AI elements?"
Author: Process Fellows
The ML architecture must also take into account the interfaces between the different architectural elements. This is basically similar to the approach in software architecture. Interfaces of a SW architecture are usually described by their name, type, range, default value, unit, resolution, and direction. In the case of an ML architecture, additional aspects are important, because such systems are typically data-driven and probabilistic. Examples (non-exhaustive):
Such models often provide probabilities or confidence values, not just yes/no answers. Therefore, interfaces might define how uncertainty is communicated and how thresholds for decisions are configured.
Sensitivity to input data: format, normalization, and encoding must be clearly specified, as well as how to handle missing values or out-of-distribution data. With "classical" APIs, often only the type must match; with AI, the data representation must match as well (e.g. tokenization, image dimensions).
ML/AI models can change frequently because of re-training or fine-tuning. Therefore, interfaces should include a model version or API version. In addition, a strategy for backward compatibility can make sense.
Inference times can vary (depending on e.g. hardware and batch size). Therefore, interfaces could account for configurable timeouts and quality-of-service parameters, or they could support asynchronous processing or streaming as an option.
AI/ML models often process personal data. Therefore, interfaces could support anonymization of data, and logging should be done in accordance with GDPR and similar regulations.
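A hypothetical sketch of an ML component interface that makes several of the aspects above explicit, confidence instead of a bare yes/no, a configurable decision threshold, and a model version, follows; every name and value is invented for the illustration:

```python
from dataclasses import dataclass

# Hypothetical ML component interface sketch. All names, the threshold,
# and the version string are invented for this illustration.

@dataclass
class ClassificationResult:
    label: str
    confidence: float     # probability in [0, 1], not just a yes/no answer
    model_version: str    # supports re-training and backward compatibility

DECISION_THRESHOLD = 0.8  # configurable, documented as part of the interface

def decide(result: ClassificationResult) -> str:
    # Uncertainty handling is specified at the interface, not hidden inside:
    # low-confidence outputs are explicitly deferred instead of silently used.
    if result.confidence >= DECISION_THRESHOLD:
        return result.label
    return "defer_to_fallback"
```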
BP5
Define resource consumption objectives for the ML architectural elements. (O5)
Determine and document the resource consumption objectives for all relevant ML architectural elements during training and deployment.
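A hypothetical sketch of how such objectives could be documented per ML architectural element, for both training and deployment, follows; all element names, units, and values are invented:

```python
# Hypothetical resource consumption objectives per ML architectural element
# (all names, units, and values are invented for this sketch).

RESOURCE_OBJECTIVES = {
    "preprocessing":  {"deploy_ram_mb": 32,  "deploy_latency_ms": 2},
    "ml_model":       {"train_gpu_hours": 48,
                       "deploy_ram_mb": 256, "deploy_latency_ms": 20},
    "postprocessing": {"deploy_ram_mb": 16,  "deploy_latency_ms": 1},
}

def total_deploy_latency_ms():
    # Summing per-element objectives gives a budget to check against
    # any end-to-end latency requirement.
    return sum(obj["deploy_latency_ms"] for obj in RESOURCE_OBJECTIVES.values())
```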
BP6
Ensure consistency and establish bidirectional traceability. (O6)
Ensure consistency and establish bidirectional traceability between the ML architectural elements and the ML requirements.
Note 4: Bidirectional traceability supports consistency, facilitates impact analyses of change requests, and supports verification coverage demonstration. Traceability alone, e.g., the existence of links, does not necessarily mean that the information is consistent.
Note 5: The bidirectional traceability should be established on a reasonable level of abstraction to the ML architectural elements.
Linked Knowledge Nuggets: "Consistency vs. Traceability – What’s the Difference?"
Author: Process Fellows
Consistency ensures that related content doesn’t contradict itself – e.g., requirements align with architecture and test. Traceability, in contrast, is about links: can you follow a requirement through to implementation and verification? Both are needed – consistency builds trust, traceability enables control. Typically, traceability strongly supports consistency review.
"The role of traceability in risk control"
Author: Process Fellows
Traceability isn’t just about completeness — it’s about managing impact. When a requirement changes, trace links tell you what’s affected. That’s your early-warning system.
"The true benefit of traceability"
Author: Process Fellows
The creation of traceability is sometimes seen as an additional expense whose benefits are not recognized.
Traceability should be set up at the same time as the derived elements are created: both work products are open in front of us, and creating the trace often takes only a few moments.
In the aftermath, the effort increases noticeably and the risk of gaps is high.
If traceability is complete and consistent, discovering dependencies is unbeatably fast and reliable compared to searching for them at a later stage, when there may also be time pressure.
It also enables proof of complete coverage of the derived elements and allows a complete consistency check.
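The impact-analysis benefit described above can be sketched with a minimal trace-link structure; all requirement and element IDs are invented for the illustration:

```python
# Sketch of trace links as an "early-warning system": given a changed ML
# requirement, find every architectural element that must be re-checked.
# All IDs are invented for this illustration.

TRACE_LINKS = {  # ML requirement -> ML architectural elements realizing it
    "ML-REQ-1": ["preprocessing", "cnn_model"],
    "ML-REQ-2": ["cnn_model", "postprocessing"],
    "ML-REQ-3": ["postprocessing"],
}

def impacted_elements(changed_requirement):
    # Forward direction: which elements are affected by the change?
    return TRACE_LINKS.get(changed_requirement, [])

def tracing_requirements(element):
    # Backward direction: which requirements does this element realize?
    # Elements realizing no requirement, or requirements with no element,
    # show up immediately as coverage gaps.
    return sorted(req for req, elems in TRACE_LINKS.items() if element in elems)
```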
BP7
Communicate agreed ML architecture. (O7)
Inform all affected parties about the agreed ML architecture, including the details of the ML model and the initial hyperparameter values.
# OUTPUT INFORMATION ITEMS
15-51
Analysis results (O1, O3)
Identification of the object under analysis.
The analysis criteria used, e.g.:
selection criteria or prioritization scheme used
decision criteria
quality criteria
The analysis results, e.g.:
what was decided/selected
reason for the selection
assumptions made
potential negative impact
Aspects of the analysis may include
correctness
understandability
verifiability
feasibility
validity
Used by these processes:
ACQ.4 Supplier Monitoring
HWE.1 Hardware Requirements Analysis
HWE.2 Hardware Design
MAN.5 Risk Management
MAN.6 Measurement
MLE.1 Machine Learning Requirements Analysis
MLE.2 Machine Learning Architecture
PIM.3 Process Improvement
SWE.1 Software Requirements Analysis
SWE.2 Software Architectural Design
SYS.1 Requirements Elicitation
SYS.2 System Requirements Analysis
SYS.3 System Architectural Design
13-52
Communication evidence (O4)
All forms of interpersonal communication such as
e-mails, also automatically generated ones
tool-supported workflows
meetings, verbally or via meeting minutes (e.g., daily standups)
podcast
blog
videos
forum
live chat
wikis
photo protocol
Used by these processes:
ACQ.4 Supplier Monitoring
HWE.1 Hardware Requirements Analysis
HWE.2 Hardware Design
HWE.3 Verification against Hardware Design
HWE.4 Verification against Hardware Requirements
MAN.3 Project Management
MLE.1 Machine Learning Requirements Analysis
MLE.2 Machine Learning Architecture
MLE.3 Machine Learning Training
MLE.4 Machine Learning Model Testing
PIM.3 Process Improvement
REU.2 Management of Products for Reuse
SUP.1 Quality Assurance
SUP.11 Machine Learning Data Management
SWE.1 Software Requirements Analysis
SWE.2 Software Architectural Design
SWE.3 Software Detailed Design and Unit Construction
SWE.4 Software Unit Verification
SWE.5 Software Component Verification and Integration Verification
SWE.6 Software Verification
SYS.1 Requirements Elicitation
SYS.2 System Requirements Analysis
SYS.3 System Architectural Design
SYS.4 System Integration and Integration Verification
SYS.5 System Verification
VAL.1 Validation
Used by these process attributes:
PA2.1 Performance Management
13-51
Consistency Evidence (O3)
Demonstrates bidirectional traceability between artifacts or information in artifacts, throughout all phases of the life cycle, by e.g.,
tool links
hyperlinks
editorial references
naming conventions
Evidence that the content of the referenced or mapped information coheres semantically along the traceability chain, e.g., by
performing pair working or group work
performing reviews by peers, e.g., spot checks
maintaining revision histories in documents
providing change commenting (via e.g., meta-information) of database or repository entries
Note: This evidence can be accompanied by e.g., Definition of Done (DoD) approaches.
Used by these processes:
HWE.1 Hardware Requirements Analysis
HWE.2 Hardware Design
HWE.3 Verification against Hardware Design
HWE.4 Verification against Hardware Requirements
MAN.3 Project Management
MLE.1 Machine Learning Requirements Analysis
MLE.2 Machine Learning Architecture
MLE.3 Machine Learning Training
MLE.4 Machine Learning Model Testing
SUP.8 Configuration Management
SUP.10 Change Request Management
SWE.1 Software Requirements Analysis
SWE.2 Software Architectural Design
SWE.3 Software Detailed Design and Unit Construction
SWE.4 Software Unit Verification
SWE.5 Software Component Verification and Integration Verification
SWE.6 Software Verification
SYS.2 System Requirements Analysis
SYS.3 System Architectural Design
SYS.4 System Integration and Integration Verification
SYS.5 System Verification
VAL.1 Validation
01-54
Hyperparameter (O1, O2)
Hyperparameters are parameters whose values control the training of the ML model; their values are set between training iterations. They are used to control the ML model which has to be trained, e.g.:
Learning rate of the training
Scaling of the network (number of layers or neurons per layer)
Loss function
Minimum characteristics:
Description
Initial value
Final value upon communicating the results of the ML training
Used by these processes:
MLE.2 Machine Learning Architecture
MLE.3 Machine Learning Training
04-51
ML architecture (O1, O2, O3, O4, O5)
An ML architecture is basically a special part of a software architecture (see 04-04). Additionally, the ML architecture
describes the overall structure of the ML-based software element (i.e., a software component or software unit)
specifies ML architectural elements, including an ML model and other ML architectural elements provided to train, deploy, and test the ML model
describes interfaces within the ML-based software element and to other software elements
describes details of the ML model such as the layers, activation functions, loss function, and backpropagation
contains defined hyperparameter ranges and initial values for the start of training
defines resource consumption objectives
contains the allocated ML requirements