TimeDaemon concept#
Use Cases#
TimeDaemon is a non-AUTOSAR-Adaptive process that is intended to get the Vehicle Time from a PTP slave daemon (ptpd or any other), verify and validate the time points, and distribute time information across the clients.
More precisely, we can specify the following use cases for the time daemon:
Providing current Vehicle time to different applications
Setting the synchronization qualifier (e.g., Synchronized, Timeout, and so on)
Providing needed information for diagnostics
Providing needed information for additional verification, e.g., SafeCarTime
A rough architectural diagram is shown below.
Components decomposition#
The design consists of several SW components:
Deployment view#
The design deployment is represented on the following diagram:
Class view#
Main classes and components are presented on this diagram:
Data and control flow#
The Data and Control flow are presented in the following diagram:
In this view you can see several “worker” scopes:
PTP retrieving scope
PTPTimeInfo handling scope
PTPTimeInfo receiving on Application side scope
Each control flow is implemented in a dedicated thread or process and is independent of the others.
Control flows#
PTP retrieving scope#
This control flow is responsible for:
retrieving the latest information from the ptp stack
providing it to the PTPTimeInfo handling control flow
PTPTimeInfo handling scope#
This control flow is responsible for:
validating the time information provided by the PTP retrieving control flow
publishing it to the Applications via some IPC mechanism
PTPTimeInfo receiving on Application side scope#
This control flow is responsible for:
propagating the time information from the PTPTimeInfo handling control flow to the business logic of the applications.
Data types or events#
There are also several data types that the components use to communicate with each other:
Raw ptp data#
raw_ptp_data is provided by the PTPMachine component and contains the raw data from the ptp stack. It is handled in the “PTP retrieving scope”.
Input ptp data#
input_ptp_data carries the same payload as raw_ptp_data, but it is handled in the “PTPTimeInfo handling scope”.
Verified ptp data#
verified_ptp_data is the input_ptp_data that has been verified according to the business logic and updated accordingly. This data is published to the Applications.
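For illustration, a minimal C++ sketch of how these data types might be modeled; the field names and the SyncQualifier values are assumptions derived from the use cases above, not a mandated layout:

```cpp
#include <chrono>
#include <cstdint>

// Illustrative sketch only: field names are assumptions, not a mandated layout.
enum class SyncQualifier : std::uint8_t { NotSynchronized, Synchronized, Timeout };

struct PTPTimeInfo {
    std::chrono::nanoseconds vehicle_time{};  // time point received from the ptp stack
    std::chrono::nanoseconds local_time{};    // local clock captured at retrieval
    SyncQualifier qualifier{SyncQualifier::NotSynchronized};
};

// raw_ptp_data, input_ptp_data and verified_ptp_data all carry this payload;
// they differ in which scope handles them and how far validation has progressed.
```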
SW Components decomposition#
Application SW component#
The Application component is the main entry point for the TimeDaemon. It is responsible for orchestrating the overall lifecycle and initialization of all daemon components.
The TimebaseHandler component is a timebase-specific logic implementation. There may be several handlers in the Application, one per supported timebase. This separation allows for different timebase implementations while maintaining a consistent application structure.
Component requirements#
The Application has the following requirements:
The Application shall implement the Initialize() method to create and initialize all daemon components
The Application shall implement the Run() method to start all components and wait for termination
The Application shall connect components to the MessageBroker by setting up all required subscriptions during the initialization stage
The Application shall support extension for different timebases.
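A minimal sketch of an Application skeleton satisfying these requirements; the BaseMachine stand-in and the method signatures are assumptions for illustration (the real interfaces are in the class diagram below):

```cpp
#include <chrono>
#include <memory>
#include <stop_token>
#include <thread>
#include <vector>

// Minimal stand-in for the machine interface; the real one is in the class diagram.
struct BaseMachine {
    virtual ~BaseMachine() = default;
    virtual void Initialize() = 0;
    virtual void Start() {}
    virtual void Stop() {}
};

class Application {
public:
    void Initialize() {
        // The MessageBroker is created first, then ProactiveMachines, then
        // ReactiveMachines; subscriptions are wired up per group (see below).
        for (auto& m : proactive_) m->Initialize();
        for (auto& m : reactive_)  m->Initialize();
    }

    void Run(std::stop_token stop) {
        for (auto& m : proactive_) m->Start();                 // start in order
        while (!stop.stop_requested())
            std::this_thread::sleep_for(std::chrono::milliseconds{100});
        for (auto it = proactive_.rbegin(); it != proactive_.rend(); ++it)
            (*it)->Stop();                                     // stop in reverse order
    }

private:
    std::vector<std::unique_ptr<BaseMachine>> proactive_;  // PtpMachine, ControlFlowDivider
    std::vector<std::unique_ptr<BaseMachine>> reactive_;   // VerificationMachine, IPCMachine
};
```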
Class view#
The Class Diagram is presented below:
Initialization flow#
During initialization, the Application uses the MachineFactory to create, configure and subscribe all components in a specific order:
Create the MessageBroker first, as other components depend on it
Create ProactiveMachines (PtpMachine, ControlFlowDivider) that drive system behavior:
    Initialize each component
    Set up MessageBroker subscriptions to component notifications
    Set up component subscriptions to MessageBroker topics
Create ReactiveMachines (VerificationMachine, IPCMachine) that respond to events:
    Initialize each component
    Set up MessageBroker subscriptions to component notifications
    Set up component subscriptions to MessageBroker topics
The initialization workflow is represented in the following sequence diagram:
Execution and shutdown flow#
During execution, the Application:
Starts all ProactiveMachines in the correct order
Monitors the stop token for termination requests
When termination is requested, stops all ProactiveMachines in reverse order
The execution and shutdown workflow is represented in the following sequence diagram:
Message Broker SW component#
The Message Broker component is the central communication hub that implements the Publish-Subscribe pattern within the TimeDaemon. It enables decoupled communication between components by managing topics and distributing messages to interested subscribers.
The component maintains a registry of topics and their subscribers, delivering messages to all registered subscribers when a component publishes to a topic. This decoupling allows components to evolve independently without direct dependencies on each other.
Component requirements#
The Message Broker has the following requirements:
The Message Broker shall maintain a registry of topics and their subscribers
The Message Broker shall allow components to subscribe to topics of interest
The Message Broker shall distribute messages to all subscribers when a topic is published to
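A sketch of such a broker, reusing the illustrative PTPTimeInfo payload from the data-types section; the method names are assumptions, the real interface is in the class diagram below:

```cpp
#include <functional>
#include <mutex>
#include <string>
#include <unordered_map>
#include <vector>

// Sketch of the Publish-Subscribe registry described above.
class MessageBroker {
public:
    using Callback = std::function<void(const PTPTimeInfo&)>;

    void Subscribe(const std::string& topic, Callback cb) {
        std::lock_guard lock{mutex_};
        topics_[topic].push_back(std::move(cb));
    }

    // Note: callbacks run in the publisher's thread; the broker itself adds
    // no control-flow separation (see "Concurrency aspects" below).
    void Publish(const std::string& topic, const PTPTimeInfo& data) {
        std::vector<Callback> subscribers;
        {
            std::lock_guard lock{mutex_};
            subscribers = topics_[topic];
        }
        for (auto& cb : subscribers) cb(data);
    }

private:
    std::mutex mutex_;  // guards the registry only, not the callbacks
    std::unordered_map<std::string, std::vector<Callback>> topics_;
};
```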
Class view#
The Class Diagram is presented below:
Initialization flow#
During initialization, the Application component subscribes all machine objects (see BaseMachine) to the Message Broker topics of interest.
The initialization workflow is represented in the following sequence diagram:
Message flow#
The message flow through the Message Broker is represented in the following sequence diagram:
Concurrency aspects#
The Message Broker does not provide any synchronization between the publish calls and the callback invocations.
Moreover, the callbacks are invoked in the context of the thread that calls the publish method.
To separate the control flows, the ControlFlowDivider shall be used.
Scalability#
The Message Broker can be extended to support configuration-driven subscriptions, where topic relationships are defined in configuration files rather than hardcoded.
ControlFlowDivider SW component#
The ControlFlowDivider component is responsible for separating control (execution) flows within the TimeDaemon and providing the execution control flow for the data processing. It contains dedicated threads in which data is published to the Message Broker, ensuring that blocking operations in one component do not affect the execution of other components and that missing data does not affect the data analysis in the processing pipeline.
This component acts as a crucial intermediary that maintains the responsiveness of the system by decoupling the execution contexts of different operations, particularly between the PTP data retrieval and the time data processing pipelines.
Component requirements#
The ControlFlowDivider has the following requirements:
The ControlFlowDivider shall provide separate execution threads for different control flows
The ControlFlowDivider shall isolate components from execution time variations in other components
The ControlFlowDivider shall maintain consistent data publishing rates to the subscribers
The ControlFlowDivider shall push the last received data to the subscribers at a predefined rate if no new data arrives for some time, to avoid missing data in the processing pipeline
The ControlFlowDivider shall enable periodic processing of the pipeline through consistent event generation
The ControlFlowDivider shall buffer incoming data from fast producers
Class view#
The Class Diagram is presented below:
Initialization flow#
During initialization, the ControlFlowDivider performs the following steps:
Initialize internal data structures (queue, mutex, condition variable)
Create a worker thread to process data independently
Start the worker thread which enters a waiting state
The initialization workflow is represented in the following sequence diagram:
Message flow#
When the ControlFlowDivider receives new data from the PTP Machine via the Message Broker, it processes it through the following workflow:
The Message Broker executes the onNewData callback and provides the new data
The data is placed in a thread-safe queue, and the callback exits
The worker thread wakes up and retrieves the data from the queue
The worker thread publishes the retrieved data to the input_ptp_data topic
If there has been no data for some timeout, the worker shall publish empty data to the input_ptp_data topic.
This separation of control flows ensures that slow or blocking operations in the PTP stack communication do not affect the responsiveness of time data processing in the TimeDaemon.
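A sketch of that worker loop, with the timeout value taken from the control_flow_divider section of the configuration example below; the class shape and method names are assumptions:

```cpp
#include <chrono>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <stop_token>
#include <thread>

// Sketch of the decoupling worker; publishes a default-constructed ("empty")
// PTPTimeInfo when no data arrived within the timeout.
class ControlFlowDivider {
public:
    explicit ControlFlowDivider(MessageBroker& broker) : broker_{broker} {}

    // Runs in the MessageBroker publisher's thread: enqueue and return quickly.
    void OnNewData(const PTPTimeInfo& data) {
        { std::lock_guard lock{mutex_}; queue_.push(data); }
        cv_.notify_one();
    }

private:
    void WorkerLoop(std::stop_token st) {
        using namespace std::chrono_literals;
        while (!st.stop_requested()) {
            std::unique_lock lock{mutex_};
            // Wake on new data, or after the timeout to keep the pipeline ticking.
            bool got_data = cv_.wait_for(lock, 500ms, [this] { return !queue_.empty(); });
            PTPTimeInfo data{};  // default value stands for "empty" data
            if (got_data) { data = queue_.front(); queue_.pop(); }
            lock.unlock();
            broker_.Publish("input_ptp_data", data);
        }
    }

    MessageBroker& broker_;
    std::mutex mutex_;
    std::condition_variable cv_;
    std::queue<PTPTimeInfo> queue_;
    std::jthread worker_{[this](std::stop_token st) { WorkerLoop(st); }};
};
```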
The execution workflow is represented in the following sequence diagram:
PTP Machine SW component#
The PTP Machine component shall retrieve all needed information from the ptp stack (e.g., ptpd) and provide it to the Message Broker for routing.
All communication with the ptp stack might use devctl calls, which take some time; thus these calls shall be made in a dedicated thread.
Component requirements#
The PTP Machine has the following requirements:
The PTP Machine shall retrieve the latest time information from the PTP stack (e.g., ptpd)
The PTP Machine shall publish retrieved time information to the Message Broker using the defined topic
The PTP Machine shall format data according to the PTPTimeInfo structure required by downstream components
The PTP Machine shall retrieve time information at a consistent rate to maintain time synchronization
The PTP Machine shall maintain consistent publishing rates for time data even when experiencing delays in PTP stack communication
The PTP Machine shall support exchangeability with different PTP stack implementations
Class view#
The Class Diagram is presented below.
Since it wraps the communication with a particular ptp stack, the implementation should be easily exchangeable with another one in case of a stack change.
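One possible shape for this exchangeable wrapper, assuming a PtpEngine interface (the name appears in the test-environment section); the signatures are illustrative:

```cpp
#include <optional>

// Assumed interface for the ptp-stack wrapper; one implementation per stack.
class PtpEngine {
public:
    virtual ~PtpEngine() = default;
    virtual bool Initialize() = 0;                      // e.g. open the stack device
    virtual std::optional<PTPTimeInfo> ReadTime() = 0;  // potentially slow devctl call
};

// Example stack binding; swapping the stack means providing another PtpEngine.
class PtpdEngine final : public PtpEngine {
public:
    bool Initialize() override {
        // open the device configured as ptp_stack_parameters.device, e.g. /dev/ptp0
        return true;
    }
    std::optional<PTPTimeInfo> ReadTime() override {
        // issue the devctl-style request and translate the result to PTPTimeInfo
        return std::nullopt;  // stubbed out in this sketch
    }
};
```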
Component initialization#
During initialization the PTP Machine shall initialize the ptp stack to be able to communicate with it.
The initialization workflow is described below.
Publish new data#
After the PTP Machine collects new data from the ptp stack, the component shall publish it to the Message Broker as raw_ptp_data.
The publish workflow is described below.
Verification Machine SW component#
The Verification Machine component is responsible for validating and qualifying the time information received from the PTP Machine. It applies various validation rules to ensure the time data meets quality requirements before distribution to applications.
The component implements a pipeline pattern where each stage performs a specific validation and adds appropriate qualifiers to the time data. This modular design allows for easy extension with additional validation steps.
Component requirements#
The Verification Machine has the following requirements:
The Verification Machine shall validate and qualify time information received from the PTP Machine
The Verification Machine shall validate whether the time base is in the synchronized state
The Verification Machine shall validate whether the time base is in the timeout state
The Verification Machine shall validate timestamps for time jumps based on the local clock
The Verification Machine shall subscribe to the input_ptp_data topic via the Message Broker
The Verification Machine shall publish verified time data to the Message Broker using the verified_ptp_data topic
The Verification Machine shall support extensibility to add new validation stages to the pipeline
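A sketch of the pipeline pattern described above; the stage names mirror the "validation_stages" list in the configuration example, while the interfaces are assumptions:

```cpp
#include <memory>
#include <vector>

// Each stage inspects the data and updates its qualifiers in place.
class ValidationStage {
public:
    virtual ~ValidationStage() = default;
    virtual void Validate(PTPTimeInfo& data) = 0;
};

class SynchronizationStage final : public ValidationStage {
public:
    void Validate(PTPTimeInfo& data) override {
        // e.g. set data.qualifier to Synchronized or NotSynchronized
    }
};

class VerificationMachine {
public:
    explicit VerificationMachine(MessageBroker& broker) : broker_{broker} {}

    void AddStage(std::unique_ptr<ValidationStage> stage) {
        pipeline_.push_back(std::move(stage));  // extensibility: append new stages
    }

    // Subscribed to the input_ptp_data topic by the Application.
    void OnInputPtpData(PTPTimeInfo data) {
        for (auto& stage : pipeline_) stage->Validate(data);
        broker_.Publish("verified_ptp_data", data);
    }

private:
    MessageBroker& broker_;
    std::vector<std::unique_ptr<ValidationStage>> pipeline_;
};
```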
Class view#
The Class Diagram is presented below.
Component initialization#
During initialization, the Verification Machine performs the following steps:
Set up the validation pipeline by creating and connecting validation stages
The component shall be subscribed by the Application to the input_ptp_data topic of the MessageBroker
The initialization workflow is represented in the following sequence diagram:
Data verification workflow#
When the Verification Machine receives new PTP data, it processes it through the validation pipeline:
IPC Machine SW component#
The IPC Machine component shall get the verified_ptp_data from the Verification Machine and provide it to score::time::svt through shared memory or other IPC mechanisms.
The component provides two subcomponents, a publisher and a receiver, to be deployed on the TimeDaemon and Application sides respectively.
Component requirements#
The IPC Machine has the following requirements:
The IPC Machine shall provide verified time data to the score::time::svt component through shared memory or other IPC mechanisms
The IPC Machine shall create and initialize the IPC
The IPC Machine shall support multiple client applications accessing the same time data
The IPC Machine shall subscribe to the verified_ptp_data topic via the Message Broker
Class view#
The Class Diagram is presented below.
Component initialization#
Initialization is divided into two parts:
Initialization on the TimeDaemon side
Initialization on the Application side
Importantly, the shared memory shall be created by the TimeDaemon, which means the Application must wait until the shared memory has been created and only then open, map, and read it.
The main workflow is described below.
The component shall be subscribed by the Application during initialization to the verified_ptp_data updates from the Message Broker.
Publish new data#
When IPC Machine receives the new verified-ptp-data from Message Broker, it shall serialize data and store it to the shared memory.
Since there are different use cases for this data, such as:
Get the current Vehicle Time
Get data for diagnostics
all (or almost all) of the PTPTimeInfo data shall be shared across applications.
The publish workflow is described below.
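For illustration, a sketch of the publisher side, assuming POSIX shared memory and a seqlock-style sequence counter so readers can detect torn writes; the concept itself only mandates "shared memory or other IPC mechanisms":

```cpp
#include <atomic>
#include <cstdint>
#include <fcntl.h>
#include <new>
#include <sys/mman.h>
#include <unistd.h>

// Single-writer layout: the counter is odd while a write is in progress.
struct ShmLayout {
    std::atomic<std::uint32_t> sequence{0};
    PTPTimeInfo data{};
};

// TimeDaemon side: create, size and map the region (clients only open it).
ShmLayout* CreateSharedMemory(const char* name) {
    int fd = shm_open(name, O_CREAT | O_RDWR, 0644);
    if (fd < 0) return nullptr;
    if (ftruncate(fd, sizeof(ShmLayout)) != 0) { close(fd); return nullptr; }
    void* mem = mmap(nullptr, sizeof(ShmLayout), PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    close(fd);
    return mem == MAP_FAILED ? nullptr : new (mem) ShmLayout{};
}

// Called when verified_ptp_data arrives from the Message Broker.
// Simplified: a production seqlock needs care with fences and atomic copies.
void PublishToShm(ShmLayout& shm, const PTPTimeInfo& data) {
    shm.sequence.fetch_add(1, std::memory_order_acq_rel);  // odd: write begins
    shm.data = data;
    shm.sequence.fetch_add(1, std::memory_order_release);  // even: write done
}
```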
Receive data#
On the Application side, the receiver shall read the shared memory and provide the data to the caller.
The receive workflow is described below.
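The receiver-side counterpart to the publisher sketch above, under the same assumptions: retry until the sequence is even and unchanged across the copy.

```cpp
#include <atomic>
#include <cstdint>
#include <optional>

// Seqlock read protocol: a consistent snapshot has an even, stable sequence.
std::optional<PTPTimeInfo> ReadFromShm(const ShmLayout& shm) {
    for (int attempt = 0; attempt < 16; ++attempt) {
        std::uint32_t before = shm.sequence.load(std::memory_order_acquire);
        if (before % 2 != 0) continue;                  // write in progress
        PTPTimeInfo copy = shm.data;
        std::atomic_thread_fence(std::memory_order_acquire);
        if (shm.sequence.load(std::memory_order_relaxed) == before) return copy;
    }
    return std::nullopt;  // writer too busy; the caller may retry later
}
```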
score::time::SynchronizedVehicleTime SW Component#
score::time::svt is the interface through which Applications get access to the Vehicle Time.
Component requirements#
The score::time::svt has the following requirements:
The score::time::svt shall expose the vehicle time and its synchronization status to applications
The score::time::svt shall retrieve time data from the IPC Machine receiver component
The score::time::svt shall adjust the vehicle time with the local clock to provide accurate timestamps
The score::time::svt shall support fast, low-latency time access via the Now() method
Class view#
The Class Diagram is presented below.
Receive data#
To receive data, the Application simply calls score::time::svt::Now(), which returns the latest published Vehicle Time, already adjusted with the local clock.
To achieve this, score::time::svt runs a thread that polls IPCMachine::receiver for new data and puts the data into a process-internal shared buffer (memory), from which it is read on each score::time::svt::Now() call.
The main workflow is described below.
This design guarantees very low latency for the score::time::svt::Now() call, but requires additional effort for the thread, the memory buffer, synchronization, and so on.
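A sketch of this polling-thread design, reusing the shared-memory reader above; the poll period and the adjustment formula are illustrative assumptions:

```cpp
#include <chrono>
#include <mutex>
#include <stop_token>
#include <thread>

class SynchronizedVehicleTime {
public:
    explicit SynchronizedVehicleTime(const ShmLayout& shm) : shm_{shm} {}

    // Low-latency path: copies a small sample and adds the locally elapsed time.
    std::chrono::nanoseconds Now() const {
        std::lock_guard lock{mutex_};
        auto elapsed = std::chrono::steady_clock::now() - latest_.received_at;
        return latest_.vehicle_time
             + std::chrono::duration_cast<std::chrono::nanoseconds>(elapsed);
    }

private:
    void PollLoop(std::stop_token st) {
        while (!st.stop_requested()) {
            if (auto sample = ReadFromShm(shm_)) {   // IPCMachine::receiver side
                std::lock_guard lock{mutex_};
                latest_ = {sample->vehicle_time, std::chrono::steady_clock::now()};
            }
            std::this_thread::sleep_for(std::chrono::milliseconds{10});
        }
    }

    struct Sample {
        std::chrono::nanoseconds vehicle_time{};
        std::chrono::steady_clock::time_point received_at{};
    };

    const ShmLayout& shm_;
    mutable std::mutex mutex_;
    Sample latest_{};
    std::jthread poller_{[this](std::stop_token st) { PollLoop(st); }};
};
```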
Receive data (simplified)#
As an alternative design, the receiving concept could be simplified: score::time::svt::Now() could directly invoke the IPCMachine::receiver call, adjust the Vehicle Time, and return it to the Application.
The design is represented below.
In this case, there is no need for an additional thread, shared buffer, or synchronization, but the score::time::svt::Now() call will take longer. To decide which approach to use, additional tests shall be performed.
Deployment#
The implementation of score::time::svt::details::timed could be placed in parallel to other implementations, like the score::time::svt::details::mocked one, and could be selected via a Bazel select. This will also ease the integration process.
Logging configuration#
The daemon should have the following logging contexts:
| Component | App/Context ID | Comments |
|---|---|---|
| TimeDaemon | TDON | TimeDaemON |
| Application | TDAP | TimeDaemon APplication |
| MessageBroker | TDMB | TimeDaemon MessageBroker |
| ControlFlowDivider | TDCD | TimeDaemon ControlFlowDivider |
| PTPMachine | TDPM | TimeDaemon PTPMachine |
| VerificationMachine | TDVM | TimeDaemon VerificationMachine |
| IPCMachine::receiver | TDIR | TimeDaemon IPCMachine::Receiver |
| IPCMachine::publisher | TDIP | TimeDaemon IPCMachine::Publisher |
Variability#
Configuration files#
The TimeDaemon uses structured configuration files to enable customization of its runtime behavior. The following data can be configured:
Component-specific Configuration:
Each component can have dedicated configuration sections
Parameters such as update rates, timeouts, and thresholds can be specified
Topic Configuration:
Topics for the Message Broker can be defined in configuration
Publisher and subscriber relationships can be specified externally
Component roles (publisher/subscriber) can be assigned through configuration
File Format and Structure: The configuration files use JSON format for readability and easy parsing:
{
"message_broker": {
"topics": [
{
"name": "raw_ptp_data",
"publishers": ["PtpMachine"],
"subscribers": ["ControlFlowDivider"]
},
{
"name": "input_ptp_data",
"publishers": ["ControlFlowDivider"],
"subscribers": ["VerificationMachine"]
},
{
"name": "verified_ptp_data",
"publishers": ["VerificationMachine"],
"subscribers": ["IPCMachine"]
}
]
},
"ptp_machine": {
"update_interval_ms": 50,
"ptp_stack_type": "ptp",
"ptp_stack_parameters": {
"device": "/dev/ptp0"
}
},
"control_flow_divider": {
"timeout_ms": 500,
"publishing_rate_ms": 100
},
"verification_machine": {
"validation_stages": ["synchronization", "timejumps", "timeout"],
"timejumps_parameters": {
"max_backward_jump_ns": 100000
},
"timeout_parameters": {
"threshold_ns": 100000
}
},
"ipc_machine": {
"shared_memory_name": "vehicle_time",
"shared_memory_size": 4096
}
}
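For illustration, the ptp_machine section above could be parsed as follows; this sketch assumes nlohmann::json, which the concept does not prescribe:

```cpp
#include <fstream>
#include <string>

#include <nlohmann/json.hpp>  // assumption: any JSON library would do

struct PtpMachineConfig {
    int update_interval_ms{50};
    std::string ptp_stack_type{"ptp"};
    std::string device{"/dev/ptp0"};
};

// Reads the "ptp_machine" section of the configuration file shown above.
PtpMachineConfig LoadPtpMachineConfig(const std::string& path) {
    std::ifstream file{path};
    const nlohmann::json cfg = nlohmann::json::parse(file);
    const auto& ptp = cfg.at("ptp_machine");
    return {
        ptp.at("update_interval_ms").get<int>(),
        ptp.at("ptp_stack_type").get<std::string>(),
        ptp.at("ptp_stack_parameters").at("device").get<std::string>(),
    };
}
```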
Scalability#
The TimeDaemon’s architecture supports scalability in the following ways:
Component Extensibility:#
New machine components can be added by implementing the BaseMachine interface
Additional validation stages can be plugged into the VerificationMachine pipeline
Alternative IPC mechanisms or ptp stack communication can be supported by providing alternative IPCMachine or PTPMachine implementations
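As an example of the first item, a hypothetical StatisticsMachine could be added without touching existing components, reusing the BaseMachine stand-in from the Application sketch:

```cpp
#include <cstdint>

// Hypothetical extension machine; the name and purpose are illustrative only.
class StatisticsMachine final : public BaseMachine {
public:
    explicit StatisticsMachine(MessageBroker& broker) : broker_{broker} {}

    void Initialize() override {
        // The Application wires this subscription up during its init stage.
        broker_.Subscribe("verified_ptp_data",
                          [this](const PTPTimeInfo& data) { OnData(data); });
    }

private:
    void OnData(const PTPTimeInfo& /*data*/) { ++samples_; /* aggregate metrics */ }

    MessageBroker& broker_;
    std::uint64_t samples_{0};
};
```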
Example based on Qualified Vehicle Time integration#
The Qualified Vehicle Time integration extends the standard TimeDaemon architecture with:
A Qualified Vehicle Time component that performs additional time qualification and provides new topics: qualified_ptp_data and diagnostic_sct_data
A dedicated IPC channel for SCT diagnostic data
A score::time::qvt library for diagnostic applications
The Qualified Vehicle Time component is integrated into the existing processing pipeline:
It subscribes to the verified_ptp_data topic from the VerificationMachine
It processes and qualifies the time data with additional QVT-specific checks
It publishes two types of data:
    Qualified time data to the standard IPC Machine, towards clients interested in the qualified Vehicle Time
    Diagnostic data to a dedicated QVT IPC channel, towards Diagnostic and Central Validator notifications
The extended data flow with Qualified Vehicle Time integration is shown below:
Example based on Absolute Time integration#
Another example of the TimeDaemon extension is the integration of an Absolute Time source, such as GNSS, to provide absolute time information alongside the relative Vehicle Time from PTP.
The Absolute Time integration extends the standard TimeDaemon architecture with:
An SDatMachine component that retrieves absolute time from GNSS via SOME/IP or other sources and provides a new topic: absolute_time_data
A dedicated verification stage in the VerificationMachine for Absolute Time qualification
A dedicated IPC channel for Absolute Time data
A score::time::abs library for applications requiring absolute time on the client side
The integration is presented below.
The control and data flow with Absolute Time integration is shown below.
Using in test environment#
Using in ITF#
Normal behavior is expected.
Using in Component Tests on the host#
Overview#
The TimeDaemon can be utilized in the Component Tests environment to enable comprehensive testing of time-dependent components without relying on physical PTP hardware.
This approach allows test cases to manipulate time values and synchronization states to validate application behavior under various timing conditions.
For the Component Tests, the PtpMachine::PtpEngine library is the only platform-dependent one.
Thus the TimeDaemon components remain largely unchanged except for the PTPMachine component, which is replaced with a test-specific implementation that can be controlled via test cases.
This component shall:
simulate “normal” PTPMachine behavior
have a communication channel to the test case and react to its manipulations
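A sketch of such a test double, implementing the assumed PtpEngine interface from the PTP Machine section; the manipulation channel here is a plain in-process setter, whereas a real Component Test setup might use a socket or pipe:

```cpp
#include <mutex>
#include <optional>

// Test-specific engine: no hardware, fully controlled by the test case.
class TestPtpEngine final : public PtpEngine {
public:
    bool Initialize() override { return true; }  // nothing to open

    std::optional<PTPTimeInfo> ReadTime() override {
        std::lock_guard lock{mutex_};
        return injected_;  // whatever the test case set last, or nothing
    }

    // Manipulation hook: inject time values and synchronization states.
    void Inject(const PTPTimeInfo& fake) {
        std::lock_guard lock{mutex_};
        injected_ = fake;
    }

private:
    std::mutex mutex_;
    std::optional<PTPTimeInfo> injected_;
};
```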
Next steps: plugin system#
The TimeDaemon could be extended with a flexible plugin system that enables dynamic component loading, configuration, subscription and extension without requiring code changes or recompilation.
Plugin Architecture#
The plugin system is structured around the following key elements:
Component Registry: A central registry that maintains information about available component implementations
Component Factory: Creates component instances based on configuration
Plugin Manager: Loads and initializes plugins at runtime
Configuration-Driven Assembly: Components and their relationships are defined in configuration files
Component Creation Process#
During TimeDaemon initialization:
The Plugin Manager loads all specified plugins from configured directories or Bazel targets
Each plugin registers its component factories with the registry
The Application reads the component configuration
For each component in the configuration:
    The appropriate factory is retrieved from the registry
    The component is created with its specified parameters
Components are connected based on the MessageBroker topic configuration
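A sketch of the registry/factory pair under these assumptions; the plugin loading itself (dlopen or Bazel-linked registration) is left out:

```cpp
#include <functional>
#include <memory>
#include <string>
#include <unordered_map>

class ComponentRegistry {
public:
    using Factory = std::function<std::unique_ptr<BaseMachine>(MessageBroker&)>;

    // Called by each plugin when it is loaded.
    void Register(const std::string& type, Factory factory) {
        factories_[type] = std::move(factory);
    }

    // Called by the Application for each component named in the configuration.
    std::unique_ptr<BaseMachine> Create(const std::string& type,
                                        MessageBroker& broker) const {
        return factories_.at(type)(broker);  // throws if the type is unknown
    }

private:
    std::unordered_map<std::string, Factory> factories_;
};

// A plugin might register the hypothetical StatisticsMachine like this:
// registry.Register("StatisticsMachine",
//     [](MessageBroker& b) { return std::make_unique<StatisticsMachine>(b); });
```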
ASIL-B qualification#
Clean separation of concerns allows score::time::svt, as well as the TimeDaemon, to be qualified according to ASIL-B requirements following the ISO 26262 standard.