Concept for TimeSlave#
TimeSlave concept#
Use Cases#
TimeSlave is a standalone gPTP (IEEE 802.1AS) slave endpoint process that implements the low-level time synchronization protocol for the Eclipse SCORE time system. It is deployed as a separate process from the TimeDaemon to isolate real-time network I/O from the higher-level time validation and distribution logic.
More precisely, we can specify the following use cases for the TimeSlave:

- Receiving gPTP Sync/FollowUp messages from a Time Master on the Ethernet network
- Measuring peer delay via the IEEE 802.1AS PDelayReq/PDelayResp exchange
- Optionally adjusting the PTP Hardware Clock (PHC) on the NIC
- Publishing the resulting `GptpIpcData` to shared memory for consumption by the TimeDaemon
The overall architectural diagram is shown below.
Components decomposition#
The design consists of several SW components:
Class view#
The main classes and components are presented in the following diagram:
Data and control flow#
The data and control flows are presented in the following diagram:
In this view you can see several “worker” scopes:

- RxThread scope
- PdelayThread scope
- Main thread (periodic publish) scope

Each control flow is implemented in a dedicated thread and is independent of the others.
Control flows#
RxThread scope#
This control flow is responsible for:

- receive raw gPTP Ethernet frames with hardware timestamps from the NIC via raw sockets
- decode and parse the PTP messages (Sync, FollowUp, PdelayResp, PdelayRespFollowUp)
- correlate Sync/FollowUp pairs and compute clock offset and neighborRateRatio
- update the shared `PtpTimeInfo` snapshot under mutex protection
PdelayThread scope#
This control flow is responsible for:

- periodically transmit PDelayReq frames and capture hardware transmit timestamps
- coordinate with the RxThread to receive PDelayResp and PDelayRespFollowUp messages
- compute the peer delay using the IEEE 802.1AS formula: `path_delay = ((t2 - t1) + (t4 - t3c)) / 2`
Main thread (periodic publish) scope#
This control flow is responsible for:

- periodically call `GptpEngine::FinalizeSnapshot()` to check timeout and commit the pending snapshot
- call `GptpEngine::ReadPTPSnapshot(data)` to copy the latest `GptpIpcData` into a local variable
- publish to shared memory via `GptpIpcPublisher::Publish(data)`
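A minimal sketch of this loop is shown below, assuming the `GptpEngine`, `GptpIpcPublisher`, and `GptpIpcData` types described in this document; the free-function form and its name are illustrative, and the real loop lives in the TimeSlave Application's `Run()` method.

```cpp
#include <chrono>
#include <stop_token>
#include <thread>

// Illustrative sketch of the periodic publish loop (not the actual Run() code).
void PublishLoop(GptpEngine& engine, GptpIpcPublisher& publisher, std::stop_token stop) {
  while (!stop.stop_requested()) {
    engine.FinalizeSnapshot();            // check timeout, commit pending -> current snapshot
    GptpIpcData data{};
    if (engine.ReadPTPSnapshot(data)) {   // copy the latest committed snapshot
      publisher.Publish(data);            // seqlock write into shared memory
    }
    std::this_thread::sleep_for(std::chrono::milliseconds(50));  // 50 ms publish interval
  }
}
```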
Data types or events#
There are several data types that the components use to communicate with each other:
PTPMessage#
PTPMessage is a union-based container for decoded gPTP messages including the hardware receive timestamp. It is produced by MessageParser and consumed by SyncStateMachine and PeerDelayMeasurer.
SyncResult#
SyncResult is produced by SyncStateMachine::OnFollowUp() and contains the computed master timestamp, clock offset, Sync/FollowUp data, and time jump flags (forward/backward).
PDelayResult#
PDelayResult is produced by PeerDelayMeasurer and contains the computed path delay in nanoseconds and a validity flag.
PtpTimeInfo#
PtpTimeInfo is the TimeDaemon-internal aggregated snapshot. It is not the shared memory type; it is produced by ShmPTPEngine::ReadPTPSnapshot() by field-mapping from GptpIpcData into the format expected by the TimeDaemon pipeline.
SW Components decomposition#
TimeSlave Application SW component#
The TimeSlave Application component is the main entry point for the TimeSlave process. It extends score::mw::lifecycle::Application and is responsible for orchestrating the overall lifecycle of the GptpEngine and the IPC publisher.
Component requirements#
The TimeSlave Application has the following requirements:
- The TimeSlave Application shall implement the `Initialize()` method to create the `GptpEngine` with configured options, initialize the `GptpIpcPublisher` (creates the shared memory segment), and create the `HighPrecisionLocalSteadyClock` for the engine
- The TimeSlave Application shall implement the `Run()` method to enter a periodic publish loop (50 ms interval) and monitor the `stop_token` for graceful shutdown
- On each loop iteration, the TimeSlave Application shall call `GptpEngine::FinalizeSnapshot()`, then `GptpEngine::ReadPTPSnapshot(data)`, and publish the resulting `GptpIpcData` via `GptpIpcPublisher::Publish(data)`
- The TimeSlave Application shall call `GptpEngine::Deinitialize()` and `GptpIpcPublisher::Destroy()` after the `stop_token` is set
GptpEngine SW component#
The GptpEngine component is the core gPTP protocol engine. It manages two background threads (RxThread and PdelayThread) for network I/O and peer delay measurement, and exposes a thread-safe ReadPTPSnapshot() method for the main thread to read the latest time measurement.
Component requirements#
The GptpEngine has the following requirements:
- The `GptpEngine` shall manage an RxThread for receiving and parsing gPTP frames from raw Ethernet sockets
- The `GptpEngine` shall manage a PdelayThread for periodic peer delay measurement
- The `GptpEngine` shall provide a `FinalizeSnapshot()` method that checks for sync timeout, applies status flags, and commits the pending snapshot to the current snapshot; this must be called before `ReadPTPSnapshot()`
- The `GptpEngine` shall provide a `ReadPTPSnapshot(GptpIpcData&)` method that copies the latest committed snapshot into the caller’s buffer and returns false only if the engine is not initialized
- The `GptpEngine` shall support configurable parameters via `GptpEngineOptions` (interface name, PDelay interval, PDelay warmup, sync timeout, time-jump threshold, PHC configuration)
- The `GptpEngine` shall support exchangeability of the raw socket implementation for different platforms (Linux, QNX)
Class view#
The Class Diagram is presented below:
Threading model#
The GptpEngine operates with two background threads. The threading model is represented below:
Concurrency aspects#
The GptpEngine uses the following synchronization mechanisms:
- A `std::mutex` protects the `pending_snapshot_` and `current_snapshot_` fields (both `GptpIpcData`): the RxThread writes `pending_snapshot_`; the main thread calls `FinalizeSnapshot()` (commits pending to current) and `ReadPTPSnapshot()` (reads current)
- The `PeerDelayMeasurer` uses its own `std::mutex` to synchronize between the PdelayThread (`SendRequest()`) and the RxThread (`OnResponse()`, `OnResponseFollowUp()`)
- The `SyncStateMachine` uses `std::atomic<bool>` for the timeout flag, which is read from the main thread and written from the RxThread
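A condensed sketch of the snapshot hand-over between the RxThread and the main thread is shown below, assuming `GptpIpcData` as the snapshot type; the class name and the RxThread entry point are illustrative, not the actual `GptpEngine` layout.

```cpp
#include <mutex>

struct GptpIpcData { /* snapshot fields omitted for the sketch */ };

// Illustrative double-buffered snapshot guarded by a single mutex.
class SnapshotBuffer {
 public:
  void UpdatePending(const GptpIpcData& fresh) {     // called from the RxThread
    std::lock_guard<std::mutex> lock(mutex_);
    pending_snapshot_ = fresh;
  }
  void FinalizeSnapshot() {                          // called from the main thread
    std::lock_guard<std::mutex> lock(mutex_);
    // timeout check and status flags would be applied here
    current_snapshot_ = pending_snapshot_;
  }
  void ReadPTPSnapshot(GptpIpcData& out) const {     // called from the main thread
    std::lock_guard<std::mutex> lock(mutex_);
    out = current_snapshot_;
  }
 private:
  mutable std::mutex mutex_;
  GptpIpcData pending_snapshot_{};
  GptpIpcData current_snapshot_{};
};
```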
Hardware timestamping fallback#
During Initialize(), GptpEngine calls IRawSocket::EnableHwTimestamping() to request NIC-level receive timestamps (SO_TIMESTAMPING on Linux). If the NIC does not support hardware timestamping, the call returns false and a warning is logged:
GptpEngine: HW timestamping not available on <iface>, falling back to SW timestamps
The engine continues to run normally. The difference between the two modes:
| Field | HW timestamping available | SW timestamping fallback |
|---|---|---|
| | NIC hardware timestamp (nanosecond precision, captured at wire level) | Software timestamp (captured at socket receive, higher jitter) |
| | Derived from NIC hardware timestamp | Derived from software timestamp |
| | Always | Always |
| Clock offset accuracy | High (sub-microsecond typical) | Reduced (jitter depends on OS scheduling latency) |
The fallback does not affect protocol correctness — Sync/FollowUp correlation and peer delay measurement continue to work — but the computed clock offset will be less accurate due to higher receive timestamp jitter.
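The actual `IRawSocket::EnableHwTimestamping()` implementation is not reproduced here; the following Linux-only sketch shows what such a request typically looks like (`SIOCSHWTSTAMP` plus `SO_TIMESTAMPING`). The helper name and the PTP-event filter choice are assumptions.

```cpp
#include <cstring>
#include <linux/net_tstamp.h>
#include <linux/sockios.h>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>

// Illustrative sketch: request HW RX timestamps, return false to trigger SW fallback.
bool EnableHwRxTimestamping(int fd, const char* iface) {
  hwtstamp_config hwcfg{};
  hwcfg.tx_type = HWTSTAMP_TX_ON;                    // also timestamp transmitted frames
  hwcfg.rx_filter = HWTSTAMP_FILTER_PTP_V2_EVENT;    // timestamp PTP event messages

  ifreq ifr{};
  std::strncpy(ifr.ifr_name, iface, IFNAMSIZ - 1);
  ifr.ifr_data = reinterpret_cast<char*>(&hwcfg);
  if (ioctl(fd, SIOCSHWTSTAMP, &ifr) < 0) {
    return false;                                    // NIC/driver cannot do HW timestamping
  }

  // Deliver raw NIC timestamps with every received packet (via SCM_TIMESTAMPING).
  const int flags = SOF_TIMESTAMPING_RX_HARDWARE | SOF_TIMESTAMPING_RAW_HARDWARE;
  return setsockopt(fd, SOL_SOCKET, SO_TIMESTAMPING, &flags, sizeof(flags)) == 0;
}
```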
FrameCodec SW component#
The FrameCodec component handles raw Ethernet frame encoding and decoding for gPTP communication.
Component requirements#
The FrameCodec has the following requirements:
- The `FrameCodec` shall parse incoming Ethernet frames, extracting source/destination MAC addresses, handling 802.1Q VLAN tags, and validating the EtherType (0x88F7)
- The `FrameCodec` shall construct outgoing Ethernet headers for PDelayReq frames using the standard PTP multicast destination MAC (01:80:C2:00:00:0E)
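For illustration, a sketch of how the outgoing header could be assembled, using the multicast MAC and EtherType from the requirements above; the function name and signature are assumptions, not the actual `FrameCodec` API.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <cstring>

constexpr std::array<std::uint8_t, 6> kPtpMulticastMac{0x01, 0x80, 0xC2, 0x00, 0x00, 0x0E};
constexpr std::uint16_t kPtpEtherType = 0x88F7;

// Writes the 14-byte Ethernet header for a PDelayReq frame and returns the
// offset at which the PTP payload starts.
std::size_t WriteEthernetHeader(std::uint8_t* frame, const std::uint8_t src_mac[6]) {
  std::memcpy(frame, kPtpMulticastMac.data(), 6);              // destination MAC
  std::memcpy(frame + 6, src_mac, 6);                          // source MAC
  frame[12] = static_cast<std::uint8_t>(kPtpEtherType >> 8);   // EtherType, big-endian
  frame[13] = static_cast<std::uint8_t>(kPtpEtherType & 0xFF);
  return 14;
}
```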
MessageParser SW component#
The MessageParser component parses the PTP wire format (IEEE 1588-v2) from raw payload bytes.
Component requirements#
The MessageParser has the following requirements:
- The `MessageParser` shall validate the PTP header (version, domain, message length)
- The `MessageParser` shall decode all relevant message types: Sync, FollowUp, PdelayReq, PdelayResp, PdelayRespFollowUp
- The `MessageParser` shall use packed wire structures (`__attribute__((packed))`) for direct memory mapping of PTP messages
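As an illustration of the packed-wire-structure approach, the following is a sketch of the IEEE 1588v2 common header (34 bytes on the wire); field names are chosen for readability and may differ from the actual `MessageParser` structures. Multi-byte fields are big-endian on the wire and must be byte-swapped after mapping.

```cpp
#include <cstdint>

struct __attribute__((packed)) PtpCommonHeader {
  std::uint8_t  message_type;          // transportSpecific (high nibble) | messageType (low nibble)
  std::uint8_t  version_ptp;           // reserved (high nibble) | versionPTP (low nibble)
  std::uint16_t message_length;        // big-endian
  std::uint8_t  domain_number;
  std::uint8_t  reserved1;
  std::uint16_t flag_field;
  std::int64_t  correction_field;      // nanoseconds * 2^16
  std::uint32_t reserved2;
  std::uint8_t  source_port_identity[10];
  std::uint16_t sequence_id;           // big-endian
  std::uint8_t  control_field;
  std::int8_t   log_message_interval;
};
static_assert(sizeof(PtpCommonHeader) == 34, "PTP common header is 34 bytes on the wire");
```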
SyncStateMachine SW component#
The SyncStateMachine component implements the two-step Sync/FollowUp correlation logic. It correlates incoming Sync and FollowUp messages by sequence ID, computes the clock offset and neighbor rate ratio, and detects time jumps.
Component requirements#
The SyncStateMachine has the following requirements:
- The `SyncStateMachine` shall store Sync messages and correlate them with subsequent FollowUp messages by sequence ID
- The `SyncStateMachine` shall compute the clock offset: `offset_ns = master_time - slave_receive_time - path_delay`
- The `SyncStateMachine` shall compute the `neighborRateRatio` from successive Sync intervals (master vs. slave clock progression)
- The `SyncStateMachine` shall detect forward and backward time jumps against configurable thresholds
- The `SyncStateMachine` shall provide thread-safe timeout detection via `std::atomic<bool>`, set when no Sync is received within the configured timeout
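A minimal sketch of the two computations named above; the structure and function names are illustrative, not the actual `SyncStateMachine` members.

```cpp
#include <cstdint>

// One correlated Sync/FollowUp pair (illustrative types).
struct SyncSample {
  std::int64_t master_time_ns;         // precise origin timestamp from the FollowUp
  std::int64_t slave_receive_time_ns;  // HW receive timestamp of the matching Sync
};

// offset_ns = master_time - slave_receive_time - path_delay
std::int64_t ComputeOffsetNs(const SyncSample& s, std::int64_t path_delay_ns) {
  return s.master_time_ns - s.slave_receive_time_ns - path_delay_ns;
}

// neighborRateRatio = (master clock progression) / (slave clock progression)
// over two successive Sync samples.
double ComputeNeighborRateRatio(const SyncSample& prev, const SyncSample& curr) {
  const double master_delta = static_cast<double>(curr.master_time_ns - prev.master_time_ns);
  const double slave_delta = static_cast<double>(curr.slave_receive_time_ns - prev.slave_receive_time_ns);
  return slave_delta != 0.0 ? master_delta / slave_delta : 1.0;
}
```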
PeerDelayMeasurer SW component#
The PeerDelayMeasurer component implements the IEEE 802.1AS two-step peer delay measurement protocol. It manages the four timestamps (t1, t2, t3c, t4) across two threads.
Timestamp definitions#
| Symbol | Message | Captured by | Meaning |
|---|---|---|---|
| `t1` | PDelayReq (TX) | Slave (PdelayThread) | HW transmit timestamp of the PDelayReq frame leaving the slave NIC |
| `t2` | PDelayResp (RX) | Master → carried in PDelayResp body | HW receive timestamp of the PDelayReq frame arriving at the master NIC |
| `t3c` | PDelayRespFollowUp | Master → carried in PDelayRespFollowUp body | HW transmit timestamp of the PDelayResp frame leaving the master NIC (“corrected” because it includes the master’s turnaround correction) |
| `t4` | PDelayResp (RX) | Slave (RxThread) | HW receive timestamp of the PDelayResp frame arriving at the slave NIC |
The peer delay formula is: `path_delay = ((t2 - t1) + (t4 - t3c)) / 2`

- `(t2 - t1)` = propagation time from slave → master
- `(t4 - t3c)` = propagation time from master → slave
- The average of the two gives the one-way link delay
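As a purely illustrative numeric example (assumed values, not measured data): with `t1 = 100 000 ns`, `t2 = 100 600 ns`, `t3c = 150 000 ns`, and `t4 = 150 700 ns`, the formula gives `path_delay = ((100 600 - 100 000) + (150 700 - 150 000)) / 2 = (600 + 700) / 2 = 650 ns`. Any constant offset between the master and slave clocks cancels out, because it enters `(t2 - t1)` and `(t4 - t3c)` with opposite signs.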
Component requirements#
The PeerDelayMeasurer has the following requirements:
- The `PeerDelayMeasurer` shall transmit PDelayReq frames and capture the hardware transmit timestamp (`t1`)
- The `PeerDelayMeasurer` shall receive PDelayResp (providing `t2`, `t4`) and PDelayRespFollowUp (providing `t3c`) messages
- The `PeerDelayMeasurer` shall compute the peer delay using the IEEE 802.1AS formula: `path_delay = ((t2 - t1) + (t4 - t3c)) / 2`
- The `PeerDelayMeasurer` shall discard PDelayResp and PDelayRespFollowUp messages whose sequence ID does not match the most recently transmitted PDelayReq
- The `PeerDelayMeasurer` shall suppress the path-delay result when more than one PDelayResp is received for a single PDelayReq (detection of non-time-aware bridges per IEEE 802.1AS)
- The `PeerDelayMeasurer` shall provide thread-safe access to the `PDelayResult` via a mutex, as `SendRequest()` runs on the PdelayThread while response handlers are called from the RxThread
PhcAdjuster SW component#
The PhcAdjuster component synchronizes the PTP Hardware Clock (PHC) on the NIC. It applies step corrections for large offsets and frequency slew for smooth convergence of small offsets.
Component requirements#
The PhcAdjuster has the following requirements:
- The `PhcAdjuster` shall apply an immediate time step correction for offsets exceeding `step_threshold_ns`
- The `PhcAdjuster` shall apply frequency slew (in ppb) for offsets below the step threshold
- The `PhcAdjuster` shall support platform-specific implementations: `clock_adjtime()` on Linux, EMAC PTP ioctls on QNX
- The `PhcAdjuster` shall be configurable via `PhcConfig` (device path, step threshold, enable/disable flag)
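A Linux-only sketch of the two correction modes follows, assuming the PHC has already been opened and converted to a `clockid_t`; the function names and normalization details are illustrative, not the `PhcAdjuster` API.

```cpp
#include <cstdint>
#include <sys/timex.h>

// Step correction: jump the PHC by offset_ns in one call.
bool StepPhc(clockid_t phc, std::int64_t offset_ns) {
  timex tx{};
  tx.modes = ADJ_SETOFFSET | ADJ_NANO;               // offset given in nanoseconds
  tx.time.tv_sec = offset_ns / 1'000'000'000;
  tx.time.tv_usec = offset_ns % 1'000'000'000;
  if (tx.time.tv_usec < 0) {                         // kernel requires tv_usec in [0, 1e9)
    tx.time.tv_sec -= 1;
    tx.time.tv_usec += 1'000'000'000;
  }
  return clock_adjtime(phc, &tx) >= 0;
}

// Frequency slew: adjust the PHC rate by ppb (parts per billion).
bool SlewPhc(clockid_t phc, std::int64_t ppb) {
  timex tx{};
  tx.modes = ADJ_FREQUENCY;                          // freq is in scaled ppm (ppm * 2^16)
  tx.freq = static_cast<long>((ppb << 16) / 1000);
  return clock_adjtime(phc, &tx) >= 0;
}
```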
libTSClient SW component#
The libTSClient component is the shared memory IPC library that connects the TimeSlave process to the TimeDaemon process. It provides a lock-free, single-writer/multi-reader communication channel using a seqlock protocol over POSIX shared memory.
The component provides two sub-components, a publisher and a receiver, which are deployed on the TimeSlave and TimeDaemon sides respectively.
Component requirements#
The libTSClient has the following requirements:
- The `libTSClient` shall define a shared memory layout (`GptpIpcRegion`) with a magic number (`0x47505450` = ‘GPTP’) for validation, an atomic seqlock counter (`seq`), a confirmation counter (`seq_confirm`), and a `GptpIpcData` data payload
- The `libTSClient` shall align the shared memory region to 64 bytes (cache line size) to prevent false sharing
- The `libTSClient` shall provide a `GptpIpcPublisher` component (in `score::ts::details`) that creates and manages the POSIX shared memory segment and writes `GptpIpcData` using the seqlock protocol
- The `libTSClient` shall provide a `GptpIpcReceiver` component (in `score::ts::details`) that opens the shared memory segment read-only and reads `GptpIpcData` with up to 20 seqlock retries
- The `libTSClient` shall use the POSIX shared memory name `/gptp_ptp_info` by default
Class view#
The Class Diagram is presented below:
Publish new data#
When the TimeSlave Application has a new `GptpIpcData` snapshot, it publishes it to shared memory via the seqlock protocol:

1. Increment `seq` (becomes odd — signals write in progress); a release fence is applied
2. `memcpy` the `GptpIpcData`
3. Store `seq_confirm = seq + 1` and increment `seq` (both become even — signals write complete)
Receive data#
On the TimeDaemon side, the receiver reads from the shared memory using the seqlock protocol with bounded retry:

1. Read `seq1` with acquire ordering (must be even, otherwise retry — write in progress)
2. `memcpy` the `GptpIpcData`
3. Apply an acquire-release fence; read `seq_confirm` as `seq2` and re-read `seq` as `seq3`
4. If `seq1 == seq2 == seq3`, the read is consistent; otherwise retry — torn read detected
5. Return `std::optional<GptpIpcData>` (empty if all 20 retries exhausted)
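A combined sketch of both sides of the protocol is given below, assuming the region layout from the `libTSClient` requirements; member names and memory-ordering choices are illustrative, not the actual publisher/receiver code.

```cpp
#include <atomic>
#include <cstdint>
#include <cstring>
#include <optional>

struct GptpIpcData { /* time snapshot fields omitted for the sketch */ };

struct alignas(64) GptpIpcRegion {
  std::uint32_t magic;                       // 0x47505450 ('GPTP')
  std::atomic<std::uint32_t> seq;            // odd while a write is in progress
  std::atomic<std::uint32_t> seq_confirm;    // sequence value of the last completed write
  GptpIpcData data;
};

// Writer side (TimeSlave, single writer).
void Publish(GptpIpcRegion& r, const GptpIpcData& d) {
  const std::uint32_t s = r.seq.load(std::memory_order_relaxed) + 1;  // becomes odd
  r.seq.store(s, std::memory_order_release);
  std::atomic_thread_fence(std::memory_order_release);
  std::memcpy(&r.data, &d, sizeof(d));
  r.seq_confirm.store(s + 1, std::memory_order_release);
  r.seq.store(s + 1, std::memory_order_release);                      // even again
}

// Reader side (TimeDaemon), bounded retry.
std::optional<GptpIpcData> Receive(const GptpIpcRegion& r) {
  for (int attempt = 0; attempt < 20; ++attempt) {
    const std::uint32_t seq1 = r.seq.load(std::memory_order_acquire);
    if (seq1 & 1U) continue;                         // write in progress, retry
    GptpIpcData copy;
    std::memcpy(&copy, &r.data, sizeof(copy));
    std::atomic_thread_fence(std::memory_order_acq_rel);
    const std::uint32_t seq2 = r.seq_confirm.load(std::memory_order_acquire);
    const std::uint32_t seq3 = r.seq.load(std::memory_order_acquire);
    if (seq1 == seq2 && seq2 == seq3) return copy;   // consistent snapshot
  }
  return std::nullopt;                               // torn read on every retry
}
```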
The seqlock protocol workflow is presented in the following sequence diagram:
Platform support#
TimeSlave supports two target platforms with platform-specific implementations selected at compile time via Bazel select():
| Component | Linux | QNX |
|---|---|---|
| Raw Socket | | BPF ( |
| Network Identity | | |
| PHC Adjuster | `clock_adjtime()` | EMAC PTP ioctls |
| HighPrecisionLocalSteadyClock | | QNX |
The IRawSocket and INetworkIdentity interfaces provide the abstraction boundary. Platform-specific source files are organized under score/TimeSlave/code/gptp/platform/linux/ and score/TimeSlave/code/gptp/platform/qnx/.
Instrumentation#
ProbeManager#
The ProbeManager is a singleton that traces probe events at key processing points in the gPTP engine. It emits a LogDebug entry on every Trace() call and forwards the event to a linked Recorder (if set and enabled). Probing is controlled at runtime via SetEnabled(); the GPTP_PROBE() macro provides zero overhead when disabled.
Supported probe points (ProbePoint enum):
| Value | Enumerator | Trigger |
|---|---|---|
| 0 | | Raw Ethernet frame received from socket (RxThread) |
| 1 | | Sync message successfully decoded by `MessageParser` |
| 2 | | FollowUp received; |
| 3 | | Final clock offset value available after Sync/FollowUp correlation |
| 4 | | PDelayReq frame transmitted by `PeerDelayMeasurer` |
| 5 | | Peer delay computation finished (all four timestamps collected) |
| 6 | | |
When a probe event is forwarded to the Recorder, it is written with RecordEvent::kProbe and the ProbePoint value stored in the status_flags field of the CSV row.
Recorder#
The Recorder is a thread-safe CSV file writer. When enabled, it appends one row per event to the configured file. The file is opened in append mode (`ios::app`); a CSV header is written only if the file is newly created (size == 0).
Status model: the Recorder starts in the state determined by Config.enabled. If a write error occurs (file_.good() fails after a flush), enabled_ is atomically set to false and all subsequent Record() calls become no-ops. The file is never re-opened after an error.
Configuration (Recorder::Config):
| Parameter | Type | Description |
|---|---|---|
| | bool | Enable or disable recording; default: |
| | string | Output CSV file path; default: |
| | int64_t | Reserved for |
| | uint32_t | Number of rows between explicit |
CSV output format:
mono_ns,event,offset_ns,pdelay_ns,seq_id,status_flags
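An illustrative example row (values are made up for illustration, not taken from a real run): `1234567890123,0,1520,640,42,0`, i.e. an event with value 0 in the `event` column, a 1520 ns clock offset, a 640 ns peer delay, sequence ID 42, and no status flags set.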
Supported RecordEvent values written to the event column:
| Value | Enumerator | Description |
|---|---|---|
| 0 | | A Sync message was received and processed |
| 1 | | A full peer delay measurement cycle completed |
| 2 | | A forward or backward time jump was detected |
| 3 | | Clock offset exceeded |
| 4 | | Forwarded from `ProbeManager` |
Logging configuration#
The TimeSlave and its TimeDaemon-side adapter use the following logging contexts:
| Component | Context ID | Comments |
|---|---|---|
| TimeSlave Application | TSAP | TimeSlave Application lifecycle (Initialize / Run) |
| gPTP Engine (RxThread / PdelayThread) | GTPS | GPTP SLAVE engine — low-level protocol processing |
| ShmPTPEngine (TimeDaemon side) | GPTP | TimeDaemon GPTP machine adapter (Initialize / ReadPTPSnapshot) |
Variability#
Configuration#
The GptpEngineOptions struct provides all configurable parameters for the gPTP engine:
| Parameter | Type | Description |
|---|---|---|
| | string | Network interface for gPTP frames (e.g., ) |
| | int | Interval between PDelayReq transmissions (ms); default: |
| | int | Delay before the first PDelayReq is sent (ms); default: |
| | int | Timeout for Sync message reception before declaring timeout state (ms); default: |
| | int64_t | Threshold above which a positive clock offset is flagged as a forward time jump (ns); default: |
| | PhcConfig | PHC hardware clock adjustment settings (see below) |
The PhcConfig struct (embedded in GptpEngineOptions) contains:
| Parameter | Type | Description |
|---|---|---|
| | bool | Enable or disable PHC adjustment; default: |
| | string | PHC device identifier: |
| | int64_t | Offset threshold above which a step correction is applied instead of frequency slew (ns); default: |
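The parameter names are defined in the TimeSlave sources; the following sketch therefore uses hypothetical member names and values purely to illustrate how such a configuration could be assembled, and must not be read as the actual `GptpEngineOptions` definition or its defaults.

```cpp
// Hypothetical member names and values, for illustration only.
GptpEngineOptions opts{};
opts.interface_name = "eth0";             // network interface used for gPTP frames
opts.pdelay_interval_ms = 1000;           // interval between PDelayReq transmissions
opts.pdelay_warmup_ms = 500;              // delay before the first PDelayReq is sent
opts.sync_timeout_ms = 3000;              // Sync reception timeout
opts.time_jump_threshold_ns = 1'000'000;  // forward time-jump threshold
opts.phc.enabled = false;                 // PHC adjustment disabled
opts.phc.device = "/dev/ptp0";            // PHC device identifier
opts.phc.step_threshold_ns = 1'000'000;   // step vs. frequency-slew threshold
```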
Scalability#
The TimeSlave architecture supports the following extensibility points:
Platform extensibility#
- New target platforms can be supported by implementing the `IRawSocket` and `INetworkIdentity` interfaces under a new `platform/<os>/` directory and selecting the implementation via Bazel `select()`
- The `PhcAdjuster` platform implementations (`clock_adjtime` on Linux, EMAC ioctls on QNX) can be extended for additional hardware without changing protocol logic
Protocol extensibility#
- The `GptpEngine` accepts injected `IRawSocket` and `INetworkIdentity` dependencies, making it straightforward to test or replace individual platform abstractions
- The shared memory IPC channel name is configurable (`GptpIpcPublisher::Init(name)` / `GptpIpcReceiver::Init(name)`), allowing multiple gPTP instances per ECU if needed
TimeDaemon integration extensibility#
- The `ShmPTPEngine` implements the same `PTPEngine` concept as other `PTPMachine` backends, making it transparently exchangeable with any other engine implementation
- Alternative IPC mechanisms (e.g., socket-based) can be introduced by implementing a new engine class without modifying the `PTPMachine` template or downstream components
ShmPTPEngine SW component#
The ShmPTPEngine component (in score::td::details) is the TimeDaemon-side adapter that reads GptpIpcData from the shared memory channel written by TimeSlave and converts it into the PtpTimeInfo structure expected by the TimeDaemon pipeline.
It is instantiated as GPTPShmMachine — a type alias for PTPMachine<details::ShmPTPEngine> — which connects ShmPTPEngine to the TimeDaemon’s internal MessageBroker.
Component requirements#
The ShmPTPEngine has the following requirements:
- The `ShmPTPEngine` shall call `GptpIpcReceiver::Init(ipc_name)` during `Initialize()` to open the shared memory channel
- The `ShmPTPEngine` shall call `GptpIpcReceiver::Receive()` in `ReadPTPSnapshot()` to fetch the latest `GptpIpcData`
- The `ShmPTPEngine` shall map all fields of `GptpIpcData` to the corresponding fields of `PtpTimeInfo` (status flags, Sync/FollowUp data, peer-delay data, time references)
- The `ShmPTPEngine` shall call `GptpIpcReceiver::Close()` during `Deinitialize()`
- The `ShmPTPEngine` shall be instantiatable with a configurable IPC channel name (default: `/gptp_ptp_info`)
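A minimal sketch of the read path implied by these requirements, written as a free function; the function name, the mapping helper, and the exact signature are assumptions, not the actual `ShmPTPEngine` code.

```cpp
#include <optional>

// Illustrative only: GptpIpcData, PtpTimeInfo, GptpIpcReceiver and the mapping
// helper are assumed to be provided by libTSClient / the TimeDaemon sources.
bool ReadSnapshotFromShm(GptpIpcReceiver& receiver, PtpTimeInfo& out) {
  const std::optional<GptpIpcData> data = receiver.Receive();  // seqlock read, bounded retries
  if (!data.has_value()) {
    return false;  // no consistent snapshot could be read
  }
  MapGptpIpcDataToPtpTimeInfo(*data, out);  // status flags, Sync/FollowUp, peer delay, time refs
  return true;
}
```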
Class view#
The Class Diagram is presented below:
Component initialization#
During initialization, the ShmPTPEngine shall open the shared memory channel to be able to read from it.
The initialization workflow is represented in the following sequence diagram:
Read PTP snapshot#
After ShmPTPEngine reads the latest GptpIpcData from shared memory, it maps it to PtpTimeInfo and publishes via the MessageBroker.
The periodic read and publish workflow is described below:
Data mapping#
ShmPTPEngine::ReadPTPSnapshot() performs a field-by-field mapping from GptpIpcData to PtpTimeInfo:
| GptpIpcData field | PtpTimeInfo field |
|---|---|
Factory#
CreateGPTPShmMachine(name, ipc_name) is a convenience factory function in score::td that creates a configured GPTPShmMachine (shared_ptr) backed by ShmPTPEngine:
```cpp
auto machine = CreateGPTPShmMachine("shm", "/gptp_ptp_info");
```
Using in test environment#
Using in ITF#
Normal behavior is expected. TimeSlave runs as a standalone process, communicates over real Ethernet, and writes to /gptp_ptp_info shared memory as in production.
Using in Component Tests on the host#
Overview#
The TimeSlave and its constituent components can be tested on an x86 Linux host without PTP hardware or a real network. The key platform-dependent abstractions all have test-injectable counterparts:
| Abstraction | Production implementation | Test replacement |
|---|---|---|
| `IRawSocket` | | `FakeSocket` |
| `INetworkIdentity` | | `FakeIdentity` |
| `HighPrecisionLocalSteadyClock` | Platform clock (Linux / QNX) | `FakeClock` |
The GptpEngine provides a dedicated test constructor that accepts injected implementations:
```cpp
GptpEngine engine(opts,
                  std::make_unique<FakeClock>(),
                  std::make_unique<FakeSocket>(),
                  std::make_unique<FakeIdentity>());
```
This allows complete white-box testing of the Sync/FollowUp correlation, peer-delay measurement, timeout detection, and time-jump flagging logic by pushing crafted PTP frames directly into the FakeSocket queue.
The GptpIpcPublisher and GptpIpcReceiver rely on POSIX shared memory (shm_open), which works on any Linux host, so ShmPTPEngine component tests can run end-to-end using real IPC without modification.