Concept for TimeSlave#
Use Cases#
TimeSlave is a standalone gPTP (IEEE 802.1AS) slave endpoint process that implements the low-level time synchronization protocol for the Eclipse SCORE time system. It is deployed as a separate process from the TimeDaemon to isolate real-time network I/O from the higher-level time validation and distribution logic.
More precisely, the TimeSlave covers the following use cases:
- Receiving gPTP Sync/FollowUp messages from a Time Master on the Ethernet network
- Measuring peer delay via the IEEE 802.1AS PDelayReq/PDelayResp exchange
- Optionally adjusting the PTP Hardware Clock (PHC) on the NIC
- Publishing the resulting `PtpTimeInfo` to shared memory for consumption by the TimeDaemon
The high-level architectural diagram is shown below.
Components decomposition#
The design consists of several software components:
Class view#
The main classes and components are presented in the following diagram:
Data and control flow#
The Data and Control flow are presented in the following diagram:
This view shows several worker scopes:
RxThread scope
PdelayThread scope
Main thread (periodic publish) scope
Each control flow runs on a dedicated thread and is independent of the others.
Control flows#
RxThread scope#
This control flow is responsible for:

- receiving raw gPTP Ethernet frames with hardware timestamps from the NIC via raw sockets
- decoding and parsing the PTP messages (Sync, FollowUp, PdelayResp, PdelayRespFollowUp)
- correlating Sync/FollowUp pairs and computing the clock offset and neighborRateRatio
- updating the shared `PtpTimeInfo` snapshot under mutex protection
PdelayThread scope#
This control flow is responsible for:

- periodically transmitting PDelayReq frames and capturing hardware transmit timestamps
- coordinating with the RxThread to receive PDelayResp and PDelayRespFollowUp messages
- computing the peer delay using the IEEE 802.1AS formula: `path_delay = ((t2 - t1) + (t4 - t3c)) / 2`
Main thread (periodic publish) scope#
This control flow is responsible for:

- periodically calling `GptpEngine::ReadPTPSnapshot()` to get the latest time measurement
- enriching the snapshot with the local clock timestamp from `HighPrecisionLocalSteadyClock`
- publishing to shared memory via `GptpIpcPublisher::Publish()`
Data types or events#
Components communicate with each other using the following data types:
PTPMessage#
PTPMessage is a union-based container for decoded gPTP messages including the hardware receive timestamp. It is produced by MessageParser and consumed by SyncStateMachine and PeerDelayMeasurer.
SyncResult#
SyncResult is produced by SyncStateMachine::OnFollowUp() and contains the computed master timestamp, clock offset, Sync/FollowUp data, and time jump flags (forward/backward).
PDelayResult#
PDelayResult is produced by PeerDelayMeasurer and contains the computed path delay in nanoseconds and a validity flag.
PtpTimeInfo#
PtpTimeInfo is the aggregated snapshot that combines PTP status flags, Sync/FollowUp data, peer delay data, and a local clock reference. This is the data published to shared memory for the TimeDaemon.
SW Components decomposition#
TimeSlave Application SW component#
The TimeSlave Application component is the main entry point for the TimeSlave process. It extends score::mw::lifecycle::Application and is responsible for orchestrating the overall lifecycle of the GptpEngine and the IPC publisher.
Component requirements#
The TimeSlave Application has the following requirements:
- The `TimeSlave Application` shall implement the `Initialize()` method to create the `GptpEngine` with configured options, initialize the `GptpIpcPublisher` (which creates the shared memory segment), and prepare the `HighPrecisionLocalSteadyClock` for local time reference
- The `TimeSlave Application` shall implement the `Run()` method to start the GptpEngine, enter a periodic publish loop, and monitor the `stop_token` for graceful shutdown
- The `TimeSlave Application` shall implement the `Deinitialize()` method to stop the GptpEngine threads and destroy the shared memory segment
- The `TimeSlave Application` shall periodically read the latest `PtpTimeInfo` snapshot, enrich it with the local clock timestamp, and publish it via `GptpIpcPublisher`
GptpEngine SW component#
The GptpEngine component is the core gPTP protocol engine. It manages two background threads (RxThread and PdelayThread) for network I/O and peer delay measurement, and exposes a thread-safe ReadPTPSnapshot() method for the main thread to read the latest time measurement.
Component requirements#
The GptpEngine has the following requirements:
- The `GptpEngine` shall manage an RxThread for receiving and parsing gPTP frames from raw Ethernet sockets
- The `GptpEngine` shall manage a PdelayThread for periodic peer delay measurement
- The `GptpEngine` shall provide a thread-safe `ReadPTPSnapshot()` method that returns the latest `PtpTimeInfo`
- The `GptpEngine` shall support configurable parameters via `GptpEngineOptions` (interface name, PDelay interval, sync timeout, time jump thresholds, PHC configuration)
- The `GptpEngine` shall support exchangeability of the raw socket implementation for different platforms (Linux, QNX)
Class view#
The Class Diagram is presented below:
Threading model#
The GptpEngine operates with two background threads. The threading model is represented below:
Concurrency aspects#
The GptpEngine uses the following synchronization mechanisms:
- A `std::mutex` protects the `latest_snapshot_` field, shared between the RxThread (writer) and the main thread (reader via `ReadPTPSnapshot()`)
- The `PeerDelayMeasurer` uses its own `std::mutex` to synchronize between the PdelayThread (`SendRequest()`) and the RxThread (`OnResponse()`, `OnResponseFollowUp()`)
- The `SyncStateMachine` uses `std::atomic<bool>` for the timeout flag, which is read from the main thread and written from the RxThread
FrameCodec SW component#
The FrameCodec component handles raw Ethernet frame encoding and decoding for gPTP communication.
Component requirements#
The FrameCodec has the following requirements:
- The `FrameCodec` shall parse incoming Ethernet frames, extracting source/destination MAC addresses, handling 802.1Q VLAN tags, and validating the EtherType (0x88F7)
- The `FrameCodec` shall construct outgoing Ethernet headers for PDelayReq frames using the standard PTP multicast destination MAC (01:80:C2:00:00:0E)
MessageParser SW component#
The MessageParser component parses the PTP wire format (IEEE 1588-v2) from raw payload bytes.
Component requirements#
The MessageParser has the following requirements:
- The `MessageParser` shall validate the PTP header (version, domain, message length)
- The `MessageParser` shall decode all relevant message types: Sync, FollowUp, PdelayReq, PdelayResp, PdelayRespFollowUp
- The `MessageParser` shall use packed wire structures (`__attribute__((packed))`) for direct memory mapping of PTP messages
SyncStateMachine SW component#
The SyncStateMachine component implements the two-step Sync/FollowUp correlation logic. It correlates incoming Sync and FollowUp messages by sequence ID, computes the clock offset and neighbor rate ratio, and detects time jumps.
Component requirements#
The SyncStateMachine has the following requirements:
- The `SyncStateMachine` shall store Sync messages and correlate them with subsequent FollowUp messages by sequence ID
- The `SyncStateMachine` shall compute the clock offset: `offset_ns = master_time - slave_receive_time - path_delay`
- The `SyncStateMachine` shall compute the `neighborRateRatio` from successive Sync intervals (master vs. slave clock progression)
- The `SyncStateMachine` shall detect forward and backward time jumps against configurable thresholds
- The `SyncStateMachine` shall provide thread-safe timeout detection via `std::atomic<bool>`, set when no Sync is received within the configured timeout
PeerDelayMeasurer SW component#
The PeerDelayMeasurer component implements the IEEE 802.1AS two-step peer delay measurement protocol. It manages the four timestamps (t1, t2, t3c, t4) across two threads.
Component requirements#
The PeerDelayMeasurer has the following requirements:
- The `PeerDelayMeasurer` shall transmit PDelayReq frames and capture the hardware transmit timestamp (`t1`)
- The `PeerDelayMeasurer` shall receive PDelayResp (providing `t2`, `t4`) and PDelayRespFollowUp (providing `t3c`) messages
- The `PeerDelayMeasurer` shall compute the peer delay using the IEEE 802.1AS formula: `path_delay = ((t2 - t1) + (t4 - t3c)) / 2`
- The `PeerDelayMeasurer` shall provide thread-safe access to the `PDelayResult` via a mutex, as `SendRequest()` runs on the PdelayThread while response handlers are called from the RxThread
PhcAdjuster SW component#
The PhcAdjuster component synchronizes the PTP Hardware Clock (PHC) on the NIC. It applies step corrections for large offsets and frequency slew for smooth convergence of small offsets.
Component requirements#
The PhcAdjuster has the following requirements:
- The `PhcAdjuster` shall apply an immediate time step correction for offsets exceeding `step_threshold_ns`
- The `PhcAdjuster` shall apply frequency slew (in ppb) for offsets below the step threshold
- The `PhcAdjuster` shall support platform-specific implementations: `clock_adjtime()` on Linux, EMAC PTP ioctls on QNX
- The `PhcAdjuster` shall be configurable via `PhcConfig` (device path, step threshold, enable/disable flag)
libTSClient SW component#
The libTSClient component is the shared memory IPC library that connects the TimeSlave process to the TimeDaemon process. It provides a lock-free, single-writer/multi-reader communication channel using a seqlock protocol over POSIX shared memory.
The component provides two sub-components, a publisher and a receiver, deployed on the TimeSlave and TimeDaemon sides respectively.
Component requirements#
The libTSClient has the following requirements:
- The `libTSClient` shall define a shared memory layout (`GptpIpcRegion`) with a magic number for validation, an atomic seqlock counter, and a `PtpTimeInfo` data payload
- The `libTSClient` shall align the shared memory region to 64 bytes (cache line size) to prevent false sharing
- The `libTSClient` shall provide a `GptpIpcPublisher` component that creates and manages the POSIX shared memory segment and writes `PtpTimeInfo` using the seqlock protocol
- The `libTSClient` shall provide a `GptpIpcReceiver` component that opens the shared memory segment read-only and reads `PtpTimeInfo` with up to 20 seqlock retries
- The `libTSClient` shall use the POSIX shared memory name `/gptp_ptp_info` by default
Class view#
The Class Diagram is presented below:
Publish new data#
When TimeSlave Application has a new PtpTimeInfo snapshot, it publishes to the shared memory via the seqlock protocol:
1. Increment `seq` (becomes odd — signals write in progress)
2. `memcpy` the data
3. Increment `seq` (becomes even — signals write complete)
Receive data#
From TimeDaemon side, the receiver reads from the shared memory using the seqlock protocol with bounded retry:
1. Read `seq` (must be even, otherwise retry)
2. `memcpy` the data
3. Read `seq` again (must match step 1, otherwise retry — torn read detected)
4. Return `std::optional<PtpTimeInfo>` (empty if all 20 retries exhausted)
The seqlock protocol workflow is presented in the following sequence diagram:
Platform support#
TimeSlave supports two target platforms, with platform-specific implementations selected at compile time via Bazel `select()`:
| Component | Linux | QNX |
|---|---|---|
| Raw Socket | | QNX raw-socket shim |
| Network Identity | | QNX-specific MAC resolution |
| PHC Adjuster | `clock_adjtime()` | EMAC PTP ioctls |
| HighPrecisionLocalSteadyClock | | QTIME clock API |
The IRawSocket and INetworkIdentity interfaces provide the abstraction boundary. Platform-specific source files are organized under score/TimeSlave/code/gptp/platform/linux/ and score/TimeSlave/code/gptp/platform/qnx/.
Instrumentation#
ProbeManager#
The ProbeManager is a singleton that records probe events at key processing points in the gPTP engine. Probe points include:
- `RxPacketReceived` — Raw frame received from socket
- `SyncFrameParsed` — Sync message successfully parsed
- `FollowUpProcessed` — Offset computed from Sync/FollowUp pair
- `OffsetComputed` — Final offset value available
- `PdelayReqSent` — PDelayReq frame transmitted
- `PdelayCompleted` — Peer delay measurement completed
- `PhcAdjusted` — PHC adjustment applied
The `GPTP_PROBE()` macro compiles to nothing when probing is disabled, so instrumented call sites incur zero runtime overhead.
Recorder#
The Recorder is a thread-safe CSV file writer that persists probe events and other diagnostic data. Each `RecordEntry` contains a timestamp, event type, offset, peer delay, sequence ID, and status flags.
Variability#
Configuration#
The GptpEngineOptions struct provides all configurable parameters for the gPTP engine:
| Parameter | Type | Description |
|---|---|---|
| | string | Network interface for gPTP frames |
| | uint32_t | Interval between PDelayReq transmissions |
| | uint32_t | Timeout for Sync message reception before declaring timeout state |
| | int64_t | Threshold for forward time jump detection |
| | int64_t | Threshold for backward time jump detection |
| | PhcConfig | PHC device path, step threshold, and enable flag |
The PhcConfig struct additionally contains:
| Parameter | Type | Description |
|---|---|---|
| | bool | Enable or disable PHC adjustment |
| | string | Path to the PHC device |
| | int64_t | Offset threshold above which a step correction is applied instead of frequency slew |