Power system networks must deliver millisecond-level determinism for protection schemes while supporting modern data exchange for grid optimisation – a dual requirement that challenges network design in electrically hostile substation environments where reliability directly affects grid stability and public safety.
Protection and control systems operate on timelines where milliseconds determine equipment survival and grid stability, while modern grid applications require extensive data exchange – network design must satisfy both without compromise.
Distance protection schemes typically operate within 20–100 milliseconds from fault detection to breaker operation. During this interval, multiple intelligent electronic devices (IEDs) must exchange sampled values and generic object-oriented substation event (GOOSE) messages with deterministic timing. Network delays or jitter directly affect protection coordination, potentially causing incorrect zone operation or failure to clear faults. Simultaneously, the same network carries less time-critical traffic for monitoring, maintenance, and grid optimisation. Traditional approaches used separate networks for protection and other functions, but modern substations converge these onto shared infrastructure to reduce cost and complexity – requiring careful design to maintain protection performance.
Effective power system networking starts with understanding timing requirements: protection traffic needs deterministic delivery with maximum latency bounds, while other applications tolerate more variability. Network architecture then implements appropriate quality of service (QoS), traffic shaping, and redundancy to meet these diverse needs. The physical layer must survive substation environments – extreme electromagnetic interference (EMI) from switching operations, wide temperature ranges, and potential ground potential rise during faults. Equipment selection prioritises reliability over features, with mean time between failures (MTBF) measured in decades rather than years.
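The QoS behaviour described above can be sketched as a strict-priority egress queue: protection frames always leave the port before monitoring traffic. The traffic-class numbers and frame labels below are illustrative assumptions, not taken from any particular switch.

```python
import heapq

class PriorityEgress:
    """Strict-priority egress queue sketch for a substation switch port.

    Lower traffic_class numbers are dequeued first (0 = highest priority,
    e.g. GOOSE/SV); the counter preserves FIFO order within a class.
    """
    def __init__(self):
        self._q = []
        self._n = 0  # arrival counter: FIFO tiebreaker within a class

    def enqueue(self, frame, traffic_class):
        heapq.heappush(self._q, (traffic_class, self._n, frame))
        self._n += 1

    def dequeue(self):
        return heapq.heappop(self._q)[2]

port = PriorityEgress()
port.enqueue("scada-poll", 3)   # monitoring traffic queued first...
port.enqueue("goose-trip", 0)   # ...but the protection frame still
print(port.dequeue())           # leaves the port ahead of it
```

Real switches implement this in hardware with a small fixed number of queues mapped from VLAN priority bits; the principle is the same.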
Modern substation networks integrate protection, control, and monitoring systems while surviving harsh electrical environments and maintaining deterministic performance for critical functions.
Substation networks transition from serial connections and hardwired signals to Ethernet-based architectures that must maintain or improve upon traditional reliability while enabling new capabilities.
Traditional substations used point-to-point serial connections (RS-485, RS-232) for protection and control, with dedicated copper wiring for critical signals. Modern approaches use switched Ethernet networks following IEC 61850 standards, which define communication requirements for substation automation. The architecture typically includes process bus networks connecting merging units and sensors, station bus networks connecting protection and control IEDs, and gateway connections to wider utility networks.
Network design considers physical layout: bay-level switches near protection panels reduce cable lengths for time-critical traffic, while station-level switches aggregate traffic for wider communication. Redundancy implementations vary based on criticality – protection networks often use parallel redundancy protocol (PRP) or high-availability seamless redundancy (HSR) for zero-time failover, while less critical networks may use rapid spanning tree protocol (RSTP). Environmental considerations include temperature range (typically -40°C to +85°C for outdoor installations), immunity to EMI from circuit breaker operations, and protection against ground potential differences during faults. Industrial-grade switches from partners like Westermo provide the reliability and hardening needed for these environments.
IEC 61850 standardises substation communication but introduces network dependencies that affect protection performance – requiring careful implementation to realise benefits without compromising reliability.
IEC 61850 defines several communication services: manufacturing message specification (MMS) for client-server communication, GOOSE for fast peer-to-peer messaging, and sampled values (SV) for streaming measurement data. Each has different network requirements: GOOSE messages need low latency and deterministic delivery for protection tripping, SV streams require consistent bandwidth for measurement accuracy, while MMS tolerates more variability for configuration and monitoring.
Network implementation for IEC 61850 considers traffic patterns and timing. GOOSE messages use publisher-subscriber models with repetition rates that increase during events – the network must handle these bursts without affecting other traffic. SV streams generate continuous high-bandwidth traffic – a single merging unit at 80 samples per cycle generates approximately 5–7 Mbps. VLAN segmentation separates traffic types, while QoS prioritises protection messages. Clock synchronisation via precision time protocol (PTP – IEEE 1588) ensures accurate time-stamping across devices. Testing and commissioning validate network performance under worst-case conditions – not just average operation. Interoperability testing between different vendors' IEDs identifies implementation differences that could affect network performance.
Protection systems demand network performance that matches or exceeds traditional hardwired approaches, with redundancy that maintains continuous operation during single failures and deterministic timing that ensures coordinated fault response.
Modern protection systems distribute intelligence: line differential protection compares current measurements from multiple ends via communication channels, distance protection uses sampled values from merging units, and busbar protection coordinates multiple IEDs. Network performance directly affects protection security and dependability – delays can cause incorrect zone operation, while packet loss can delay fault clearance.
Protection network design starts with maximum allowable latency calculations based on protection algorithms. For example, line differential protection typically tolerates 5–15 milliseconds round-trip delay, including processing time in devices. The network must guarantee this under all conditions, including during other traffic bursts. Redundancy approaches include PRP which sends duplicate packets on separate networks for zero-time failover, or HSR which uses ring topology with similar characteristics. Network equipment in the protection path must have deterministic forwarding behaviour – store-and-forward switches with variable latency are unsuitable; cut-through switches with fixed latency may be required. Testing under fault conditions validates performance when protection systems generate maximum traffic.
Phasor measurement units require precise time synchronisation and consistent network performance to provide accurate grid visibility for stability monitoring and control.
Phasor measurement units (PMUs) provide time-synchronised measurements of voltage and current phasors across the grid, requiring network infrastructure that delivers precise timing and consistent data flow for wide-area monitoring and control.
PMUs measure voltage and current waveforms, time-stamping each measurement using global positioning system (GPS) or network-based synchronisation. These measurements enable wide-area visibility of grid dynamics, supporting applications like oscillation detection, islanding detection, and voltage stability monitoring. Reporting rates typically range from 10 to 60 frames per second, with higher rates during transient conditions. Each PMU generates 100–500 kbps of data, which aggregates to significant bandwidth when collected from hundreds of units across a transmission system.
PMU data networks prioritise time synchronisation accuracy – typically 1 microsecond or better for transmission applications. This requires careful network design to minimise timing jitter and asymmetry in communication paths. PTP with hardware timestamping provides the necessary accuracy. Data collection architecture uses hierarchical aggregation: PMUs connect to local phasor data concentrators (PDCs), which forward to regional and then central PDCs. The network must handle both continuous streaming and burst traffic during grid events. Data latency requirements vary by application: real-time control needs 100 milliseconds or less, while off-line analysis tolerates seconds. Security measures protect PMU data integrity – corrupted measurements could lead to incorrect grid control actions.
Distributed energy resources (DERs) – solar photovoltaic (PV), wind, battery storage, electric vehicles – connect at distribution level, requiring communication networks that scale to thousands of endpoints while maintaining utility-grade reliability.
Traditional power systems had centralised generation with one-way power flow. Modern grids integrate DERs at distribution level, creating bidirectional power flows and requiring communication for coordination. Each DER may need connectivity for monitoring, control, and grid support functions. Scale becomes a challenge – a medium-sized utility might manage 100,000+ DERs, each requiring occasional communication.
Network architectures for DER integration use hierarchical approaches: field area networks (FANs) connect DERs to aggregation points, which then connect to utility systems. Technologies include cellular (4G/LTE, 5G), radio frequency mesh, power line communication (PLC), and fibre where available. Each technology has trade-offs: cellular offers ubiquity but variable performance, RF mesh provides coverage but limited bandwidth, PLC uses existing infrastructure but faces noise challenges. Network design considers data requirements: basic monitoring needs infrequent communication, while grid support functions like volt-var control need more regular updates. Cybersecurity is critical – compromised DERs could be manipulated to affect grid stability. Standards like IEEE 2030.5 (Smart Energy Profile 2.0) and IEC 61850-7-420 define communication protocols for DER integration.
Power system operational technology (OT) cybersecurity must protect critical infrastructure while maintaining grid reliability – balancing security measures with operational requirements for continuous electricity supply.
Electric power OT includes protection systems, SCADA, energy management systems (EMS), and distribution management systems (DMS). Security incidents could cause widespread outages, equipment damage, or even safety hazards. However, security measures cannot disrupt protection timing or control functions. The approach follows core guidance for utilities: segmentation between OT and IT networks, secure remote access for maintenance, intrusion detection tailored to power protocols, and resilience through redundancy.
Implementation starts with network segmentation using the Purdue Model adapted for utilities: Levels 0–2 for field devices, Level 3 for control systems, Level 3.5 for demilitarised zone (DMZ), and Levels 4–5 for enterprise. Firewalls at zone boundaries understand power protocols – not just port blocking. Secure remote access solutions provide controlled connectivity for vendors and maintenance without exposing systems to the internet. Intrusion detection systems recognise power-specific anomalies – unexpected GOOSE messages, unusual SCADA commands, or timing deviations in protection communications. Resilience design ensures security measures don't create single points of failure – redundant security gateways maintain connectivity even during maintenance or failures.
Throughput Technologies advises on electric power system networking that balances deterministic performance for protection with modern data exchange for grid optimisation, implemented in environments where electrical noise and reliability requirements challenge conventional network design.
Talk with a Solutions Specialist to design your power system network infrastructure.
Acceptable latency varies by protection type. Line differential protection typically allows 5–15 milliseconds round-trip delay including device processing. Distance protection using IEC 61850 sampled values needs consistent latency more than absolute speed – variation below 100 microseconds is often more important than average latency. Busbar protection with GOOSE messaging requires 3–10 milliseconds for peer-to-peer communication. The critical factor is often maximum latency under worst-case conditions rather than average performance. Network design must guarantee these maximums during fault conditions when protection traffic increases significantly. Testing should verify performance during simulated faults, not just normal operation.
Use gateway devices that translate between protocols. IEC 61850 to Modbus gateways allow legacy devices to participate in modern networks. For protection systems, consider hybrid approaches: new bays with IEC 61850, existing bays with traditional wiring, connected via gateway protection relays. Network segmentation separates legacy traffic – put legacy devices on separate VLANs with controlled access to IEC 61850 networks. Gradual migration allows replacement during maintenance cycles rather than complete overhaul. Interoperability testing is critical – different vendors' IEC 61850 implementations vary, and gateway devices may introduce timing differences that affect protection coordination. Document the migration plan with clear timelines for complete transition.
For mission-critical protection, parallel redundancy protocol (PRP) provides zero-time failover without packet loss. PRP uses duplicate networks with simultaneous transmission – if one path fails, the other delivers packets without interruption. High-availability seamless redundancy (HSR) provides similar benefits in ring topologies. For less critical applications, rapid spanning tree protocol (RSTP) offers sub-second recovery. The choice depends on protection criticality and cost: PRP requires duplicate infrastructure but guarantees continuous operation; RSTP is more economical but has brief interruption during failover. Consider also device-level redundancy – dual-network interfaces in protection IEDs, redundant switches, and diverse physical paths. Test failover regularly – untested redundancy often fails when needed most.
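The zero-time failover of PRP comes from duplicate-discard logic at the receiver: every frame is sent on both LANs with the same sequence number, and the receiving node delivers the first copy and drops the second. A simplified sketch (real implementations use a bounded drop window per source rather than an unbounded set):

```python
class PrpDiscard:
    """Duplicate-discard at a PRP receiving node (simplified).

    Keyed on (source, sequence number) from the redundancy control
    trailer; first arrival wins regardless of which LAN carried it.
    """
    def __init__(self):
        self.seen = set()

    def accept(self, source, seq):
        key = (source, seq)
        if key in self.seen:
            return False  # second copy from the other LAN: discard
        self.seen.add(key)
        return True       # first arrival: deliver to the application

rx = PrpDiscard()
print(rx.accept("IED-A", 1))  # copy via LAN A delivered
print(rx.accept("IED-A", 1))  # copy via LAN B silently dropped
```

If one LAN fails entirely, the surviving copies are simply the first (and only) arrivals – no reconvergence, hence no interruption.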
Individual phasor measurement units (PMUs) generate 100–500 kbps depending on configuration: 50/60 Hz reporting rate, number of measurement channels (typically 6–12), data quality flags, and compression. A transmission utility with 500 PMUs therefore needs 50–250 Mbps for raw data collection. Compression reduces this by 30–50%. More importantly, during grid disturbances, reporting rates may increase, creating temporary bandwidth spikes. Network design should accommodate peak rates with headroom for growth. Aggregation architecture helps: local phasor data concentrators (PDCs) reduce upstream traffic by filtering and consolidating data. Consider also management traffic, time synchronisation (PTP), and redundancy overhead when dimensioning links.
Implement security appropriate to each communication layer. At the device level: unique credentials per DER, secure boot, tamper detection. For network communications: transport layer security (TLS) with mutual authentication, certificate-based rather than password-based authentication. At the system level: segmentation separating DER networks from core utility systems, intrusion detection monitoring for anomalous DER behaviour. Key management is critical at scale – automated certificate provisioning and renewal systems. Consider also physical security for field devices – tamper-resistant enclosures, secure mounting. Standards compliance helps: IEEE 2030.5 and IEC 62351 define security requirements for DER communications. Regular security assessments should include DER infrastructure as it expands.
Networking for pipeline SCADA, leak detection, compressor station control, and long-distance communications along pipeline corridors with appropriate redundancy and security.
Network design for water treatment plants, distribution monitoring, pump station control, and wastewater management with reliability requirements for continuous public health protection.
Networking for nuclear facilities and other safety-critical utilities where deterministic performance, diversity, and regulatory compliance govern network architecture and implementation.