What happened to our electricity system on Friday August 9th 2019?

14 Aug 2019
Balancing the system

An AC power system must be operated within a certain band of frequency, in our case near to 50 Hz. This requires that, moment by moment, the generation of power and demand for it are matched. If they aren’t, the system frequency falls (when there is not enough generation) or rises (when there is too much). It is intended that market participants schedule their own generation to come on and switch off during the day to match the changing pattern of demand. However, National Grid’s Electricity System Operator (ESO) function brings on extra generation or reduces it to get the broad balance right. The ESO also buys ‘dynamic’ services: automatic controls on generators where system frequency is monitored and power outputs are adjusted to fine-tune the balance. This is known as ‘frequency response’. There are some energy storage facilities that also provide frequency response and, increasingly, large users of electricity can adjust their demand to contribute to the overall balance.
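
To make the idea of ‘frequency response’ a little more concrete, here is a minimal sketch of a droop-style control, in which a unit’s output is adjusted in proportion to the frequency deviation from 50 Hz. The droop and rating figures are purely illustrative assumptions, not a description of the ESO’s actual arrangements.

```python
# A minimal sketch of droop-style frequency response (illustrative
# figures only; not the ESO's actual control arrangements). The unit
# changes its output in proportion to the frequency deviation.

NOMINAL_HZ = 50.0

def droop_response_mw(freq_hz: float, droop: float = 0.04,
                      rated_mw: float = 500.0) -> float:
    """Change in output (MW) requested from a responsive unit.

    A droop of 0.04 means a 4% frequency deviation would call for the
    unit's full rated output; both figures are assumptions.
    """
    deviation = (NOMINAL_HZ - freq_hz) / NOMINAL_HZ
    return (deviation / droop) * rated_mw

# A fall to 49.8 Hz asks this hypothetical 500 MW unit for an extra
# 50 MW; a rise above 50 Hz gives a negative (reduce output) answer.
print(f"{droop_response_mw(49.8):.0f} MW")
```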

Friday 9th August

The basic general rule used by many system operators around the world is not only to make sure that everything on the system is within acceptable limits but also that this will remain true even after any single unplanned event, such as a short circuit fault on a branch of the network. They therefore carry enough frequency response to cover for the sudden loss of the single largest generator or interconnector import. Unfortunately, on Friday afternoon, shortly after 16:52, two sources of power were lost within less than a minute of each other: 790 MW from Hornsea 1 offshore wind farm and 660 MW from Little Barford gas-fired power station, the latter due to what its owner, RWE, said was a ‘technical fault’. The combined total loss of 1430 MW was significantly greater than what appears to have been the largest single infeed loss risk at the time.
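
As a back-of-the-envelope illustration of that sizing rule (the largest-infeed figure below is a hypothetical one, not the actual value held on the day):

```python
# Illustrative arithmetic only: the N-1 rule sizes frequency response to
# cover the largest single credible infeed loss, not two losses at once.

largest_single_infeed_mw = 1000   # hypothetical figure, not the actual value
combined_loss_mw = 1430           # Hornsea 1 plus Little Barford, as reported

uncovered_mw = combined_loss_mw - largest_single_infeed_mw
print(f"The combined loss exceeds the N-1 cover by about {uncovered_mw} MW")
```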

A frequency trace to which we have access at Strathclyde shows that the fall in system frequency was arrested by the combination of responses on the system, but not before it had dropped below 49.2 Hz (Figure 1). However, the trace also shows a second drop in frequency about a minute after the first one. With much of the scheduled frequency response capacity having been exhausted and not yet replaced, system frequency subsequently fell below 48.8 Hz, at which point the first stage of ‘Low Frequency Demand Disconnection’ (LFDD) operated.

The triggering of a defence mechanism

LFDD (known in other countries as ‘under frequency load shedding’) is an automatic ‘defence measure’ installed on the distribution networks and designed to save the system from a complete collapse. It does so by restoring the balance between generation and demand, opening circuit breakers on portions of the distribution network to disconnect demand. It works in nine successive tranches, each triggered if system frequency continues to fall.
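
The principle can be sketched as a simple threshold ladder. Only the first threshold, 48.8 Hz, comes from the description above; the later stages and demand shares are purely illustrative and the actual OC6 settings differ.

```python
# A sketch of the LFDD principle, not the actual OC6 settings: relays on
# the distribution networks disconnect successive blocks of demand as
# frequency falls through a ladder of thresholds.

LFDD_TRANCHES = [
    (48.8, 0.05),   # first stage: ~5% of demand, as described above
    (48.7, 0.05),   # illustrative
    (48.6, 0.10),   # illustrative
    # ... further tranches down towards the practical lower limit
]

def demand_shed_fraction(freq_hz: float) -> float:
    """Cumulative fraction of demand disconnected once frequency has
    fallen to freq_hz (relay time delays ignored for simplicity)."""
    return sum(share for threshold, share in LFDD_TRANCHES
               if freq_hz <= threshold)

print(demand_shed_fraction(48.75))   # 0.05 -> only the first tranche operates
```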

The first tranche of LFDD, the only one that was triggered on Friday, is intended to disconnect 5% of demand under Operating Code No. 6 (OC6). However, on Friday, the disconnected demand seemed to include supplies to Network Rail signalling facilities. This, in turn, caused interruptions to train services. Even though system frequency was restored to around 50 Hz within 10 minutes of the initial generation losses (partly as a result of the demand disconnection) and National Grid said that “By 6.30pm, all demand was restored by the distribution network operators”, restoration of rail services seemingly took much longer.

Figure 1: GB system frequency before and after the disturbance on August 9th 2019

Some questions

Given that system frequency was falling, it could be said that LFDD succeeded in saving the system from a complete collapse, albeit at the expense of some disconnected demand. However, it seems to me that there are some particular questions that might now be asked:

  1. What caused the losses of power from Hornsea and Little Barford? Were they independent random events or was there, somehow, a connection between them?
  2. Why did system frequency fall as much as it did for the initial loss?
  3. Would it have been possible for the Distribution Network Operators (DNOs) to have implemented LFDD in such a way as to have avoided disconnecting Network Rail supplies?
  4. What level of resilience against losses of power supplies do Network Rail facilities have?

One of the most common causes of electricity network faults is lightning and we know that there was significant lightning activity in the East of England on Friday evening. One wonders if that might have had any influence on what happened on the electricity system.

One further thing that the electricity regulator, Ofgem, is likely to want to explore is the quality of information provided by the different parties, in particular what the ESO told the DNOs, what the DNOs understood, and what they told their customers, notably Network Rail. There is also the question of what Network Rail told the train operators and what the train operators told passengers. It seems to me that a key part of what different parties could and should have been told was when they might expect electricity supplies to be restored.

Some politicians and trades unionists have suggested that the incident is a sign of lack of investment in the electricity system. However, there’s no indication that I can see that this event is a result of any lack of major infrastructure such as transmission lines or generation capacity.

Also, there is no indication that the event had anything to do with the characteristics of wind as a source of electrical energy. The reduction in power from Hornsea was much faster than would be expected due to any changes in wind speed. Hornsea’s owner, Ørsted, said on Saturday that “automatic systems” had “significantly reduced” power. Another report said that Ørsted had confirmed that problems had occurred and that they are “investigating the cause, working closely with National Grid System Operator”. This suggests something particular to Hornsea 1 rather than something connected to wind in general. It may or may not turn out to be significant that, although Hornsea 1 is exporting power onto the system, part of it is still under construction.

Inertia

Some reports have suggested that, at the time of the incident, the system’s inertia was too low or the ESO had not procured sufficient ‘flexible capacity’ such as frequency response.

The inertia of a power system refers to the kinetic energy stored in the rotating masses of generating plant. Through the electromagnetic interactions within the type of generator used in large thermal power stations, that energy is drawn upon automatically when there is a mismatch between total generation and demand. It helps to slow down a fall in system frequency and has become a topic of debate because the kind of equipment used in wind farms, HVDC interconnectors and arrays of solar panels doesn’t naturally provide it.
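
A rough calculation shows why this matters. The initial rate of change of frequency after a loss of infeed is set by the size of the power imbalance and the kinetic energy stored on the system; the kinetic energy figure below is an assumption for illustration, not the actual value on August 9th.

```python
# Illustrative calculation with assumed figures: the initial rate of
# change of frequency (RoCoF) after a loss of infeed is
#
#     df/dt = delta_P * f0 / (2 * E_kinetic)
#
# where E_kinetic is the kinetic energy stored in rotating plant.

F0_HZ = 50.0

def initial_rocof_hz_per_s(power_imbalance_mw: float,
                           kinetic_energy_mws: float) -> float:
    """Initial rate at which frequency falls, in Hz per second."""
    return power_imbalance_mw * F0_HZ / (2.0 * kinetic_energy_mws)

# Suppose the system held 200 GWs (200,000 MWs) of kinetic energy -- an
# assumed figure -- and lost 1430 MW of infeed:
print(f"~{initial_rocof_hz_per_s(1430, 200_000):.2f} Hz/s")  # ~0.18 Hz/s
# Halve the stored energy and the initial rate of fall doubles.
```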

The ESO is obliged to operate Britain’s electricity transmission system in compliance with the Security and Quality of Supply Standard (SQSS). This sets out the basic rule that everything should still be ok even after one significant fault event. It includes the requirement that system frequency should stay between 49.5 and 50.5 Hz although, if there is a particularly large loss of infeed, it can go outside that range, but for no more than a minute. Contrary to some reports, 49.5 Hz is not “dangerously low” and excursions below it are extremely rare. The practical lower limit for system frequency as defined in the Grid Code is 47.5 Hz. As noted above, LFDD starts to operate at 48.8 Hz.
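
As a sketch of how a recorded frequency trace might be tested against that rule as I have summarised it here (this is not the SQSS wording, and the trace below is a toy one):

```python
# A rough check of the rule as summarised above, not the SQSS wording:
# frequency should stay between 49.5 and 50.5 Hz, and any excursion
# below 49.5 Hz after a large infeed loss should last no more than 60 s.

def longest_spell_below(trace_hz, limit_hz=49.5, dt_s=1.0):
    """Longest continuous spell (seconds) below limit_hz in a trace
    sampled every dt_s seconds."""
    longest = current = 0.0
    for f in trace_hz:
        current = current + dt_s if f < limit_hz else 0.0
        longest = max(longest, current)
    return longest

# Toy trace sampled once a second: 90 s below 49.5 Hz would breach the
# one-minute allowance described above.
toy_trace = [50.0] * 10 + [49.3] * 90 + [49.8] * 20
print(longest_spell_below(toy_trace))   # 90.0
```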

In its assessment of compliance with the supply standards, the system operator should take account of any impact of a transmission system disturbance on generation connected to the distribution network. It needs to work together with the distribution network operators in order to do that.

If inertia or the volume of response is so low that the single largest infeed loss would lead to a breach of the defined frequency limits, the ESO is obliged either to procure more response or to re-dispatch generation via the Balancing Mechanism. The latter action might either reduce the size of the largest loss or ensure that the system has more inertia. If Ofgem launches an enquiry into the incident, it may well want to know whether the state of the system at the time of the incident was compliant with the SQSS.

One of the points of debate within the electricity sector as we see growing amounts of renewables on the system and increasing imports from the rest of Europe is whether the market arrangements currently in place for the procurement of frequency response are quite right for the future system. It is being asked whether, with some different product definitions, sufficient response might be bought more cheaply than would otherwise be the case.

On August 12th, Ofgem asked the ESO for an urgent interim report into the August 9th incident by August 16th and a final, detailed technical report by September 9th.

The bigger picture

The incident on August 9th arguably highlights a range of wider issues. It might be argued, for example, that the ESO should carry enough reserves of frequency response to deal with two large losses of generation rather than just one. That is, it should cater for what’s called an “N-2” event rather than just, as is common around the world, “N-1”.

However, almost coincident losses of generation are very rare, with only two examples – Friday’s and one on May 27th 2008 – that I can remember in Britain in the last 25 years, and frequency response and reserve are already quite expensive: together, ‘response’ and ‘reserve’ cost more than £270 million in 2018/19.

When considering whether any procedures should change, the additional costs of securing against “N-2” losses of infeed could be compared with those of measures that would significantly reduce the impact of demand disconnections on the rare occasions that they occur.

On the global scale of electricity system disturbances, e.g. in Jakarta and Western Java on August 4th and in Argentina in June when practically the whole country was blacked out, last week’s event in Britain was relatively small. However, it was still massively disruptive for lots of people – largely, it seems, due to the impact on the railways. For example, there have been reports of certain trains’ power supplies failing and operators struggling to restart them. In the meantime, those trains would have been in the way of others. Finally, as a result of all the disruption, many trains will have been in the ‘wrong places’ relative to the normal timetable.

A worst case outcome of any disturbance for an electricity system operator is that the whole system goes down. Recovery is then hugely challenging, not just for users of electricity such as the railway companies but also for the power system operators. Friday’s event is perhaps a reminder that, even though we have never suffered a whole system blackout in Britain and many of our system design and operation procedures have stood the test of time, restoration plans need to be regularly reviewed.

The nature of the system continues to change with, quite rightly in view of our emissions reduction commitments, more low carbon sources of power being used.  This means that the normal operating procedures and the codes and standards that govern the system also need to be kept under review, especially in light of our increasing dependency on electricity.

That increasing dependency on electricity raises perhaps the biggest societal questions. It is impossible to guarantee perfectly reliable electricity supplies. By international standards, supplies in Britain are, on average, very reliable. How much are we prepared to pay to make them more reliable? And, because they will never be perfect, do we – individuals, institutions and service providers – know how to cope when we experience an outage?