The Electric Industry Can Do Better

Power Reliability

Back in 1965, Gordon Moore, who went on to co-found Intel, made a bold prediction: Every 12 months, the number of transistors on a chip would double and the transistor price would drop by approximately 50%. Called Moore’s Law, the observation was later revised. Though the transistor-increase/cost-reduction phenomenon now happens every 18 to 24 months, the prediction has conceptually stood the test of time for five decades.
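The compounding behind that claim is easy to quantify. A short sketch (purely illustrative, not tied to any particular chip family) shows what doubling every two years adds up to over five decades:

```python
# Sketch: the compounding implied by Moore's Law (illustrative only).
def transistor_growth(years: float, doubling_period_years: float) -> float:
    """Growth factor after `years` if the count doubles every `doubling_period_years`."""
    return 2 ** (years / doubling_period_years)

# Doubling every 2 years for 50 years is 25 doublings:
# roughly a 33.5-million-fold increase in transistor count.
print(f"{transistor_growth(50, 2):,.0f}x")
```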

The point is, the semiconductor industry has done something extraordinary for its customers: It has pushed the boundaries of what is possible, year after year. It did not settle for merely doing OK. It has served its customers extremely well, with constant innovation. Hats off to the semiconductor industry for a job well done.

A direct comparison between the semiconductor industry and the electric utility industry is difficult because the two operate so differently. Still, the spirit of both is to serve their customers properly. While the semiconductor industry is product-driven, the electric utility industry lives and thrives in the world of asset ownership. Because of that, electric utilities build compelling business cases for capital investments. Billions of dollars are invested in the electric grid, part of which goes into reliability-improvement programs. However, data from the Lawrence Berkeley National Laboratory show that, over a recent 12-year period, the reliability of the U.S. electrical power system (measured by both SAIDI and SAIFI) remained flat, even as utilities poured billions of dollars annually into the U.S. grid.

As an industry, we should be asking ourselves some important questions:

• Even with compelling business cases and billions of dollars invested, why did reliability not improve during that 12-year period?

• What is the actual purpose of the grid investments that have been made?

• Are utilities properly serving their customers, when reliability remained flat for such a long period of time?

• Is the business case weighted and/or biased more toward the financial side than the reliability side?

• Is the current regulatory environment (which varies by region/state/country) set up to incentivize reliability, or not?

All are worthy questions to consider. More important, though, is this one: How can we increase the emphasis on reliability while still maintaining an acceptable balance between financial performance and reliability improvement? That is a difficult balance to strike.

I believe the answer lies in change because we can’t expect a different outcome if the rules and current business cases remain the same.

We are starting to see a wave of performance-based regulation (PBR) in several jurisdictions. As the name implies, PBR is designed to motivate utilities to focus on performance (i.e., reliability and other metrics) in order to maximize their return on investment. A key to making PBR work is an asymmetrical risk/reward mechanism rather than a linear one: A linear path is always the path of least resistance, and it is also the slowest path to change.

As a simple example, if a utility improves reliability by 10%, it captures 10% of the benefits. However, if the utility improves reliability by 15%, an additional PBR incentive of 5% would allow it to capture 20% of the benefits.
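That sharing rule can be sketched in a few lines. The threshold and bonus values below are hypothetical illustrations of the example above, not a real tariff:

```python
# Sketch of the asymmetric (non-linear) PBR benefit-sharing rule described above.
# The 10% threshold and 1-for-1 bonus are hypothetical, matching the article's example.
def pbr_benefit_share(reliability_gain_pct: float, threshold_pct: float = 10.0) -> float:
    """Percentage of benefits a utility captures for a given reliability gain.

    At or below the threshold, sharing is linear (a 10% gain captures 10%).
    Above it, each extra point of gain earns one extra bonus point,
    making the reward curve steeper than linear.
    """
    if reliability_gain_pct <= threshold_pct:
        return reliability_gain_pct
    bonus = reliability_gain_pct - threshold_pct
    return reliability_gain_pct + bonus

print(pbr_benefit_share(10))  # 10.0 -> linear region
print(pbr_benefit_share(15))  # 20.0 -> 15% gain plus a 5% incentive
```

The steeper-than-linear slope past the threshold is what gives the mechanism its asymmetry: pushing beyond "good enough" pays disproportionately more.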

Regulation is extremely complex, which makes compliance and strategy difficult. It should always strive to provide higher reliability, which is what customers see and feel. But regulation must also help utilities stay financially healthy.

I’d be interested in learning your thoughts on this issue in the Comments below.


Soren Varela

Publication Date

November 21, 2019