Measuring performance is an important topic in renewables. In fact, I sometimes feel we talk about it way too much, without really understanding how Key Performance Indicators function and represent data. Don’t get me wrong, I am all for measuring and keeping track of asset management performance ratios. However, I do wish that the KPIs we track so vehemently really captured the entire picture…
KPIs are the numbers that tell investors and stakeholders exactly how the project is doing: whether it generated as much electricity as we expected last month, or whether losses surpassed our expectations. They tell us, in short, if the project is profitable or meeting our DSCR (Debt Service Coverage Ratio).
However, experts in performance asset management sometimes question the reliability of these very KPIs. After all, they’re based on baselines and assumptions that we’ve inherited from the incumbent – the traditional energy industry.
In this discussion, I want to show you how we can make these indicators more representative because yes, industry standards are often inherently flawed. There’s a better, properly adjusted way of looking at performance.
And you’d rather be more than just ‘standard’, right?
The Problem with Performance Asset Management KPIs
It wouldn’t be incorrect to say that renewable energy has borrowed KPIs from its non-green counterpart—an industry that treats its resources as if they’re infinite. So, their logic is: “I can pump fuel into my generators 24 hours a day; the only limitation is how often my equipment needs to be shut off for repair or breaks down.”
Unfortunately, the same logic cannot be applied to renewables. Solar power, for instance, depends on factors like the amount of sun, time of year and varying temperatures.
So, as we measure the KPI Plant Availability, one wonders how meaningful this indicator is if the plant is ready to function but there is simply no sunlight available to produce solar power. Failing to take this into account makes it clear that non-green standards are not optimal for renewable energy.
Here’s an example of how skewed performance asset management can become. Let’s say a solar developer is buying 40,000 panels for a site. Each of these panels has a tolerance. This number is incorporated into a generic loss calculation, which is then modelled using industry-accepted averages. Sounds about right? But when was the last time the industry revised this average and questioned its accuracy?
Hence, the baseline is flawed, even before a single panel is installed. In an ideal scenario, we should flash-test these 40,000 panels – calculate the actual loss and its impact, string by string. And then analyze those strings every few years to see how consistently they are degrading.
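To make the idea concrete, here is a minimal sketch of what string-level flash-test analysis could look like. Every number in it – the nameplate rating, the generic loss average, the flash-test readings – is a hypothetical illustration, not an industry figure:

```python
# Hypothetical sketch: compare a generic industry-average tolerance loss
# against losses computed from actual flash-test measurements, string by string.

NAMEPLATE_W = 400          # rated power per panel (W) - assumed value
GENERIC_LOSS_PCT = 1.5     # "industry-accepted" average loss (%) - assumed value

# Flash-test results for a few strings (measured W per panel, made-up data)
flash_tests = {
    "string_A": [396.1, 398.0, 393.5, 399.2],
    "string_B": [390.4, 388.9, 392.7, 391.0],
}

def string_loss_pct(measured, nameplate=NAMEPLATE_W):
    """Percentage shortfall of measured output vs. nameplate for one string."""
    avg = sum(measured) / len(measured)
    return (nameplate - avg) / nameplate * 100

for name, readings in flash_tests.items():
    actual = string_loss_pct(readings)
    print(f"{name}: actual loss {actual:.2f}% vs generic {GENERIC_LOSS_PCT}%")
```

Re-running the same calculation on fresh flash-test data every few years is what would let you track how consistently each string is degrading, instead of trusting a generic average forever.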
While the feasibility of this method varies from one project to the next, the question remains – is anyone willing to spend the time and money to do so? Sadly, the answer is rarely yes.
Losses Associated with Downtime
It’s common for financial asset managers to consider ‘unavailability’, i.e. how often the equipment was down, as a KPI. However, beyond simply looking at downtime, I’d argue that all teams should also look at the losses associated with downtime.
Let’s say an inverter goes down on a snowy, cloudy afternoon in December. The same set of inverters goes out at noon on a clear day in June. It takes two days to get these inverters back up. This is not the same downtime. These are not the same losses.
What you need to know, apart from trends in equipment unavailability, is the actual energy lost due to unavailability, measured against how much energy the project should have been generating.
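The December-versus-June contrast above can be sketched in a few lines. The hourly output profiles below are invented for illustration; the point is only that identical downtime hours translate into very different energy losses:

```python
# Hypothetical sketch: the same downtime hours can mean very different
# energy losses depending on what the plant should have been producing.

def downtime_loss_kwh(expected_kw, outage_hours):
    """Sum the expected output (kW) over the hours the equipment was down.

    With hourly buckets, kW per hour sums directly to kWh."""
    return sum(expected_kw[h] for h in outage_hours)

# Expected inverter output by hour of day (kW) - made-up profiles
december_cloudy = {11: 40, 12: 55, 13: 50, 14: 35}
june_clear      = {11: 380, 12: 420, 13: 430, 14: 400}

outage = [11, 12, 13, 14]  # the same four hours of downtime in both cases

print(downtime_loss_kwh(december_cloudy, outage))  # 180 kWh lost
print(downtime_loss_kwh(june_clear, outage))       # 1630 kWh lost
```

Same unavailability KPI, roughly nine times the loss – which is exactly why downtime hours alone don’t tell the story.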
This is a meaningful set of data. It can tell you whether or not you’re hemorrhaging money because your equipment keeps breaking down in the summer. Not only do you need to invest in fixing this equipment more quickly; your team should absolutely look at implementing a proactive maintenance program to avoid these breakdowns altogether.
Returns on Maintenance
So, you’ve figured out that your equipment breaks down at a certain time and you’ve invested money to address that problem—now what?
Similar to any industry, in renewables, you need to know if the improvements actually improved the asset’s performance. So, the same way that we calculate the return on investment (ROI) for an entire project, you should also be calculating the ROI on maintenance. This metric will help evaluate if the measures you’re putting in place are actually working.
At the end of the day, if you’re going to spend $10,000 to make $9,000, the repair is not worth it. Taking this into account means tracking performance improvements against maintenance investments. Previously, I’ve talked about a similar trade-off facing asset managers before they decide whether to re-power a solar project or not.
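That trade-off can be written down as a one-line metric. The recovered-energy and tariff figures below are assumptions chosen to reproduce the $10,000-for-$9,000 scenario:

```python
# Hypothetical sketch: ROI on a maintenance intervention, with made-up numbers.
# recovered_kwh and the tariff are illustrative assumptions.

def maintenance_roi(cost_usd, recovered_kwh, tariff_usd_per_kwh):
    """Return ROI as a fraction: (revenue recovered - cost) / cost."""
    revenue = recovered_kwh * tariff_usd_per_kwh
    return (revenue - cost_usd) / cost_usd

# Spending $10,000 to recover energy worth only $9,000 is a losing repair:
roi = maintenance_roi(10_000, recovered_kwh=150_000, tariff_usd_per_kwh=0.06)
print(f"ROI: {roi:.0%}")  # -10%: the repair destroyed value
```

Tracking this number per intervention is what turns “we fixed it” into “the fix paid for itself”.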
By realizing how effective your repairs are – or, in retrospect, how (un)important some of them were – you can zero in on the most significant issues facing a project and apply these learnings to future solar projects.
Now that is asset intelligence.
Recommendations for Performance Asset Management KPIs
The solid foundation of any renewable project’s KPI should be weather-adjusted performance. With that in mind, an asset manager’s task becomes two-pronged. They will now be asking a two-part question: 1) Is my plant functioning optimally… 2) in this weather?
With this adjustment, you are taking weather predictions and calculating how your equipment should perform given a certain amount of sunlight and environmental conditions (temperature, wind speed, etc.). This is your weather-adjusted baseline. Consequently, you will now be relying on a combination of on-the-ground measurements, satellite imaging and private weather prediction services. The result? Potentially novel ways to re-forecast weather and performance.
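A heavily simplified version of such a baseline, for solar, might look like the sketch below. The plant area, system efficiency and temperature coefficient are illustrative assumptions, not validated plant parameters:

```python
# Hypothetical sketch of a weather-adjusted expected-output calculation for PV.
# All coefficients here are illustrative assumptions, not plant-specific values.

def expected_output_kwh(irradiance_kwh_m2, area_m2, efficiency,
                        cell_temp_c, temp_coeff_per_c=-0.004, ref_temp_c=25.0):
    """Expected energy: irradiance * area * efficiency, derated for cell
    temperature above the 25 C reference."""
    temp_derate = 1 + temp_coeff_per_c * (cell_temp_c - ref_temp_c)
    return irradiance_kwh_m2 * area_m2 * efficiency * temp_derate

# A hot, sunny day: high irradiance but a temperature penalty
baseline = expected_output_kwh(6.2, area_m2=10_000, efficiency=0.18,
                               cell_temp_c=45.0)
print(f"Weather-adjusted baseline: {baseline:.0f} kWh")
```

Comparing actual generation to this number – rather than to a fixed annual forecast – is what answers “is my plant functioning optimally in this weather?”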
However, be warned. This re-forecasted weather-adjusted performance may be a challenging task – owing to the climate change debate. With weather patterns all over the place, it may take our industry some effort to reconcile new weather expectations for the next 10 or 20 years with the 100-year history of data and its relative predictability.
Only by creating a robust weather-adjusted performance formula will you be able to set a new asset performance baseline.
Once you have the weather-adjusted forecasts, I would argue that it’s necessary to redo this calculation every one or two years. The goal? Better forecasts and improved performance.
This is where interannual variability – a statistical evaluation of historical year-over-year weather – comes in. While in solar energy this difference tends to be much smaller, it can have a huge impact for wind power.
For instance, annual performance that seems off base could be explained by interannual variability alone. However, if you are consistently outside normal interannual variability, you may need to re-evaluate your weather sources and rebuild your weather-adjusted baseline. Doing so, you will avoid unnecessary repairs to equipment when it was really the weather that was mis-predicted.
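One simple way to apply that test is to flag a year only when it falls outside a band defined by historical variability. The historical yields and the two-sigma threshold below are illustrative assumptions:

```python
# Hypothetical sketch: flag a year's performance only if it falls outside
# normal interannual variability. The historical yields are made-up data.

from statistics import mean, stdev

# Specific yield (kWh/kWp) for eight past years - illustrative numbers
historical_yield_kwh_kwp = [1480, 1510, 1445, 1530, 1470, 1495, 1460, 1520]

def outside_normal_variability(this_year, history, n_sigma=2.0):
    """True if this year's yield deviates more than n_sigma standard
    deviations from the historical mean."""
    mu, sigma = mean(history), stdev(history)
    return abs(this_year - mu) > n_sigma * sigma

print(outside_normal_variability(1430, historical_yield_kwh_kwp))  # False: within the band
print(outside_normal_variability(1300, historical_yield_kwh_kwp))  # True: worth investigating
```

A year inside the band is just weather; a year outside it is a signal that either the plant or your weather sources deserve a closer look.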
As an industry, the accuracy of our weather adjusted KPIs will largely depend on the kind of support we get from governments. Accounting for changing patterns in weather may require collaboration between meteorologists and industry experts.
Such collaboration will also help put together private research groups that would shoulder the burden of dedicating time and energy to extensive research.
Consequently, we’d be equipped with timely industry insights and in turn, more optimal (predictable) projects.
Evaluate, Rinse & Repeat
I cannot stress enough the importance of re-evaluating a project’s performance baselines. I won’t lie to ya – it’s a lot of work. It’s high impact. And it’s a recipe for success in renewable projects.