Overcoming the false promise of AI in clean energy operations

By Will Troppe, Senior Director of Product at Power Factors
AI isn’t new. The clean energy software space is littered with companies that overpromised on AI-powered capabilities over the last decade and failed to deliver. Some pivoted successfully; others are no longer around to tell the tale.
It’s different now. In the last several years, commercial large language models (LLMs) and “agentic” AI have changed the way people engage with their data and use their software. It feels like AI is steadily eating the world.
Clean energy operators, increasingly confronting the complexity associated with energy storage, are optimistic skeptics. They want to benefit from solutions that technology is just now making possible, but they’ve been burned. Why should this time be different?
Done right, AI can make clean energy operations dramatically more efficient: streamlined, automated, agentic workflows that let you do more, and approachable analytics that free performance engineers to conduct deep dives and act. Done wrong, it leaves users contending with confusing black boxes and meaningless, unactionable analytics they distrust.
The promise is real — and better yet, it’s achievable. Doing AI right takes clean, structured data; the right AI tool for the right job; and plenty of hard-won operational experience. Let’s dive in.
Cloud Truth vs. Ground Truth
Too often, operators struggle to aggregate, contextualize, and understand their data at scale. The old cliché is “garbage in, garbage out”; it applies to AI systems and data platforms alike.
In battery energy storage systems (BESS), the stakes are even higher. Operators need to respond in real time to protect batteries from degradation and take advantage of revenue opportunities. But if the inputs are inaccessible, outdated, or unstructured, any recommendation AI makes is at best unhelpful, and at worst, harmful to asset and human life.
When your data platform isn’t up to date, there’s an inherent misalignment between what the system says is happening and what’s actually happening in the field. Your cloud truth differs from your ground truth. What you see in Windows doesn’t match what you see out your window.
Your team starts asking:
- Is this event real?
- Can I trust this KPI in my report?
- Where can I double-check before I act?
We’ve seen it firsthand: lean asset management teams spend more time wrangling data than they do acting on it. Until you establish data trust on clean, structured data, most AI just adds noise to an already overloaded workflow.
What’s Missing: Standardization, Transparency, and Traceability
So, what’s the solution? It starts with adopting a data platform built for trust. One that gets three things right:
Standardization – Your data needs to speak the same language across systems. Clean, aggregated, normalized, and consistently structured data makes large-scale insights possible.
Transparency – Your team needs to understand how and why a recommendation was made. Platforms must expose the logic behind every output, not hide it in a black box. Validate the results of calculations and analytics in ways human users can understand: visually.
Traceability – You should be able to trace every event, insight, calculation, data point, or anomaly back to its origin in the field and through every data transformation along the way. From raw signal to recommendation, you need a clear and immediately accessible line of sight.
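To make traceability concrete, here is a minimal sketch of the idea: a value that carries its own lineage from the raw field signal through every transformation. The class, tag names, and transformation steps are hypothetical illustrations, not any particular platform’s implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TracedValue:
    """A value that carries its full transformation history."""
    value: float
    unit: str
    source: str                      # hypothetical raw tag / device ID in the field
    lineage: list = field(default_factory=list)

    def transform(self, new_value: float, unit: str, step: str) -> "TracedValue":
        """Return a new value, appending a lineage record for this step."""
        record = {
            "step": step,
            "input": self.value,
            "output": new_value,
            "at": datetime.now(timezone.utc).isoformat(),
        }
        return TracedValue(new_value, unit, self.source, self.lineage + [record])

# Hypothetical path: raw inverter power signal -> cleaned -> reporting KPI
raw = TracedValue(1520.0, "kW", source="SITE-A/INV-07/AC_POWER")
cleaned = raw.transform(1500.0, "kW", step="spike filter (+/- 3 sigma)")
kpi = cleaned.transform(1.5, "MW", step="unit conversion kW -> MW")

for entry in kpi.lineage:            # walk the KPI back to the field signal
    print(entry["step"], entry["input"], "->", entry["output"])
```

With lineage attached to every value, the “where did this number come from?” question becomes a lookup rather than an investigation.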
If you have to reverse-engineer your platform to understand what’s happening by digging into code or hefty documentation repositories, you’ve already lost trust. Platforms should be trustworthy by default, with issues surfaced to users by exception, alongside clear confidence intervals.
When those fundamentals are in place, AI becomes a tool you can rely on for smarter alerts, faster diagnostics, better forecasting, and real-time optimization. That means clearer signals, faster answers, and the confidence to act when it matters most.
The Right Tool for the Job
AI can be defined broadly: it’s an algorithm that performs an action otherwise done by a human. (In high school, I built a simple, rules-based AI that I played the card game “Uno” against. It was easier than making the game multiplayer!)
Our industry has had AI-powered analytics for decades. Algorithms could intelligently predict equipment failures based on historical deviations from accepted performance ranges, or create “digital twins” that compare actual vs. theoretical performance to detect issues and trigger sensor or model recalibration. In the analytics world, we’ve always used the right AI algorithm for the right job, a practice we’ll continue even as the available library of AI algorithms expands.
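As a rough illustration of the digital-twin pattern (a sketch with made-up numbers and a hypothetical tolerance, not any vendor’s model), you can compare actual output against a model’s expected output and flag sustained underperformance:

```python
# Minimal digital-twin-style residual check (illustrative only).
# expected_power would come from a physics or statistical model of the asset;
# actual_power from field telemetry. The numbers below are made up.

def flag_deviations(actual_power, expected_power, tolerance=0.05, min_run=3):
    """Flag sample indices where actual underperforms the model by more than
    `tolerance` (fractional) for at least `min_run` consecutive samples."""
    flags, run = [], 0
    for i, (actual, expected) in enumerate(zip(actual_power, expected_power)):
        if expected > 0 and (expected - actual) / expected > tolerance:
            run += 1
            if run >= min_run:
                flags.append(i)
        else:
            run = 0
    return flags

actual = [980, 975, 940, 920, 915, 990]          # kW, hypothetical telemetry
expected = [1000, 1000, 1000, 1000, 1000, 1000]  # kW, hypothetical model output
print(flag_deviations(actual, expected))         # indices where underperformance persists
```

The intelligence is in the model and the thresholds; the value is in catching a real deviation before it becomes lost production.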
The software industry is increasingly obsessed with AI powered by LLMs. As even the most novice ChatGPT user has come to understand, LLMs are not the best tool for every job. They, and their broader category, “generative AI,” are great for some purposes, like summarizing text. They’re not yet reliable or production-grade for other purposes, like structuring and cleaning data or extracting trustworthy and valuable insights from datasets across the board.
As clean energy software users increasingly expect agentic workflows in their applications, they must exercise caution. Work with vendors who know the strengths and limitations of any technology. Leverage tried-and-true methods where they’re still best. Augment them with agentic AI.
And as you build comfort with your next-generation AI tools, crawl, then walk, then run. Attack the most impactful, inefficient, frequently occurring use cases first. Favor use cases that are either subjective or easy to verify. LLMs hallucinate, so keep them away from use cases with low tolerance for false positives and negatives. Lean on human-in-the-loop workflows while you refine and before you automate. And don’t have your most junior engineers test the tools; target more experienced users who can intuitively spot hallucinations.
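One simple way to keep a human in the loop while you build trust is an explicit review gate: AI suggestions are queued for an engineer by default and auto-applied only once confidence and track record justify it. The sketch below is hypothetical; the threshold and field names are illustrative, not a prescribed workflow.

```python
# Hypothetical human-in-the-loop gate for AI-generated suggestions.
# `suggestion` would come from an AI/agentic workflow; names are illustrative.

AUTO_APPLY_THRESHOLD = 0.95   # tighten or relax as trust in the tool grows

def route_suggestion(suggestion: dict, auto_apply_enabled: bool = False) -> str:
    """Decide whether a suggestion is applied automatically or sent to a human."""
    confidence = suggestion.get("confidence", 0.0)
    if auto_apply_enabled and confidence >= AUTO_APPLY_THRESHOLD:
        return "auto-applied"                 # only after the crawl/walk phases
    return "queued for engineer review"       # default: a human makes the call

suggestion = {
    "summary": "Derate inverter INV-07 pending IGBT inspection",
    "confidence": 0.82,
}
print(route_suggestion(suggestion))           # queued for engineer review
```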
Why AI Matters for BESS
Battery storage is a proving ground for AI in operations.
Push too hard and you degrade your battery. Play it too safe, and you miss revenue opportunities. That’s why BESS operators can’t afford to make decisions based on unclear data.
When insights are driven by standardized, transparent, and traceable data, AI can finally help operators balance performance, compliance, and profitability.
You gain confidence that:
- You can verify availability for market participation
- You can prove warranty compliance
- You can benchmark performance across sites
- You’re acting on real issues, not false alarms
For BESS operators, clear and trustworthy data is the difference between asset health and costly mistakes.
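Take the first of those points: verifying availability for market participation ultimately comes down to simple arithmetic over outage events you can trust. A minimal sketch, assuming hypothetical event records with traceable start and end times:

```python
from datetime import datetime

# Hypothetical outage events for one BESS unit, each traceable to field data.
events = [
    {"start": datetime(2024, 6, 1, 2, 0), "end": datetime(2024, 6, 1, 5, 30)},
    {"start": datetime(2024, 6, 12, 14, 0), "end": datetime(2024, 6, 12, 16, 0)},
]

period_start = datetime(2024, 6, 1)
period_end = datetime(2024, 7, 1)
period_hours = (period_end - period_start).total_seconds() / 3600

downtime_hours = sum(
    (e["end"] - e["start"]).total_seconds() / 3600 for e in events
)

availability = 1 - downtime_hours / period_hours
print(f"Availability: {availability:.2%}")   # only as trustworthy as the events behind it
```

The math is trivial; the hard part is trusting the events that feed it, which is exactly where standardization, transparency, and traceability pay off.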
AI Isn’t the Future Without Trusted Data
In clean energy, the newest generation of AI is well positioned to augment human decision-making. Our space is still littered with nuanced, inefficient, manual tasks that force operations headcount to scale linearly with fleet size, and AI can serve as a supercharged solution.
When the foundation is strong and built on reliable, stable data, AI can deliver on its promises.
The future isn’t just AI making decisions for you. It’s AI that works with you, surfacing the right insights at the right time, so you can make the call that only a human can.
The goal isn’t “more AI.” It’s better outcomes.
And those start with better data.
Download our latest e-book:
5 overlooked gaps in your BESS strategy—and what they’re costing you
Will Troppe is Senior Director of Product at Power Factors. Follow him on LinkedIn for more on data trust, asset performance management, and clean energy operations.