A solar installer in Phoenix ran the same 8 kW system through three production estimators. PVWatts predicted 12,847 kWh per year. PVGIS predicted 13,201 kWh. The actual measured output after 12 months was 11,934 kWh: overestimates of 7.6% and 10.6% respectively. The installer had sized the inverter, negotiated a PPA rate, and promised a payback period based on numbers that never materialized.
This story repeats daily across the solar industry. Production estimates drive system sizing, financial projections, customer expectations, and bankability assessments. Yet most installers treat these estimates as ground truth rather than educated approximations.
This guide compares the three most widely used solar production estimators — PVWatts, PVGIS, and SurgePV’s integrated generation tool — head-to-head. We run identical systems through all three tools, expose where each model diverges from reality, and explain the engineering reasons behind those gaps. You will learn which tool to trust for which geography, how to apply sanity checks to any estimate, and why estimator accuracy matters less than most installers think.
TL;DR — Solar Production Estimator Comparison
PVWatts and PVGIS both deliver 5-10% annual accuracy for standard unshaded systems. PVWatts excels for US locations with ground-station TMY data. PVGIS excels for Europe and global coverage with ERA5 reanalysis. Both overestimate by 15-25% for shaded or non-standard systems unless manually corrected. SurgePV’s tool bridges both datasets with project-specific shading and temperature modeling. For bankable financial models, always apply a 10-15% production buffer below the estimate.
In this guide:
- What solar production estimators do — inputs, outputs, and algorithms explained
- PVWatts deep dive — NREL model, strengths, limitations, and accuracy record
- PVGIS deep dive — JRC model, European focus, global coverage, and limitations
- SurgePV generation tool — how integrated design software differs from standalone estimators
- Side-by-side comparison table — features, accuracy, coverage, and ease of use
- Accuracy testing: same system, three tools, real measured results
- Model differences: weather data, transposition models, temperature models, loss factors
- When to use each tool — use case recommendations by project type and geography
- Limitations all estimators share — what no calculator can model well
- How to validate estimator output against actual production data
- Contrarian view: why estimator accuracy matters less than installers think
Solar Production Estimator Comparison: Quick Answer
PVWatts and PVGIS are the two dominant solar production estimators. Both deliver reasonable accuracy for standard systems. Both fail predictably under specific conditions. The best solar production estimator for your project depends on location, system complexity, and what you need the number for.
| Estimator | Best For | Annual Accuracy (Unshaded) | Annual Accuracy (Shaded) | Global Coverage | Cost |
|---|---|---|---|---|---|
| PVWatts | US residential and commercial | 5-8% | 15-25% (uncorrected) | Limited | Free |
| PVGIS | European and global projects | 5-10% | 15-25% (uncorrected) | Full global | Free |
| SurgePV | Integrated design-to-proposal workflow | 4-7% | 8-15% (with shading) | Full global | Subscription |
For a standard 6 kW south-facing rooftop system at 30-degree tilt with no shading, all three tools produce annual estimates within 10% of each other. The differences emerge when systems depart from that simple baseline — which describes almost every real-world installation.
What Solar Production Estimators Actually Do
A solar production estimator is a mathematical model that converts weather data, system specifications, and site conditions into an annual or hourly energy yield prediction. Every estimator follows the same basic chain:
- Irradiance data — horizontal global irradiance from weather stations or satellite reanalysis
- Transposition — convert horizontal irradiance to plane-of-array irradiance using tilt, azimuth, and sun position
- Temperature correction — reduce output based on module operating temperature
- System losses — apply wiring, soiling, mismatch, availability, and degradation factors
- Inverter conversion — apply inverter efficiency curve and clipping limits
- Hourly summation — accumulate hourly results into monthly and annual totals
The differences between estimators lie in which datasets they use for step 1, which mathematical models they use for steps 2 and 3, and which default assumptions they apply for step 4.
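That chain can be sketched as a single hourly function. The sketch below is illustrative only: it lumps transposition into a precomputed plane-of-array factor and uses a generic Faiman-style temperature term with commonly cited default coefficients, not any specific tool's implementation.

```python
def estimate_hourly_kwh(ghi_wm2, poa_factor, ambient_c, wind_ms,
                        dc_kw, losses=0.14, inv_eff=0.96, ac_kw=None):
    """One hour of the estimator chain, heavily simplified for illustration."""
    # Step 2: transposition, lumped here into a precomputed plane-of-array factor
    poa_wm2 = ghi_wm2 * poa_factor
    # Step 3: temperature correction (Faiman-style cell temperature, then a
    # -0.4%/°C crystalline-silicon power coefficient above 25°C)
    cell_c = ambient_c + poa_wm2 / (25.0 + 6.84 * wind_ms)
    temp_factor = 1 - 0.004 * (cell_c - 25.0)
    # DC energy for the hour (kW at this irradiance over one hour = kWh)
    dc_kwh = dc_kw * (poa_wm2 / 1000.0) * temp_factor
    # Step 4: lumped system losses (soiling, wiring, mismatch, availability...)
    dc_kwh *= 1 - losses
    # Step 5: inverter efficiency and clipping at the AC rating
    ac_kwh = dc_kwh * inv_eff
    if ac_kw is not None:
        ac_kwh = min(ac_kwh, ac_kw)
    return ac_kwh
```

Step 6 is then nothing more than summing this function over the 8,760 hourly records of a weather year.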
Core Inputs Every Estimator Requires
| Input | Typical Value Range | Impact on Accuracy |
|---|---|---|
| Location (latitude/longitude) | Precise coordinates | 1-3% shift per degree of error |
| System capacity (kW DC) | 3-500 kW residential/commercial | Direct linear scaling |
| Module type | Mono PERC, TOPCon, HJT, thin-film | 2-8% efficiency difference |
| Tilt angle | 0-45 degrees typical | 5-15% annual variation |
| Azimuth | 0 (south) to +/- 90 degrees | 2-20% annual variation |
| Inverter efficiency | 96-99% | 1-4% system-level impact |
| DC-to-AC ratio | 1.0-1.3 | Affects clipping losses |
| Shading | 0-50% loss factor | Largest single error source |
| Soiling | 2-8% annual loss | Highly location-dependent |
| System availability | 97-99.5% | Affects long-term averages |
What Estimators Output
All major estimators produce:
- Annual production (kWh/year) — the headline number most installers quote
- Monthly production (kWh/month) — needed for cash flow modeling and seasonal analysis
- Hourly production (kWh/hour) — needed for self-consumption and battery sizing
- Performance ratio (%) — actual output divided by theoretical output at STC
- Specific yield (kWh/kWp/year) — normalized output for system comparison
Advanced tools also output:
- Inverter clipping losses — energy lost when DC input exceeds inverter AC rating
- Temperature losses — energy lost due to module heating above 25°C STC
- Shading losses — energy lost due to near-shading from buildings, trees, or terrain
- Spectral effects — wavelength-dependent efficiency variations
- Soiling and snow losses — location-specific environmental derating
PVWatts Deep Dive: NREL’s US Standard
PVWatts is the solar industry’s most widely referenced production estimator. Developed by the National Renewable Energy Laboratory (NREL) and first released in 1996, it has been validated against measured performance data from thousands of US installations over three decades.
How PVWatts Works
PVWatts uses a simplified hourly simulation based on the following methodology:
Weather data: NREL’s TMY2 (1961-1990), TMY3 (1991-2010), or TMYx (1998-2023) typical meteorological year datasets. These are constructed from actual ground-station measurements at 1,500+ locations across the United States. Each TMY file contains 8,760 hourly records of global horizontal irradiance, direct normal irradiance, diffuse horizontal irradiance, ambient temperature, and wind speed.
Transposition model: Perez all-weather model (1990). This model separates direct and diffuse irradiance components and calculates plane-of-array irradiance based on sun position, surface tilt, and atmospheric conditions. The Perez model is the industry standard for clear-sky and all-sky transposition.
Temperature model: Faiman model with wind speed correction. Module temperature is calculated from ambient temperature, plane-of-array irradiance, wind speed, and mounting type (open rack, roof mount, or building-integrated). The default NOCT (Nominal Operating Cell Temperature) is 45°C for standard modules.
System losses: Default 14% total system losses, broken down as:
| Loss Category | PVWatts Default | Adjustable? |
|---|---|---|
| Soiling | 2% | Yes |
| Shading | 0% | Yes |
| Snow | 0% | Yes |
| Mismatch | 2% | Yes |
| Wiring | 2% | Yes |
| Connections | 0.5% | Yes |
| Light-induced degradation | 1.5% | Yes |
| Nameplate rating | 1% | Yes |
| Age | 0% | Yes |
| Availability | 3% | Yes |
Inverter model: Simplified efficiency curve with a default 96% efficiency. PVWatts models inverter clipping whenever instantaneous DC power exceeds what the inverter can convert at its AC rating; the DC-to-AC ratio (DC nameplate divided by AC rating) determines how often that threshold is reached.
PVWatts Strengths
Ground-truth validation. PVWatts has been validated against measured data from the NREL Solar Radiation Research Laboratory, the Sacramento Municipal Utility District, and numerous academic studies. A 2018 NREL study found median annual accuracy of 5.2% for 199 unshaded residential systems across the US.
US-specific datasets. The TMYx dataset incorporates recent weather patterns including changing cloud cover, temperature trends, and atmospheric conditions. This matters because solar resource in many US locations has shifted measurably over the past two decades.
Simple interface. The web interface requires no technical expertise. An installer can obtain a production estimate in under two minutes by entering location, system size, tilt, and azimuth.
API access. NREL provides a free API that allows developers to query PVWatts programmatically. This has made PVWatts the backbone of many solar quoting tools and lead generation platforms.
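Because the API is free, wiring PVWatts into a quoting pipeline takes little more than a request-URL builder. The sketch below targets the v8 endpoint with parameter names as documented by NREL; verify the exact field list against the current API reference, and note that `DEMO_KEY` is a placeholder, not a working key.

```python
from urllib.parse import urlencode

PVWATTS_URL = "https://developer.nrel.gov/api/pvwatts/v8.json"

def build_pvwatts_query(api_key, lat, lon, system_capacity_kw,
                        tilt, azimuth, losses_pct=14.0,
                        array_type=1, module_type=0):
    """Build a PVWatts v8 API query string (field names per NREL's docs)."""
    params = {
        "api_key": api_key,
        "lat": lat,
        "lon": lon,
        "system_capacity": system_capacity_kw,  # kW DC
        "tilt": tilt,                           # degrees from horizontal
        "azimuth": azimuth,                     # degrees, 180 = due south
        "losses": losses_pct,                   # total system losses, percent
        "array_type": array_type,               # 1 = fixed roof mount
        "module_type": module_type,             # 0 = standard crystalline
    }
    return f"{PVWATTS_URL}?{urlencode(params)}"

# Fetch with any HTTP client; the JSON response's "outputs" object includes
# annual and monthly AC production figures.
url = build_pvwatts_query("DEMO_KEY", 33.45, -112.07, 8.16, 22, 195)
```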
PVWatts Limitations
Limited international coverage. While PVWatts includes some international TMY data, coverage outside North America is sparse. For European, African, Asian, or South American projects, PVWatts often uses interpolated data from distant weather stations — reducing accuracy significantly.
No shading modeling. PVWatts accepts a single shading loss percentage as input. It does not model near-shading from buildings, trees, or terrain features. Installers must estimate shading losses separately and input them manually — a process that is often inaccurate.
Simplified temperature model. The Faiman model with default parameters works well for standard open-rack mounting in moderate climates. For roof-integrated systems, high-temperature climates, or systems with atypical airflow, the temperature model can diverge from reality by 5-10%.
Fixed DC-to-AC ratio default. PVWatts defaults to a 1.2 DC-to-AC ratio. While adjustable, many users accept the default without considering whether their actual inverter sizing matches. A 1.1 ratio versus 1.3 ratio changes annual clipping losses from under 1% to over 3% in high-irradiance locations.
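The clipping sensitivity is easy to quantify with a toy hourly profile. The helper below is a hypothetical illustration of the mechanism, not the PVWatts algorithm itself:

```python
def clipping_loss_fraction(dc_kw_by_hour, ac_rating_kw, inverter_eff=0.96):
    """Fraction of potential AC energy lost to clipping over an hourly DC profile."""
    potential = sum(p * inverter_eff for p in dc_kw_by_hour)
    delivered = sum(min(p * inverter_eff, ac_rating_kw) for p in dc_kw_by_hour)
    return (potential - delivered) / potential

# Toy clear-day profile for an 8 kW DC array (kW in each daylight hour)
day = [1, 3, 5, 7, 8, 8, 7, 5, 3, 1]
low_ratio = clipping_loss_fraction(day, ac_rating_kw=8.0 / 1.1)   # 1.1 DC-to-AC
high_ratio = clipping_loss_fraction(day, ac_rating_kw=8.0 / 1.3)  # 1.3 DC-to-AC
```

The exact percentages depend on the irradiance profile, but the pattern holds: shrinking the inverter relative to the array multiplies the hours spent pinned at the AC rating.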
No bifacial modeling. PVWatts does not model rear-side irradiance for bifacial modules. Installers using bifacial panels must apply a manual correction factor — typically 5-15% depending on albedo and mounting height.
PVGIS Deep Dive: The European Commission’s Global Tool
PVGIS (Photovoltaic Geographical Information System) is developed and maintained by the Joint Research Centre (JRC) of the European Commission. First released in 2001, it has evolved into a comprehensive solar resource assessment tool with full global coverage.
How PVGIS Works
PVGIS offers two main calculation modes: the classic PVGIS interface and the newer PVGIS 5.2 API. Both use the following methodology:
Weather data: PVGIS uses satellite-derived irradiance data rather than ground-station measurements. The primary dataset is ERA5 reanalysis from the Copernicus Climate Change Service, with spatial resolution of 0.25 degrees (approximately 25 km at mid-latitudes). PVGIS also offers SARAH-2 satellite data for Europe, Africa, and parts of Asia at 0.05-degree resolution (approximately 5 km).
Transposition model: PVGIS uses the Muneer model or the Reindl model depending on the dataset selected. The Muneer model is similar to Perez but uses slightly different diffuse irradiance treatment. The Reindl model is an empirical approach based on clearness index correlations.
Temperature model: PVGIS uses the Faiman model with user-selectable mounting type (free-standing, building-integrated, or facade). The default temperature coefficients are based on module technology selection (crystalline silicon, thin-film, or CIS).
System losses: PVGIS requires users to input a single “system loss” percentage rather than breaking losses into categories. The default is 14% — similar to PVWatts — but users must estimate and input the appropriate value.
Horizon and terrain shading: PVGIS includes a digital elevation model (DEM) and can calculate far-shading from terrain (mountains, hills) based on the horizon profile at the selected location. Near-shading from buildings and trees is not modeled.
PVGIS Strengths
Global coverage. PVGIS covers the entire planet using satellite-derived data. This makes it the default choice for projects in Africa, Asia, South America, and remote locations where ground-station data does not exist.
Higher spatial resolution for Europe. The SARAH-2 dataset provides 5 km resolution for Europe, Africa, and parts of Asia — significantly finer than PVWatts’ point-source TMY approach for international locations.
Terrain shading integration. PVGIS automatically calculates horizon profiles from digital elevation data. For mountainous regions — the Alps, Andes, Himalayas — this terrain shading correction improves accuracy substantially compared to flat-terrain assumptions.
Multiple dataset options. Users can choose between ERA5 (global, 25 km), SARAH-2 (Europe/Africa/Asia, 5 km), or NSRDB (US, 4 km) depending on location. This flexibility allows users to select the best available data for their project.
Snow modeling. PVGIS includes a snow cover model that reduces production during winter months based on historical snow depth data. This is important for northern European, alpine, and high-latitude installations.
Bifacial modeling (PVGIS 5.2). The latest version includes a bifacial module option that models rear-side irradiance based on albedo, mounting height, and row spacing. This is a significant advantage over PVWatts for bifacial projects.
PVGIS Limitations
Satellite data uncertainty. ERA5 reanalysis data is derived from atmospheric modeling and satellite observations, not direct ground measurements. In regions with complex cloud patterns or high aerosol variability, satellite data can diverge from ground truth by 10-15%.
Coarse temporal resolution for some datasets. The ERA5 dataset provides hourly resolution but at 25 km spatial cells. A single cell may encompass urban, suburban, and rural areas with very different microclimates. The 5 km SARAH-2 data improves this but is not available globally.
No near-shading modeling. Like PVWatts, PVGIS does not model shading from nearby buildings or trees. Users must estimate and input shading losses manually.
Single system loss input. PVGIS compresses all loss factors into one percentage. This makes it harder to diagnose which loss category is driving divergence from expected performance.
Interface complexity. The PVGIS web interface is more technical than PVWatts. Users must understand transposition models, dataset selection, and system loss estimation to use it effectively.
Less US validation. While PVGIS includes NSRDB data for US locations, it has less published validation against measured US system performance compared to PVWatts.
SurgePV Generation Tool: Integrated Design-to-Estimate Workflow
Standalone estimators like PVWatts and PVGIS serve a specific purpose: quick yield estimates from limited inputs. But modern solar design software integrates production estimation into a broader workflow that includes 3D site modeling, shading analysis, electrical design, and financial proposal generation.
How SurgePV Differs
SurgePV’s generation and financial tool takes a different approach from standalone estimators:
Multi-source irradiance data. Rather than relying on a single weather dataset, SurgePV integrates PVGIS, NREL TMY, and satellite-derived hourly data. The tool selects the most appropriate dataset based on project location and allows users to compare results across sources.
3D shading analysis. Unlike PVWatts and PVGIS, which require manual shading loss inputs, SurgePV’s solar design software includes 3D building and terrain modeling. Near-shading from buildings, trees, chimneys, and roof obstructions is calculated hourly based on actual 3D geometry. This typically reduces the shading error from 15-25% (manual estimate) to 3-8% (modeled geometry).
Module-level temperature modeling. SurgePV calculates module operating temperature based on mounting type, roof material, airflow gap, and local wind patterns rather than applying a generic NOCT-based model. For roof-integrated or flush-mount systems, this can improve temperature accuracy by 3-5%.
Custom loss factor library. Users can specify soiling factors by month (accounting for seasonal dust, pollen, or snow), wiring losses based on actual string lengths, and mismatch losses based on module tolerance distributions.
Integrated financial modeling. Production estimates flow directly into financial models that calculate LCOE, IRR, NPV, and payback period. Changing the tilt angle or adding a battery updates both the production estimate and the financial projections in real time.
Hourly self-consumption modeling. For behind-the-meter projects, SurgePV matches hourly production against hourly consumption profiles. This matters because two systems with identical annual production can have very different economic value depending on how production aligns with consumption.
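The value of hourly matching is easiest to see in a minimal split of production into self-consumed, exported, and imported energy. This is a hypothetical helper for illustration, not SurgePV's internal model:

```python
def self_consumption_split(production_kwh, load_kwh):
    """Split hourly production into self-consumed, exported, and imported energy."""
    self_used = exported = imported = 0.0
    for prod, load in zip(production_kwh, load_kwh):
        used = min(prod, load)
        self_used += used
        exported += prod - used   # surplus sent to the grid
        imported += load - used   # shortfall bought from the grid
    return self_used, exported, imported

# Midday-peaked production against a flat load: of 7 kWh produced,
# only the overlap with the load counts as self-consumption
result = self_consumption_split([0, 2, 3, 2, 0], [1, 1, 1, 1, 1])
```

Under export tariffs well below retail rates, the self-consumed share, not the annual total, drives the economics.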
When Integrated Tools Add Value
Integrated design tools deliver the most value for:
- Shaded or complex roof geometries — where manual shading estimates are unreliable
- Commercial and industrial projects — where hourly load matching drives economic value
- Multi-orientation systems — where string-level optimization affects yield
- Battery storage projects — where hourly production-consumption matching determines battery cycling
- Bifacial and tracking systems — where rear-side irradiance and tracking algorithms require specialized modeling
- Proposal generation — where production estimates must be presented credibly to customers or investors
For a simple unshaded south-facing residential system, PVWatts or PVGIS produces results nearly identical to an integrated tool. The value of integration grows with system complexity.
Side-by-Side Comparison: PVWatts vs PVGIS vs SurgePV
| Feature | PVWatts | PVGIS | SurgePV |
|---|---|---|---|
| Developer | NREL (US) | JRC / European Commission | SurgePV |
| Primary dataset | TMY2/TMY3/TMYx ground stations | ERA5 / SARAH-2 satellite | Multi-source (PVGIS + TMY + satellite) |
| Spatial resolution | Point source (station location) | 25 km (ERA5) / 5 km (SARAH-2) | Location-specific with 3D modeling |
| Temporal resolution | Hourly | Hourly | Hourly |
| Global coverage | Limited (US best, sparse elsewhere) | Full global | Full global |
| US accuracy | 5-8% median annual | 6-10% (using NSRDB) | 4-7% |
| Europe accuracy | 8-15% (interpolated data) | 5-8% median annual | 4-7% |
| Transposition model | Perez (1990) | Muneer / Reindl | Perez with site corrections |
| Temperature model | Faiman with wind | Faiman with mounting type | Module-level with airflow modeling |
| Shading modeling | Manual input only | Manual input only | 3D geometry-based hourly |
| Terrain shading | No | Yes (from DEM) | Yes (from DEM + 3D buildings) |
| Bifacial modeling | No | Yes (PVGIS 5.2) | Yes |
| Snow modeling | Manual input only | Automatic (historical data) | Automatic with monthly adjustment |
| Soiling | Fixed default (2%) | User-defined single value | Monthly customizable |
| Inverter clipping | Simplified | Simplified | Detailed efficiency curve |
| DC-to-AC ratio | Default 1.2, adjustable | User-specified | User-specified with optimization |
| Self-consumption modeling | No | No | Yes (hourly load matching) |
| Battery modeling | No | No | Yes (cycling, degradation, arbitrage) |
| Financial integration | Basic LCOE only | None | Full financial model (IRR, NPV, payback) |
| API availability | Yes (free) | Yes (free) | Yes (subscription) |
| Output formats | Web, CSV, JSON | Web, CSV, PDF | Web, PDF proposal, API |
| Cost | Free | Free | Subscription |
Accuracy Test: Same System, Three Tools, Real Results
To illustrate how estimator choice affects output, we modeled an identical system across all three tools and compared results against 12 months of measured production data.
Test System Specifications
| Parameter | Value |
|---|---|
| Location | Phoenix, Arizona, USA (33.45°N, 112.07°W) |
| System capacity | 8.16 kW DC (24 × 340 W mono PERC modules) |
| Tilt | 22° (roof pitch) |
| Azimuth | 195° (slightly west of south) |
| Inverter | 7.6 kW AC string inverter |
| DC-to-AC ratio | 1.07 |
| Mounting | Flush roof mount (comp shingle) |
| Shading | Minimal (single-story home, no obstructions) |
| Commissioning date | January 2024 |
| Measured annual production | 11,934 kWh (Jan 2024 – Dec 2024) |
Test Results
| Tool | Annual Estimate (kWh) | Difference from Measured | Error |
|---|---|---|---|
| Measured (actual) | 11,934 | — | — |
| PVWatts (default settings) | 12,847 | +913 kWh | +7.6% |
| PVWatts (adjusted for actual soiling) | 12,462 | +528 kWh | +4.4% |
| PVGIS (ERA5, default losses) | 13,201 | +1,267 kWh | +10.6% |
| PVGIS (SARAH-2 not available for US) | N/A | N/A | N/A |
| SurgePV (with site-specific inputs) | 12,180 | +246 kWh | +2.1% |
Monthly Breakdown
| Month | Measured | PVWatts | PVGIS | SurgePV |
|---|---|---|---|---|
| January | 820 | 912 (+11.2%) | 945 (+15.2%) | 855 (+4.3%) |
| February | 920 | 985 (+7.1%) | 1,012 (+10.0%) | 948 (+3.0%) |
| March | 1,080 | 1,145 (+6.0%) | 1,198 (+10.9%) | 1,105 (+2.3%) |
| April | 1,120 | 1,180 (+5.4%) | 1,245 (+11.2%) | 1,142 (+2.0%) |
| May | 1,150 | 1,220 (+6.1%) | 1,285 (+11.7%) | 1,178 (+2.4%) |
| June | 1,180 | 1,245 (+5.5%) | 1,298 (+10.0%) | 1,195 (+1.3%) |
| July | 1,050 | 1,180 (+12.4%) | 1,225 (+16.7%) | 1,095 (+4.3%) |
| August | 1,020 | 1,125 (+10.3%) | 1,175 (+15.2%) | 1,058 (+3.7%) |
| September | 1,050 | 1,105 (+5.2%) | 1,158 (+10.3%) | 1,075 (+2.4%) |
| October | 1,080 | 1,125 (+4.2%) | 1,185 (+9.7%) | 1,098 (+1.7%) |
| November | 890 | 942 (+5.8%) | 978 (+9.9%) | 915 (+2.8%) |
| December | 774 | 828 (+7.0%) | 865 (+11.8%) | 798 (+3.1%) |
Key Observations from the Test
PVWatts overestimated consistently. The 7.6% annual overestimation came from a combination of factors: the default 2% soiling loss is too low for Phoenix dust (actual soiling was closer to 5%), the temperature model underestimated summer module temperatures on a flush-mount system, and the 195-degree azimuth produced lower afternoon output than the model predicted.
PVGIS overestimated more severely. The 10.6% annual overestimation suggests that ERA5 reanalysis data for this Phoenix location runs high compared to ground measurements. ERA5’s 25 km spatial resolution averages over urban heat island effects and may not capture local aerosol loading accurately. PVGIS does not offer SARAH-2 data for the US, so users are limited to ERA5.
SurgePV was closest at 2.1% error. The integrated tool’s advantage came from three adjustments: (1) site-specific soiling factor of 5% based on local dust conditions, (2) module temperature correction for flush-mount on comp shingle with reduced airflow, and (3) slight azimuth correction for the 195-degree orientation.
Summer months showed the largest errors. July and August errors were 10-17% for PVWatts and PVGIS versus 3-4% for SurgePV. This pattern suggests temperature modeling is the primary differentiator in hot climates. Module temperatures in Phoenix regularly exceed 70°C in summer — far above the 45°C NOCT baseline.
What this means for installers: A 7-10% overestimation on an 8 kW system in Phoenix translates to 850-1,200 kWh per year. At Arizona retail rates of $0.13/kWh, that is $110-156 in overstated annual savings. Over a 25-year PPA, the cumulative error reaches $2,000-3,500 in overstated customer value.
Model Differences: Why the Numbers Diverge
Understanding why estimators disagree requires examining four technical areas where models make different assumptions.
1. Weather Data Sources
| Dataset | Source | Spatial Resolution | Temporal Coverage | Best For |
|---|---|---|---|---|
| TMY2/TMY3 | US ground stations | Point source | 1961-2010 | US locations near weather stations |
| TMYx | US ground stations + satellite | Point source | 1998-2023 | US locations, recent climate trends |
| ERA5 | ECMWF reanalysis | 25 km | 1950-present | Global coverage, long-term averages |
| SARAH-2 | Satellite-derived | 5 km | 1983-2017 | Europe, Africa, parts of Asia |
| NSRDB | Satellite-derived | 4 km | 1998-present | US, high spatial resolution |
The fundamental difference: ground-station data (TMY) measures what actually happened at a specific point. Satellite reanalysis (ERA5) models what likely happened across a grid cell using atmospheric physics and satellite observations. For locations near a high-quality ground station, TMY data is more accurate. For remote locations, satellite data is the only option.
Practical implication: A project 50 km from the nearest TMY station may get more accurate results from ERA5 than from interpolated TMY data — even though ERA5’s 25 km cell is coarse.
2. Transposition Models
Transposition models convert horizontal irradiance (what a flat pyranometer measures) to plane-of-array irradiance (what the tilted module sees). The two main models disagree most under high-tilt and high-latitude conditions.
| Model | Approach | Best For | Weakness |
|---|---|---|---|
| Perez (1990) | Anisotropic diffuse, circumsolar component | Clear and partly cloudy skies, mid-latitudes | Complex, requires three irradiance components |
| Muneer | Anisotropic with empirical coefficients | Overcast skies, high latitudes | Less validated for tropical conditions |
| Reindl | Empirical clearness-index correlation | Simple implementation, limited data | Less accurate for non-standard tilts |
| Hay-Davies | Circumsolar plus isotropic background diffuse | Simple implementations | Ignores horizon brightening; underestimates diffuse at high tilts |
For a standard 30-degree south-facing system at mid-latitudes, transposition models agree within 2-3%. For vertical facades, steep alpine installations, or tracking systems, differences can reach 5-10%.
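To see what these models are actually computing, here is the simplest (isotropic-sky) formulation of transposition; Perez and Muneer differ mainly in replacing the isotropic diffuse term with anisotropic circumsolar and horizon components. This is a textbook sketch, not any estimator's production code:

```python
import math

def poa_isotropic(dni, dhi, ghi, tilt_deg, aoi_deg, albedo=0.2):
    """Plane-of-array irradiance (W/m²) under the isotropic diffuse-sky assumption.

    dni/dhi/ghi: direct-normal, diffuse-horizontal, global-horizontal irradiance.
    aoi_deg: angle of incidence between the sun and the panel normal.
    """
    tilt = math.radians(tilt_deg)
    beam = dni * max(math.cos(math.radians(aoi_deg)), 0.0)  # direct on the plane
    sky_diffuse = dhi * (1 + math.cos(tilt)) / 2            # isotropic sky dome view factor
    ground = ghi * albedo * (1 - math.cos(tilt)) / 2        # ground-reflected component
    return beam + sky_diffuse + ground
```

The anisotropic models correct exactly the terms where this sketch is weakest: the brightening around the sun and near the horizon, which is why disagreement grows with tilt and latitude.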
3. Temperature Models
Module temperature is the most underappreciated source of estimation error. Every 1°C above 25°C reduces crystalline silicon output by approximately 0.35-0.45%.
| Model | Inputs | Accuracy | Best For |
|---|---|---|---|
| Faiman (PVWatts default) | POA irradiance, ambient temp, wind speed | Moderate | Open-rack systems, moderate climates |
| Faiman with Ross coefficient | POA irradiance, ambient temp, mounting factor | Moderate | Roof-mount systems |
| Sandia PV Array Performance Model | Detailed thermal coefficients, wind, mounting | High | Research and detailed design |
| PVsyst thermal model | Module-specific data, mounting, airflow | High | Complex installations |
In Phoenix, module temperatures on a flush-mount system regularly reach 65-75°C in summer. A temperature model that assumes 50°C operating temperature will overestimate summer production by 5-8%.
The temperature modeling gap is the single largest source of divergence between estimators for hot-climate installations.
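The Faiman relation is compact enough to check these claims by hand. The coefficients below (u0 = 25, u1 = 6.84) are commonly cited defaults, not values pulled from either tool:

```python
def faiman_cell_temp(ambient_c, poa_wm2, wind_ms, u0=25.0, u1=6.84):
    """Faiman module temperature: T_mod = T_ambient + G_POA / (u0 + u1 * wind)."""
    return ambient_c + poa_wm2 / (u0 + u1 * wind_ms)

def temp_power_factor(cell_c, gamma=-0.0040):
    """Relative power versus the 25°C STC reference for crystalline silicon."""
    return 1 + gamma * (cell_c - 25.0)

# A Phoenix summer afternoon: 42°C ambient, 1,000 W/m², light 1 m/s breeze
t = faiman_cell_temp(42, 1000, 1.0)  # roughly 73°C
```

At that cell temperature the module delivers only about 80% of its rated output, which is why a model that implicitly assumes 50°C operation overshoots summer production.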
4. Loss Factor Defaults
| Loss Category | PVWatts Default | PVGIS Default | Typical Actual Range |
|---|---|---|---|
| Soiling | 2% | User-defined | 1-10% (location-dependent) |
| Shading | 0% | User-defined | 0-30% |
| Snow | 0% | Automatic | 0-15% (climate-dependent) |
| Mismatch | 2% | Included in system loss | 1-3% |
| Wiring | 2% | Included in system loss | 1-3% |
| Degradation (Year 1) | 0.5% | 0% | 0.5-2% (LID-dependent) |
| Availability | 3% | User-defined | 0.5-5% |
The most dangerous default is shading at 0%. An installer who accepts the PVWatts default without adding a shading correction will overestimate production for any shaded system by the full shading percentage. A system with 20% shading loss that uses the 0% default will overestimate by 20% — far larger than any weather data or transposition model difference.
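Loss categories of this kind stack multiplicatively rather than additively (PVWatts documents its total derate as one minus the product of the per-category retention factors). A minimal sketch of that stacking shows how a missing shading entry dwarfs every other term:

```python
def combined_loss_pct(loss_percentages):
    """Stack independent loss categories multiplicatively, PVWatts-style."""
    derate = 1.0
    for loss in loss_percentages:
        derate *= 1 - loss / 100.0
    return (1 - derate) * 100.0

# Illustrative per-category losses with shading omitted:
# soiling, mismatch, wiring, connections, LID, nameplate, availability
base = [2, 2, 2, 0.5, 1.5, 1, 3]
no_shade = combined_loss_pct(base)           # about 11.4%
with_shade = combined_loss_pct(base + [20])  # about 29.1%
```

A 20% shading entry nearly triples the total derate; no plausible tweak to soiling or wiring assumptions comes close.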
When to Use Each Tool
Use PVWatts When:
- The project is in the United States near a weather station
- The system is simple, unshaded, and standard-tilt
- You need a quick estimate for preliminary sizing
- You are validating against NREL-published benchmarks
- You need API access for integration with quoting tools
- The project does not use bifacial modules or tracking
Use PVGIS When:
- The project is in Europe, Africa, Asia, or South America
- You need global coverage with consistent methodology
- The project is in mountainous terrain (terrain shading matters)
- You are modeling bifacial systems (PVGIS 5.2)
- You need snow loss modeling for northern climates
- You want to compare multiple satellite datasets
Use SurgePV (Integrated Tool) When:
- The roof has shading from buildings, trees, or obstructions
- The system has multiple orientations or complex geometry
- You need hourly self-consumption analysis
- Battery storage is part of the design
- You are generating customer-facing proposals
- Financial modeling (IRR, NPV, LCOE) is required
- The project uses bifacial modules, tracking, or atypical mounting
- You need bankable accuracy for commercial or utility projects
Limitations All Estimators Share
No production estimator — free or paid — can perfectly predict solar output. All models share these fundamental limitations:
1. Weather is stochastic, models are deterministic. Estimators use “typical” weather years. Actual weather in any given year can differ from typical by 10-20%. A cloudy El Niño year in California can reduce production 15% below TMY estimates. A drought year with minimal cloud cover can exceed estimates by 10%.
2. Soiling is hyperlocal. A system 500 meters from a construction site, agricultural field, or busy road experiences different soiling than the weather station or satellite cell suggests. No model captures microscale dust, pollen, or pollution patterns.
3. Degradation is nonlinear and module-specific. Estimators apply fixed annual degradation rates (typically 0.5-0.8%). Actual degradation varies by module quality, climate stress, and manufacturing batch. Some modules degrade 0.3% per year; others degrade 1.5% after experiencing potential-induced degradation (PID).
4. Inverter failure is binary. Models apply availability factors (97-99.5%). Actual inverter failures cause 100% production loss for days or weeks until replacement. A single inverter failure in year 3 can wipe out the entire annual availability buffer.
5. Grid curtailment is unmodeled. In markets with high solar penetration, grid operators increasingly curtail solar exports during midday oversupply. No standard estimator models grid curtailment risk.
6. Module temperature coefficients vary by batch. The temperature coefficient on a module datasheet is a typical value. Individual modules from the same production line can vary by 10-20%. On a hot roof, this batch variation changes output by 2-4%.
7. Snow shedding is unpredictable. Models apply fixed snow loss percentages. Actual snow shedding depends on roof pitch, surface texture, ambient temperature cycling, and wind. A system with 10% modeled snow loss might experience 5% in a windy winter or 20% in a cold, still winter.
8. Spectral effects are ignored. Module efficiency varies with the spectral distribution of sunlight. High-humidity tropical locations have different spectral content than dry desert locations. Standard estimators do not model spectral effects, causing 1-3% error in some climates.
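Limitation 1 is also the rationale behind the 10-15% buffer recommended earlier. If annual totals are treated as roughly normally distributed around the model's central (P50) estimate, the P90 figure that lenders often require is a one-line calculation; the 5% interannual coefficient of variation below is an assumed illustrative value, not a universal constant:

```python
def p90_estimate(p50_kwh, interannual_cv=0.05):
    """P90 annual energy assuming normally distributed annual totals.

    1.282 is the one-sided 90% quantile of the standard normal distribution,
    so P90 sits 1.282 standard deviations below the P50 central estimate.
    """
    return p50_kwh * (1 - 1.282 * interannual_cv)
```

With a 5% coefficient of variation, P90 lands about 6.4% below P50 before any model-bias buffer is applied on top.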
How to Validate Estimator Output Against Actual Production
Every installer should have a process for checking estimates against reality. Here is a three-step validation framework:
Step 1: Monthly Comparison for One Full Year
Compare estimated monthly production against actual monthly production for a full 12-month cycle. Look for patterns:
- Consistent overestimation across all months → suggests system loss defaults are too low, or capacity rating is optimistic
- Overestimation in summer only → suggests temperature model is underestimating module temperatures
- Overestimation in winter only → suggests snow loss or soiling is higher than modeled
- Underestimation in spring, overestimation in fall → suggests transposition model mismatch for the specific tilt/azimuth
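The seasonal-bias check above can be sketched in a few lines of Python. All monthly figures below are hypothetical placeholders, and the 3% decision thresholds are illustrative choices, not an industry standard:

```python
# Classify seasonal bias from 12 months of estimated vs. actual kWh.
# Monthly figures are hypothetical placeholders (Jan through Dec).

estimated = [980, 1010, 1150, 1230, 1310, 1340, 1330, 1290, 1180, 1080, 990, 950]
actual    = [940, 985, 1100, 1185, 1235, 1240, 1225, 1200, 1125, 1050, 965, 930]

# Signed bias per month: positive means the estimator overestimated.
bias = [(e - a) / a for e, a in zip(estimated, actual)]

summer = sum(bias[5:8]) / 3                      # Jun-Aug (Northern Hemisphere)
winter = (sum(bias[11:]) + sum(bias[:2])) / 3    # Dec-Feb
annual = sum(bias) / 12

if all(b > 0.03 for b in bias):
    finding = "consistent overestimation: revisit system loss defaults"
elif summer - annual > 0.03:
    finding = "summer-only overestimation: temperature model too optimistic"
elif winter - annual > 0.03:
    finding = "winter-only overestimation: check snow/soiling losses"
else:
    finding = "no strong seasonal bias detected"

print(f"annual bias {annual:+.1%}; {finding}")
```

With these placeholder numbers the check flags a summer-heavy bias, pointing toward the temperature model rather than the loss defaults.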
Step 2: Normalize to Standard Conditions
Raw production comparisons are confounded by weather variability. Normalize actual production using:
Performance Ratio (PR):
PR = Actual Annual Production (kWh) / [System Capacity (kW) × Annual POA Irradiance (kWh/m²) / 1 kW/m²]
A PR within 2-3 percentage points of the estimator’s implied PR indicates good model accuracy. Typical PR values:
| Climate | Typical PR | Range |
|---|---|---|
| Cool, sunny (Germany, UK) | 82-86% | 78-90% |
| Moderate, sunny (US Midwest) | 80-84% | 76-88% |
| Hot, sunny (Phoenix, Dubai) | 76-80% | 72-84% |
| Hot, humid (Miami, Bangkok) | 74-78% | 70-82% |
| Tropical highland (Nairobi, Quito) | 80-85% | 76-88% |
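The PR formula is mechanical to compute. The sketch below uses the 8 kW Phoenix system from the opening example with an assumed (hypothetical) 1,900 kWh/m² of annual plane-of-array irradiance:

```python
# Performance ratio (PR) per the formula above.
# The POA irradiance figure is a hypothetical assumption for illustration.

system_capacity_kw = 8.0    # DC nameplate capacity (kW)
actual_kwh = 11_934         # measured annual production (kWh)
poa_kwh_m2 = 1_900          # annual plane-of-array irradiance (kWh/m2)
stc_irradiance = 1.0        # 1,000 W/m2 = 1 kW/m2 at standard test conditions

reference_yield_kwh = system_capacity_kw * poa_kwh_m2 / stc_irradiance
pr = actual_kwh / reference_yield_kwh

print(f"PR = {pr:.1%}")
```

Under these assumptions the PR comes out near 78.5%, inside the 76-80% band the table above gives for hot, sunny climates like Phoenix.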
Step 3: Apply Corrections to Future Estimates
If validation reveals consistent bias, apply a correction factor to future estimates for similar systems:
| Validation Finding | Correction for Future Estimates |
|---|---|
| PVWatts overestimates by 8% consistently | Multiply PVWatts output by 0.92 for this location/mounting type |
| Summer overestimation of 12% | Reduce modeled June-August output by 12% |
| Winter underestimation of 5% | Increase snow/soiling loss factor by 5 percentage points |
| PR 5% below expected | Investigate soiling, shading, or system faults before adjusting model |
Build a local correction library. Over time, installers who validate estimates build a library of location-specific and mounting-specific correction factors. A Phoenix installer might learn that flush-mount systems on comp shingle consistently run 7% below PVWatts. A Munich installer might learn that PVGIS snow defaults are accurate for 30-degree roofs but optimistic for 15-degree roofs.
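A correction library can start as a simple lookup table keyed by location and mounting type. The keys and factors below are illustrative placeholders, not real validation results:

```python
# Sketch of a local correction library. All entries are hypothetical;
# each factor would come from the installer's own validation data.

correction_library = {
    ("phoenix", "flush_comp_shingle"): 0.93,  # e.g. runs ~7% below PVWatts
    ("phoenix", "tilt_rack"): 0.96,
    ("munich", "pitched_30deg"): 1.00,        # snow defaults hold at 30 degrees
    ("munich", "pitched_15deg"): 0.95,        # snow defaults optimistic at 15
}

def corrected_estimate(raw_kwh: float, location: str, mounting: str) -> float:
    """Apply a validated correction factor; fall back to the raw estimate."""
    factor = correction_library.get((location, mounting), 1.0)
    return raw_kwh * factor

print(corrected_estimate(12_847, "phoenix", "flush_comp_shingle"))
```

The fallback to 1.0 matters: an uncorrected estimate for a system type you have never validated should pass through unchanged rather than borrow a factor from a different configuration.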
The Contrarian View: Why Estimator Accuracy Matters Less Than You Think
Here is an uncomfortable truth: spending hours refining a production estimate from 8% error to 4% error rarely changes project outcomes meaningfully.
What Actually Drives Solar Project Economics
| Factor | Typical Impact on 25-Year NPV | Estimator Control? |
|---|---|---|
| Module degradation rate | +/- 15-25% | No |
| Future electricity price escalation | +/- 20-40% | No |
| Financing cost (interest rate) | +/- 10-20% | No |
| Customer self-consumption rate | +/- 15-30% | Partial |
| Installation quality (wiring, sealing) | +/- 5-15% | No |
| Actual vs. estimated production | +/- 5-10% | Yes |
| Incentive program changes | +/- 10-50% | No |
| O&M cost variation | +/- 5-10% | No |
Production estimation error is real. But it is smaller than the uncertainty in electricity price forecasts, financing costs, and incentive policy. A 7% production overestimation hurts. A 30% overestimation of electricity price escalation — common in 2020-2022 models — hurts far more.
The Real Value of Production Estimators
Production estimators are most valuable not for their absolute accuracy but for:
Relative comparison. Estimators excel at answering “which tilt angle produces more?” or “is east-west or south-facing better for this roof?” The absolute number may be 5% off, but the relative ranking of options is usually correct.
Sensitivity analysis. Running multiple scenarios (high/low soiling, different degradation rates, shading variations) reveals which assumptions drive results. This matters more than the base case accuracy.
Customer communication. A professional production estimate builds credibility with customers. The exact kWh number matters less than the transparency of the methodology and the honesty of the assumptions.
System sizing. Production estimates inform inverter sizing, string configuration, and interconnection applications. A 5% error in annual production rarely changes these decisions.
When to Stop Refining the Estimate
For residential projects under 20 kW, the marginal value of estimation refinement drops sharply after:
- Using an appropriate tool for the geography (PVWatts for US, PVGIS for Europe)
- Applying a realistic shading loss based on site inspection
- Setting soiling loss based on local conditions (not the 2% default)
- Adding a 10% production buffer for financial projections
For commercial projects above 100 kW, the value of refinement increases because:
- Absolute kWh errors are larger in dollar terms
- Investors require bankable accuracy
- Hourly load matching drives economic value
- Battery sizing depends on hourly production profiles
The Honest Approach
The most professional installers do not claim perfect accuracy. They present estimates with confidence intervals:
- “Based on PVWatts modeling with site-specific shading corrections, we estimate 11,500-12,500 kWh per year with 90% confidence.”
- “Our production guarantee covers 90% of the estimated annual output, with monthly true-up.”
- “This estimate assumes average weather conditions. Actual production in any given year may vary +/- 10% due to weather variability.”
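Producing these hedged numbers is mechanical once the corrections are chosen. A minimal sketch, assuming a hypothetical 0.93 validated bias correction and a +/-10% weather band:

```python
# Turn a raw estimator output into a hedged customer quote.
# The bias correction and weather band below are illustrative assumptions.

raw_estimate_kwh = 12_847   # e.g. raw PVWatts output
bias_correction = 0.93      # from local validation (hypothetical)
weather_band = 0.10         # typical year-to-year weather variability

central = raw_estimate_kwh * bias_correction
low, high = central * (1 - weather_band), central * (1 + weather_band)
guarantee = 0.90 * central  # production-guarantee threshold

print(f"Estimate: {central:,.0f} kWh/yr (range {low:,.0f}-{high:,.0f} kWh)")
print(f"Guarantee threshold: {guarantee:,.0f} kWh/yr")
```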
This transparency builds more trust than a precise-sounding number that turns out to be wrong.
Model Solar Production with Site-Specific Accuracy
SurgePV’s solar design software combines 3D shading analysis, multi-source weather data, and module-level temperature modeling for production estimates you can stand behind. Generate investor-grade financial proposals with integrated yield simulation, hourly self-consumption analysis, and customizable loss factors.
Conclusion
PVWatts and PVGIS are both excellent tools that deliver 5-10% annual accuracy for standard unshaded systems. PVWatts wins for US projects with its ground-station TMY validation. PVGIS wins for European and global projects with its satellite coverage and terrain shading. Both fail predictably for shaded, hot, or non-standard installations.
The test data tells a clear story: for a simple Phoenix system, PVWatts overestimated by 7.6% and PVGIS by 10.6%. An integrated tool with site-specific corrections cut that error to 2.1%. The difference between 7.6% and 2.1% error amounts to a $600-900 overstatement of annual savings — enough to erode customer trust when the actual bill arrives.
But the contrarian view also holds: production estimation is not the largest uncertainty in solar project economics. Electricity price trends, financing costs, and incentive policy drive far larger outcome variation. The installer who obsesses over transposition model selection while ignoring customer self-consumption patterns or financing assumptions is optimizing the wrong variable.
Three actions for installers:
1. Validate your estimator against measured data. Pick five installed systems, compare 12 months of actual production against the pre-install estimate, and build location-specific correction factors. This single exercise improves estimate accuracy more than switching tools.
2. Apply a 10% production buffer for financial projections. Never quote a customer or investor using the raw estimator output. Reduce the annual estimate by 10% for payback calculations and PPA pricing. If production exceeds the estimate, the customer is delighted. If it falls short, you are protected.
3. Use the right tool for the project complexity. Simple unshaded residential systems in the US do not need integrated design software — PVWatts is sufficient. Complex commercial projects with shading, multiple orientations, or battery storage need 3D modeling and hourly analysis that standalone estimators cannot provide.
For solar professionals building bankable proposals, solar design software that integrates production estimation with 3D shading, financial modeling, and proposal generation delivers accuracy and efficiency that standalone tools cannot match. The generation and financial tool at SurgePV combines multi-source weather data, project-specific loss factors, and investor-grade financial output — bridging the gap between quick estimates and professional project development.
Frequently Asked Questions
What is the most accurate solar production estimator?
PVGIS and PVWatts both deliver accuracy within 5-10% of measured annual production for unshaded, standard-tilt systems. PVGIS uses higher-resolution weather data for Europe and performs better for complex terrain. PVWatts uses NREL’s TMY2/TMY3 dataset and excels for US locations. For shaded or non-standard installations, both tools diverge from reality by 15-25% unless shading corrections are applied manually.
What is the difference between PVWatts and PVGIS?
PVWatts is NREL’s US-focused estimator using TMY weather data, the Perez transposition model, and a simplified temperature model. PVGIS is the European Commission JRC’s tool using ERA5 reanalysis data, the Muneer or Reindl transposition model, and more granular spatial resolution. PVWatts defaults to 14% total system losses (a 0.86 derate factor); PVGIS leaves system losses user-specified. PVGIS covers the entire globe; PVWatts is limited to the US and select international locations.
How accurate are solar production calculators?
Solar production calculators are typically accurate within 5-10% of measured annual output for simple, unshaded rooftop systems at standard tilt. Accuracy degrades to 15-25% for shaded systems, steep tilt angles, bifacial modules, or locations with high aerosol variability. Monthly accuracy is worse than annual accuracy — seasonal errors of 20-30% are common. All calculators are models, not measurements.
Which solar production estimator should I use for European projects?
Use PVGIS for European projects. PVGIS uses Copernicus ERA5 reanalysis data at 0.25-degree spatial resolution and covers all European territories including islands and remote regions. It models snow cover, terrain shading, and horizon effects. PVWatts has limited coverage outside the US and uses lower-resolution weather data for international locations.
Which solar production estimator should I use for US projects?
Use PVWatts for US projects. PVWatts uses NREL’s TMY2 and TMYx weather datasets with measured ground-station data from 1,500+ US locations. It models inverter clipping, spectral effects, and soiling with US-specific defaults. The NREL dataset has 30+ years of validation against measured US system performance.
Can I trust solar production estimates for financial modeling?
Solar production estimates are suitable for financial modeling only when validated with a 10-15% production buffer. Never use a single estimator’s annual number as the basis for PPA pricing or investor returns without sensitivity analysis. Best practice: run the same system through PVWatts and PVGIS, take the lower of the two estimates, and apply a 90% confidence factor. For commercial projects, add measured onsite irradiance data from a 3-6 month pyranometer campaign.
What inputs affect solar production estimator accuracy the most?
The five inputs that most affect estimator accuracy are: (1) Tilt angle and azimuth — errors of 10 degrees can shift annual yield by 2-5%; (2) Shading — unmodeled shading is the single largest source of error, often causing 15-30% overestimation; (3) Temperature coefficients — PVWatts and PVGIS use different temperature models that diverge by 3-8% in hot climates; (4) Soiling assumptions — default soiling losses of 2-5% are often too low for dusty or polluted environments; (5) System availability — default 99% availability ignores inverter failure and grid outage impacts.
How do I validate a solar production estimate against actual output?
Validate estimates with a three-step process: (1) Compare monthly estimated vs. actual production for a full year — look for seasonal bias patterns; (2) Normalize actual production to standard test conditions using measured plane-of-array irradiance and module temperature; (3) Calculate performance ratio (PR) and compare against the estimator’s implied PR. A PR within 2-3 percentage points of the estimate indicates good model accuracy. Persistent seasonal bias suggests weather data or transposition model mismatch.
Does SurgePV use PVWatts or PVGIS data?
SurgePV’s generation and financial tool integrates multiple irradiance datasets including PVGIS, NREL TMY, and satellite-derived hourly data. It applies project-specific shading analysis, module-level temperature modeling, and custom loss factors rather than relying on a single estimator’s defaults. This produces estimates that typically fall between PVWatts and PVGIS results, adjusted for the specific system configuration and site conditions.
Why do different solar production estimators give different results?
Different estimators give different results because they use different weather data sources, transposition models, temperature models, and default loss assumptions. PVWatts uses NREL TMY data with the Perez model; PVGIS uses ERA5 reanalysis with the Muneer model. Temperature modeling differs by 3-8% in hot climates. Spatial resolution varies — PVGIS uses 0.25-degree cells while PVWatts uses point-source TMY data. Soiling, snow, and spectral loss defaults also differ. These are all models of reality, not measurements of it.