Why Grid Parity is Meaningless
One common theme of many environmentalist talks and essays is that photovoltaic (PV) technology is semiconductor-based and so will improve the way transistor counts have.
We’ve vastly improved computation, and so PV will follow the same path and we can leverage all that “chip making” know-how to save the day.
This is simply not true.
PV technology has improved, but the physics is very different from computation. The basic narrative, that PV technology developed after the computer and so is now catching up, doesn’t even make sense considering the photovoltaic effect was discovered in 1839, over a century before the first silicon-based computer.
So, the reality is that if photovoltaic technology improved the way our computers have improved … then PV cells would have taken over around a century ago, before computers were even a thing. Or did PV just squander a century’s head start of exponential growth?
Didn’t happen because no incentive to invest heavily until recent decades?
Well, that just raises the question of why not, and the short answer is that the kinds of efficiency gains we see in computer chips are simply not possible with photovoltaic cells.
If they were, someone would have sat down and figured that out well over a century ago, to justify the investments necessary to make it happen.
What is that physical difference?
Although they look similar, use similar materials, and are both produced in impressively fancy high-tech ways, a computer chip does something very different from a photovoltaic cell.
A computer chip processes information and each piece of information has no “necessary size”.
We can conceive of information as completely immaterial … using, of course, brains that are made of matter. Nevertheless, the fact that we can imagine information still existing without material (such as supernatural beings outside our physical universe still “knowing things”) gives a useful bound: it makes the point that we can get as close to this ideal as possible, down to the atomic scale, and still have information.
Hence, when computers started out, even before electronics, storing information in gears, there was an absolutely gargantuan gap, in terms of material used, between a gear and the near-atomic-scale limit of transistors.
Making transistors smaller does not just allow the same thing to be made with less material (and therefore much cheaper with the right economies of scale); it also makes the computer faster and more power-efficient. The first programmable computers occupied whole buildings, weighed many, many tons and consumed hundreds of kilowatts! If it were just the material saving, going down to something that fits in your hand, it wouldn’t be all that impressive if a computer today still ran at a dozen hertz, had only hundreds of bytes of memory and consumed the power of a city block. You’d literally need your own fleet of nuclear reactors to play any modern game (and even then, at an incredibly slow frame rate, excitement measured in frames per year or decade, as the clock speed and latency problem between chips wouldn’t be solved by simply building a giant number of them).
These efficiency bonuses, not only saving on material but using less power, faster clock speeds, and shorter latency between components (of simply being smaller and closer together), all together drive the exponential improvement of computing power.
If we run the thought experiment of the bare-minimum conceptual photovoltaic cell, in order to do its job it must be material. An immaterial PV cell wouldn’t interact with sunlight; if we just imagine angels or whomever with immaterial photovoltaic panels standing in the sun, it’s difficult to argue the PV cells are really “there”.
True, we could go full scholastic, an angels-on-pins-computing-or-collecting-sunlight style of debate, to try to resolve the matter at this juncture, but it’s clearly less convincing than information, which is immaterial from conception.
For a PV cell to be said to do its job, it needs to occupy space to collect that sunlight.
This is the key difference. We can make a super tiny PV cell, no technical problem in doing that … it just then occupies a super tiny space and captures a super tiny amount of solar power.
The lower bounds are difficult to get rid of
Making any structure at all that stands outside in the rain, wind and elements requires a minimum number of atoms. Making a surface area that captures sunlight also requires a minimum number of atoms.
Back to an analogy with computers: it would be like being able to improve the efficiency and power consumption of computers … while they stay as big as a house, and nothing can be done about that. Costs would have a lower bound proportional to simply building house-sized structures, which has nothing to do with electronics, and that would make the radical cost reductions we’ve seen simply impossible. A “smart phone” in this scenario would still be a science-fiction impossible dream, as you’ll never be able to carry a whole house around in your pocket.
To run the analogy the other way: if a PV field could be shrunk to the size of your hand … but still capture megawatts, even hundreds of megawatts, it would unlock the same kind of previously fanciful dream applications in energy as smartphones have in computation.
The reality is we simply can’t shrink an acre of PV panels down to the palm of your hand … and still capture an acre of sunlight. It’s simply impossible. We are simply stuck with the building size constraint (we can reflect sunlight around, but the minimum bound of simply “occupying space” doesn’t change).
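As a rough sketch of the scale involved (the insolation and efficiency figures below are illustrative assumptions, not figures from this text):

```python
# Back-of-envelope comparison: an acre of panels vs. a palm-sized cell.
# Assumed ballpark figures: ~1000 W/m^2 peak insolation, ~20% cell efficiency.
PEAK_INSOLATION_W_PER_M2 = 1000.0
CELL_EFFICIENCY = 0.20

ACRE_M2 = 4046.86   # one acre in square metres
PALM_M2 = 0.01      # ~10 cm x 10 cm, roughly palm-sized

def peak_output_watts(area_m2: float) -> float:
    """Peak electrical output for a given collecting area."""
    return area_m2 * PEAK_INSOLATION_W_PER_M2 * CELL_EFFICIENCY

acre_watts = peak_output_watts(ACRE_M2)  # ~810 kW
palm_watts = peak_output_watts(PALM_M2)  # ~2 W

print(f"Acre of panels:  ~{acre_watts / 1000:.0f} kW peak")
print(f"Palm-sized cell: ~{palm_watts:.1f} W peak")
```

No amount of cell improvement changes the area term: output scales with the sunlight the surface actually occupies.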
So, even if the functional surface of our PV cell that captures the solar energy were made atomically thin, and let’s say at no cost, the basic structural elements needed to support that surface area and withstand erosion and damage wouldn’t change much.
To make matters worse, that potential ideal of atomically thin is far from possible in our physical world, as the minimum thickness of the photon-capturing layer cannot be smaller than the wavelength of the light, or the light starts to pass right through. The wavelength of typical blue (or longer-wavelength) sunlight we’re interested in capturing is pretty big on the atomic scale.
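To put that in numbers (a rough sketch; the wavelength and silicon lattice constant below are standard physics values, not figures from this text):

```python
# How "big" is a wavelength of blue light on the atomic scale?
# Standard physics values (not figures from the text above):
BLUE_WAVELENGTH_M = 450e-9      # ~450 nm, typical blue light
SILICON_LATTICE_M = 0.543e-9    # silicon lattice constant, ~0.543 nm

# If the absorbing layer can't be much thinner than one wavelength,
# it is at least this many lattice spacings thick:
layers = BLUE_WAVELENGTH_M / SILICON_LATTICE_M
print(f"~{layers:.0f} lattice spacings")
```

So “atomically thin” is off the table by roughly three orders of magnitude before structural considerations even enter.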
But it gets even worse than that: wires are needed to collect all that electricity, and there’s little room for improvement on the minimum wire gauge needed for the current — at the cell level, then panel, then arrays of panels, and finally a whole field of panels. Just this copper to collect the electricity represents another more-or-less fixed cost per unit area of panel.
In terms of size, we have a minimal “structural” size and weight. Even focusing only on the functional part, the lower bound of matter needed to cover all that surface area and collect the light is much higher than the lower bound of computation.
Why are there “exponential” cost reduction moments?
Having said all that, there have been periods of “exponential” cost reductions in PV panels; these have more to do with markets, government subsidy, and investment in R&D and fabrication capacity than with the essentially relentless, century-long process of both cost reduction and efficiency gains seen in computers. Cost improvements most of the time stay pretty flat, with the odd technical or capacity-driven breakthrough making a significant jump (but nowhere near comparable to improvements in computation).
Due to the lower-bound costs of the supporting components and processes per surface area needed to actually install the cells and connect them to the grid — which include not only the costs of the structures discussed so far, but also land purchase and preparation, installation labour, planning, and everything else associated with a construction project — there are simply lower bounds of cost that cannot be broken with improvements to PV technology as such. Breaking them would require breakthroughs in fabricating aluminum frames, ground work, installation, and basic wiring: all fairly mature industries of simple processes, already operating close to the material and energy bounds of cost, in which few breakthroughs are expected.
What they don’t tell you (probably because they don’t know)
It gets even worse than the above, and it just keeps getting worse.
Although there’s little expectation of breakthroughs in the simple processes needed to install photovoltaic panels … those lower material and cost bounds aren’t somehow fixed either.
Increase the price of aluminum, copper, transformers, or transport, and these lower-bound costs of actually getting PV cells on the ground and plugged in go up.
The availability of these things can simply be disrupted. Container transport costs are now ten times higher due to the pandemic’s disruption of industrial planning at various levels. PV panels take up space, it’s what they do, and if you want a lot of panels you need a lot of containers.
Another big variable is the cost of copper. There is simply no “good” alternative to copper for all the wiring that fields of photovoltaic need.
This is the general Achilles’ heel of the “electric eco vision” of the future.
Even forgetting about the PV itself, simply building grid capacity, transformers, and all the electric motors and batteries required to displace a decent part of fossil fuel transport energy represents an incredibly large amount of copper and other material (and, yes, if copper gets more expensive, other materials, like aluminum, can become competitive, but you still need a lot of that material, and it doesn’t actually make things cheaper if it’s only competitive because copper has become expensive and everything is more expensive). Scaling these sorts of primary industries simply doesn’t happen overnight, nor in the two decades of carbon budget we have to avoid deleterious effects of climate change.
The battery problem hasn’t gone away
Basically for the same reasons, batteries need to take up a minimum of space, and they have followed a similar cost-reduction path to PV panels: very slow improvements to cost-effectiveness over the last century, since the first practical battery-powered cars of the early 1900s.
Non-chemical batteries also exist, but they all require a lot of material.
Only a small proportion of the grid can be fed by intermittent renewables before energy storage of one form or another is required; otherwise the grid becomes wildly unstable, which not only means blackouts but also damage to many grid components and to equipment attached to the grid.
So grid parity isn’t even parity with the whole grid, which would require long-term storage for renewables to be a realistic substitute for (i.e. at parity with) the whole grid. What we get news of is grid parity at the variable-power margins that the rest of the grid can handle without storage.
What stabilizes the electric grid is both base-load power (which serves simply to make the variable-power problem small enough to manage) and responsive power that can rapidly increase or decrease production to balance the load on the grid (supply and consumption of power must remain within a few percent of each other or the grid destabilizes).
Electricity only represents 20% of energy consumption.
Simply getting solar electricity to a decent proportion of current electricity use is an absolutely massive task.
If you actually sit down and start thinking through the material, fab-capacity, transport, and installation-skills implications of not only achieving this — which keeps the rest of society the same, just with a different electricity input — but also displacing other, non-electric fossil energy consumption — which implies radical material change to the rest of society as well — the task, in the time we have to solve climate change, is simply no longer feasible.
Had we started in the 70s with big investments in both renewable technology (which we did know about back then, and solar thermal and wind energy that was already economically viable then, had fossil fuels been taxed to internalize their global ecocidal and genocidal cost) as well as big investments into not only energy efficiency of our technology but how we organize society to use energy (cough, cough, suburbia), then, indeed, we could now be enjoying a clean, perhaps even plausibly sustainable, electricity based civilization.
Why Grid Parity is Meaningless
It is useful to know the cost of photovoltaic power compared to other forms of electricity at the moment, with different levelized costings and so on.
By itself, however, it is meaningless.
The cost of anything right now is useful to know, sure, but it does not necessarily tell us the cost of more of that thing, nor the obstacles to having more of that thing, nor even whether the cost would be the same right now under simply different conditions.
Grid parity is not a plan. The way many environmentalists, especially what passes for eco-journalism, have talked about it for a decade is that reaching this metric is some sort of solution. That once “grid parity” has been achieved, we can pack our bags and go home.
Well, grid parity has already been achieved, even compared with natural gas … yet the business-as-usual scenario of the IPCC is still 6 degrees of warming.
If grid parity were “a plan”, or some critical milestone that would solve climate change maybe even without a plan, wouldn’t business as usual simply use the cheapest energy source available? Which is now solar? So … the whole world will now go solar?
Hopefully, the concepts I’ve laid down already inform you why that’s not the case.
“Grid parity” right now simply means there are some sweet spots where PV can be installed right now at a pretty cheap rate of power; from this, we cannot deduce that PV will suddenly take over the world’s supply of electricity (the way faster computer chips make older chips redundant very quickly).
The amount of material required for the really large-scale change we would need in the electricity grid, not to mention making it even bigger to take on other energy loads, is absolutely gargantuan, with plenty of obstacles. “Grid parity” is, in itself, good, and it has already put pressure on coal plants in a business-as-usual economic-efficiency sort of way.
It is, however, too late to scale renewable electricity technology to solve climate change. The task is simply impossible.
Oh yeah, and the solar flares
The first guy to talk to me about solar flares and the dangers of a Carrington event was Risto Isomaki, around 2012. It seemed unbelievable … but also at the same time completely believable that our elites would ignore another near existential threat to civilization.
After doing some more research, the risk of a Carrington event is upwards of 10% per decade. It’s also not entirely certain just how big such an event could get. We have “ok” statistics from observing the sun with satellites and seeing how many Carrington-scale events there are (that miss us), so from this it is fairly easy to calculate the odds of the ones we see hitting us. The room for doubt is that big solar flares are pretty big things in space, occupying several degrees, so the chance that the first “bigger than we’ve seen so far” flare is pointing our way, rather than safely missing us and just improving our statistical understanding, is not minute (it’s small, but not a risk that can be ignored). In addition to this risk that the “big one” happens to hit us, there’s also the risk that some solar cycles simply produce far bigger mass ejections than we’ve seen so far. We’ve only observed a few solar cycles with sophisticated satellites, as they take 11 years each, so this is far from the statistical understanding we’d have with observations of hundreds of cycles. If there are cycles that simply produce bigger and more flares, then the odds of getting hit with an even larger one increase.
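Taking the ~10% per decade figure at face value and, as a simplifying assumption, treating decades as independent, the cumulative odds compound quickly:

```python
# Cumulative chance of at least one Carrington-scale hit, assuming
# ~10% risk per decade and independence between decades
# (both simplifying assumptions for illustration).
P_PER_DECADE = 0.10

def p_at_least_one(decades: int) -> float:
    """Probability of at least one hit in the given number of decades."""
    return 1.0 - (1.0 - P_PER_DECADE) ** decades

for d in (1, 5, 10):
    print(f"{d * 10:>3} years: {p_at_least_one(d):.0%}")
```

At those rates, 50 years comes out around 41% and a century around 65%: not a tail risk you can plan around by ignoring it.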
Depending on the number of EHV transformers immediately available for replacement, outages in highly impacted regions could last from weeks to months. In fact, geomagnetic storms weaker than the extreme, Carrington-level storm still have the potential to be extremely costly if transformer damage is concentrated in small regions with large populations. Given the potential for large-scale, long-term economic and societal chaos, it is necessary to evaluate preparatory and mitigative measures. There are currently several space satellites in operation that can provide warnings of incoming CMEs on the timescale of hours to days, timescales that could allow grid operators to take preventative measures before the storm hits.
A Carrington-level, extreme geomagnetic storm is almost inevitable in the future. While the probability of an extreme storm occurring is relatively low at any given time, it is almost inevitable that one will occur eventually. Historical auroral records suggest a return period of 50 years for Quebec-level storms and 150 years for very extreme storms, such as the Carrington Event that occurred 154 years ago.
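One rough way to translate those return periods into odds is a simple Poisson model (an illustrative assumption, not a forecast), where the chance of at least one event in T years is 1 − exp(−T/τ) for a return period τ:

```python
import math

def p_within(years: float, return_period: float) -> float:
    """Poisson estimate: chance of at least one event within `years`,
    given an average return period (illustrative model, not a forecast)."""
    return 1.0 - math.exp(-years / return_period)

# Return periods quoted above: ~50 yr (Quebec-level), ~150 yr (Carrington-level)
print(f"Quebec-level within a decade:     {p_within(10, 50):.0%}")
print(f"Carrington-level within a decade: {p_within(10, 150):.0%}")
```

That puts a Quebec-level storm at roughly 18% per decade and a Carrington-level one at roughly 6%, in the same ballpark as the per-decade figure mentioned earlier.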
This isn’t very good-quality data on the frequency and intensity of the extreme case, such as the original Carrington event; the largest ones we’ve directly observed with satellites and telescopes are already pretty big, and the risk of serious damage to our electricity grid per decade is fairly high.
If one downplays the risks (and only cares about the developed world), it’s not “too bad”, as even a bad one may only affect some areas, but the risk that extreme events are both worse and more frequent than we currently expect cannot be excluded. It is a large gamble to have essentially zero mitigation measures in place for this problem.
It’s possible to make electricity grids more resilient to large solar mass ejections.
But we haven’t.
We will have about a three-day warning between seeing the light from the flare and the charged particles hitting our atmosphere, and shutting down the grid is a good idea, but damage can still be substantial. Even though the induced voltage would be low, applied over the large distances our electricity grids occupy it can build up enough power to jump breakers and damage transformers, even if the grid is off. You could of course design against that … but we haven’t.
We haven’t done a lot of things.
There is one solution though.