Friday 28 March 2014

Timing is everything

A common measure of sustainability is the percentage of energy generated from renewable resources such as wind, solar, tidal, hydro and biomass.  Often this statistic is estimated over a period of a year.  Equally important is the timing of supply and demand.  The classic example is solar generation: the graph below illustrates the demand for electricity on a typical spring day and the solar irradiance available to contribute to meeting it.  A similar graph could be drawn for wind, and the time period extended to include seasonal variations.

The two ways of meeting the overnight demand are storage and alternative means of generation.  Most energy economies are evolving to adapt to diverse means of generation.  At present it is hard to make a good case for storage, as most energy economies can absorb what wind and solar installations can offer them, and these sources are frequently given priority when working out how to meet demand.  In general, there are few surpluses of energy which could be accumulated in a storage system, even if such a system were available.  I don't have a handle on the relative risks and economics of utility-scale storage and generation, but at a guess, maintaining a fossil/nuclear generating capability is the "low" risk option.  This approach makes wind and solar sources incremental parts of the energy mix which need backing up with an equivalent amount of conventional capacity.

The case for storage is that it is a step towards sustainability.  At its most basic, the harvest from solar panels during the day can be stored and used to keep the lights on after dark.  In the arid regions towards the equator, where there are clear skies and relatively small seasonal variations, this could be a workable scenario.  In the temperate regions, more complex systems are needed with a mix of solar and wind.  Solar works well in summer, but the winter yields are low; wind works better in winter; and on some days neither produces very much.

I'm currently messing with a very small scale storage project in which a small computer attempts to keep itself alive by "buying" sustainable energy.  This could be done as a computer simulation (which is happening as a parallel task), but having some hardware makes it both fun (other relevant words are frustrating and expensive) and more instructive than a bunch of numbers from a computer programme.  There is one economic nicety: where possible, you can attempt to use off-peak electricity at approximately 7p/kWh in preference to normal daytime rates, which are close to 20p/kWh.  If you used this approach to ensure that a high proportion of the electricity you use was from renewable resources, you would have some capital and operating costs beyond those normally associated with turning the lights on.
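As a rough illustration of that nicety, the saving from shifting load to off-peak rates is simple arithmetic.  Only the two tariffs come from the figures above; the daily consumption is an assumed number for the sake of the example:

```python
# Hypothetical load-shifting saving; daily_kwh is an assumed figure,
# the two tariffs are the approximate rates quoted above.
daily_kwh = 10.0        # assumed kWh per day shifted to off-peak
off_peak = 0.07         # £/kWh, approximate off-peak rate
daytime = 0.20          # £/kWh, approximate daytime rate

annual_saving = daily_kwh * 365 * (daytime - off_peak)
print(round(annual_saving, 2))  # → 474.5
```

A saving of a few hundred pounds a year is the sort of margin that has to cover the extra capital and operating costs mentioned above.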

Living next to a railway station used by commuters, I've become aware that there are an increasing number of electric cars around; typically, these are priced at around £20k after a £5k government subsidy.  Apart from their high cost, electric vehicles charged with off-peak electricity are an attractive concept: in effect they are storage on wheels.  An interesting policy study would be the effect of providing similar support for including storage in homes and offices.


Friday 21 March 2014

Clouds and Irradiance - A Simple Model


Simulating the performance of a solar energy system requires a model for solar irradiance.  Solar irradiance is a function of Sun-Earth geometry and atmospheric conditions, of which cloud cover is the most significant.  On a typical summer day in the south of England there will be a few, or scattered, cumulus clouds which might reduce the global horizontal irradiance (GHI) to 80% of its clear sky level, whilst in winter a thick layer of stratus can reduce this to 10 - 20%.

As with everything else on this blog, this is unreviewed work-in-progress and should be treated with caution.  This post is a simplified description of a project; a more detailed account of the work is planned for a later date.

The basis of the model is the attenuation of clear sky irradiance caused by the presence of clouds.  This is described by a variable called the clear sky factor (CSF), which is defined as:

    CSF = measured GHI / estimated clear sky GHI

The graph below, compiled from data collected around noon in June 2011, illustrates the variation in CSF with cloud cover.  For the clear sky it is constant at 1.00; for overcast conditions it is more or less constant at approximately 0.15.  On a day of scattered cumulus cloud the CSF fluctuated widely: at the low levels recorded when cloud passed between the measuring device and the sun, the CSF was close to that of the overcast sky, while during the periods of transition between cloud and clear sky, the CSF exceeded 1.0 due to an increase in diffuse irradiance.  This might be called the cloud fringe effect.


For air mass values in the range 1.2 to 6.0, CSF appears to be more or less independent of air mass, which allows a model of solar irradiance to be based simply on an estimate of clear sky irradiance and a description of the cloud cover.

GHI was chosen as a measure of irradiance because it is the most commonly collected form of irradiance data; a basic measuring device is simply a small horizontally mounted PV cell.  Clear sky irradiance is influenced by factors such as aerosols and water vapour, and whilst there are some excellent models which take these into account, for most locations little is known about the state of the atmosphere at a specific time, especially when clouds are present in the sky.  After some experimentation, a correlation developed by the Meinels, which requires only air mass as an input, was found to produce a reasonable estimate of Direct Normal Irradiance (DNI).  GHI is a combination of direct normal irradiance and diffuse horizontal irradiance (DHI); as no equivalent to the Meinel formula could be found for diffuse irradiance, one was derived from local observations using a simple shaded radiometer.  This latter formula is subject to revision as more data becomes available.

In the south of England, the "economic" range of air mass is approximately 1.2 to 6.0; for these values the plane parallel formula for air mass produces a workable estimate and offers some computational convenience.  This simplification may not be appropriate for regions such as Arizona where there is significant DNI at much higher values of air mass.

The formulas used for the estimated clear sky GHI are the Meinel correlation for DNI:

    DNI = 1.353 × 0.7^(AM^0.678) kW/m²

combined with the locally derived estimate of DHI, so that GHI = DNI × cos(zenith) + DHI.

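The clear sky estimate can be sketched in a few lines of code.  The Meinel correlation and the plane parallel air mass are as described above; the diffuse term is passed in as a parameter because the locally derived DHI formula is not reproduced here:

```python
import math

def air_mass(zenith_deg):
    # Plane parallel approximation; workable for AM roughly 1.2 to 6.0
    return 1.0 / math.cos(math.radians(zenith_deg))

def dni_meinel(am):
    # Meinel correlation for clear sky DNI, in kW/m^2
    return 1.353 * 0.7 ** (am ** 0.678)

def ghi_clear_sky(zenith_deg, dhi):
    # GHI = DNI * cos(zenith) + DHI; dhi comes from a local diffuse model
    am = air_mass(zenith_deg)
    return dni_meinel(am) * math.cos(math.radians(zenith_deg)) + dhi
```

At AM = 1 (sun directly overhead) the correlation gives about 0.95 kW/m², a plausible clear sky figure.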
The most readily available source of cloud cover, apart from looking upwards at the sky, is the METAR reports used in aviation.  These include a description of the sky; if one or more layers of cloud are present, there will be a description of its base height and extent, e.g. SCT040 means scattered cloud at 4,000 feet.  A basic description of the extents is shown below:
  • FEW - up to 2 octas
  • SCaTtered - 3 - 4 octas
  • BroKeN - 5 - 7 octas
  • OVerCast - 8 octas, no blue sky visible
The widespread use of these descriptions made them a logical choice as the basis of a model.
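As an illustration, a minimal parser for these cloud groups might look like this; the example METAR string is hypothetical:

```python
import re

# Matches cloud groups such as SCT040: a cover code followed by the
# cloud base in hundreds of feet.
CLOUD_RE = re.compile(r"\b(FEW|SCT|BKN|OVC)(\d{3})\b")

def parse_cloud_groups(metar):
    """Return (cover, base height in feet) for each cloud group found."""
    return [(m.group(1), int(m.group(2)) * 100) for m in CLOUD_RE.finditer(metar)]

parse_cloud_groups("EGKK 211020Z 24010KT 9999 SCT040 BKN220 12/08 Q1015")
# → [('SCT', 4000), ('BKN', 22000)]
```

A real report needs more care (vertical visibility, CAVOK, NSC and so on), but this is enough to drive a simple model.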

One way of creating a model is to use distributions of CSF for a given cloud extent.  The graphs below are summaries of the effect of low cloud (less than 6,000 feet) in a maritime temperate climate, also known as Sussex, or Cfb in the Koppen system of climate classification.  The distributions for few, scattered and broken cloud are typically bimodal: the low mode describes the CSF when the sun is obscured by cloud and the high mode the intervals of clear sky between the passage of clouds.

When there are only a few clouds in the sky, the average value of CSF is around 0.8 and there is a greater probability of high values of CSF (i.e. the overall attenuation is small).




As the extent of the cloud increases, the average value of CSF falls; for scattered clouds the probability of CSF being either high or low is approximately equal.  This is consistent with the definition of scattered cloud, which is that up to half the sky contains cloud.

Part of the definition of broken cloud is that at least some blue sky is visible even though most of the sky is full of cloud; this is reflected in the summary graph.

There is no blue sky visible under an overcast sky; the distribution is unimodal and the mean is low.

These summaries are over-simplifications designed to allow a simple model.  Another important factor is the height of the cloud: the equivalent summaries for high cloud are significantly different and the values of CSF much greater.  The model as currently configured simply takes the highest and most dense layer, which effectively assumes a simple sky; often the sky is complex, especially during the passage of fronts.  Part of the work in progress is to determine the variation in attenuation between climates; for example, the nature of solar irradiance in the desert regions of Arizona is significantly different from that on the south coast of England.
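A model along these lines can be sketched as a two-mode sampler.  All the numbers below (mode positions, weights and spread) are illustrative assumptions for demonstration, not the fitted distributions from the observations:

```python
import random

# Illustrative two-mode CSF model per cloud extent; the values are
# assumed for demonstration: (P(sun obscured), low-mode CSF, high-mode CSF)
CSF_MODES = {
    "FEW": (0.2, 0.15, 0.95),
    "SCT": (0.5, 0.15, 0.90),
    "BKN": (0.8, 0.15, 0.85),
    "OVC": (1.0, 0.15, 0.85),
}

def sample_csf(cover, rng=random):
    # Pick the low mode (sun behind cloud) or high mode (gap between clouds),
    # then add a small spread about the chosen mode.
    p_obscured, low, high = CSF_MODES[cover]
    mode = low if rng.random() < p_obscured else high
    return min(1.2, max(0.0, rng.gauss(mode, 0.05)))
```

Multiplying a clear sky GHI estimate by samples from this distribution gives a crude time series of cloudy-sky irradiance; fitting the mode parameters to observed CSF data is the part that takes the work.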

Friday 14 March 2014

I'd like to call this research.....

But it's also having fun with a kite.

The kite was given to me as a Christmas present by my son (aged 23 at the time, him not me).  It's a small kite which fits easily into a pocket and would be ideal for business people.  One can imagine herding the participants of a meeting into the car park and sending the kite aloft, then pointing upwards and exclaiming "that's where we should be", followed by a profound silence and the sensible one saying "shall we go back inside now?".

Despite a complete lack of electronics and telemetry, the kite gives an insight into the nature of wind at heights between 5 and 25 metres (maybe higher, I replaced the original string with a longer one and it's still climbing).  Last Saturday, when the wind was blowing 3 - 4 m/s, I took the kite to two different locations.  The first was a beach where the wind was coming in off the sea and the second was a public park in an urban area a mile or so north of the seafront.  The behaviour of the kite was significantly different at each site.

On the beach, the kite effortlessly took to the air and was stable at a height of 5m; the trailing streamers had no difficulty in keeping it head up to the wind.  Once the string was fully unwound, the kite sat comfortably in the sky, maybe with the odd flutter, and I was able to hook the handle on to the bike and take the photo at the top of the page.  After half an hour, I felt guilty about bringing it back to Earth.

At the park, things were different.  I almost pointed out to the man mowing the cricket pitch, who cast disapproving looks in my direction, that what he was going to do on the grass later in the day was no less ridiculous than what I was doing, but life's too short for pointless conversations.  Here the kite struggled with turbulence up to, say, 15m, and getting it airborne required many attempts, hoping that it would acquire enough upward motion to counter any downward influences.  With patience, it was possible to get the string fully extended, but the kite was never stable and was prone to twisting, and each twist tied a small knot in the string.  I suggest that the number of knots in a kite string is a measure of wind turbulence.  Nor did the kite stay aloft.  Typically in urban areas, wind is not a smooth stream of air but a series of gusts, often with significant gaps between them; it was in these gaps that the kite spiralled to the ground.  The kite suggested that conditions measured at 2m extend upwards to at least 25m.  It was also clear that wind speed increased with height, even though the turbulence persisted.

When the next opportunity presents itself, I will take the kite into the hills to the north of the town, just because it's fun.

PS - 18-Apr-2014 - I found myself on the same beach yesterday, this time the wind was coming from the NE and was blowing from the land to the sea.  The turbulence that is observable inland made it hard to get the kite airborne and once aloft, it started to drift earthwards as soon as a gust had subsided.

Friday 7 March 2014

The Rayleigh Distribution

One of the reasons for modelling a system is to get an understanding of how it might work before you build it, the logic being that paper and spreadsheets are cheap whilst metal and earth moving are expensive.  There are many ways of doing this: one is to use historic data as the input, for example a time series of wind speed measurements; another is to use some form of maths/stats function to mimic the real world.  Each approach has its strengths and weaknesses.

I'm currently messing with a storage based project.  Some initial work was done on a computer, but to make it a bit more fun, I'm building a very small prototype which is controlled by weather reports from the internet.  This provides some insights which a purely maths and stats approach would hide, not the least of these being the economics of the process.  In a mathematical model, changing a parameter is often just a matter of editing a line of code; with a physical model, changing something usually involves a cycle ride and spending money.  This has forced some decisions on how much information is needed to drive the system and what is nice-to-have, which leads to a discussion on the merits of a big budget which allows flexibility and experimentation versus a small budget in which constraints may foster creativity.  Such is life at the cutting edge of R and D.

As with anything on this blog, this is unreviewed work in progress which should be treated with caution.

When I first became interested in sustainable energy systems, I found a source of data which provided the average wind speed for a given location.  The figure for my backyard was 5.0 m/s (I think); this might be the case for level ground devoid of trees and houses, but it was a start.  To make an estimate of the energy that might be extracted from a stream of moving air, it is necessary to have a distribution of the wind speed.  One solution is to use the Rayleigh distribution, which with the help of a spreadsheet can take an average wind speed and turn it into a histogram estimating the number of hours per year a given wind speed will occur.  The Rayleigh distribution is a special case of the Weibull in which the shape factor is fixed at 2.0, which simplifies the process.  Both the Rayleigh and Weibull distributions can be used to model wind speed.  The relationship between the maths and the wind is empirical, which simply means "it works" without providing a causal mechanism.  This sort of model is common in science: the goals scored in a football match can follow a Poisson distribution, rainfall can follow a Gamma distribution, and extremes can be described with the Pareto and Gumbel distributions (amongst others).
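The spreadsheet calculation can be sketched in a few lines of code.  The 1 m/s bin width and the 15 m/s cut-off are arbitrary choices for illustration:

```python
import math

def rayleigh_cdf(v, v_mean):
    # CDF of a Rayleigh distribution parameterised by its mean wind speed:
    # F(v) = 1 - exp(-(pi/4) * (v / v_mean)^2)
    return 1.0 - math.exp(-math.pi / 4.0 * (v / v_mean) ** 2)

def hours_per_year(v_lo, v_hi, v_mean, hours=8760):
    # Estimated hours per year with wind speed in the bin [v_lo, v_hi)
    return hours * (rayleigh_cdf(v_hi, v_mean) - rayleigh_cdf(v_lo, v_mean))

# Histogram for a 5 m/s average wind speed, in 1 m/s bins
hist = [hours_per_year(v, v + 1, 5.0) for v in range(15)]
```

For a 5 m/s mean, the most frequent speeds fall around 4 m/s, a little below the mean, as expected for a Rayleigh distribution.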

The graph below shows the distribution of wind speed estimated by the Rayleigh model and the observed wind speed for a selected location with an average wind speed of 5 m/s:

In this case, there is reasonable agreement between the estimated and observed values, in part because the dataset was chosen for this reason.

As part of a project to learn about the nature of wind as an energy source, I collected data from diverse locations, which I'm still studying.  The inference from this, so far, is that the Rayleigh distribution describes the wind speed distribution at an "ideal" location.  It provides a reasonable model where the terrain is flat or a plateau, for offshore wind in locations poleward of the tropics, and for upper air soundings at the 850 mb level.  It does not provide a good model where the terrain is complex or for urban areas; in these cases, a better solution might be to use a Weibull distribution with a shape factor in the range 1.2 to 1.8, and the Rayleigh distribution will overestimate the energy yield.

This is a link to a more detailed description of the Rayleigh Distribution; it will be updated over the next few weeks as other work is completed.

Rayleigh Distribution