
Motivation for probability as degree of belief

We begin with some quotes from a few writers, financiers and probability mathematicians.

• Tagore: ‘If you shut the door to all errors, the truth will be shut out (too).’

• Chekhov: ‘Knowledge is of no value, unless you put it into practice.’

• Munger (4): ‘You don’t have to be brilliant, only a little bit wiser than the other guys on average, for a long time.’

• Markowitz (1987) (5): ‘The rational investor is a Bayesian.’


• Box and Tiao (1973): ‘It follows that inferences that are unacceptable must come from inappropriate assumption/s not from our inferential logic.’

• Jaynes (1976): ‘..the record is clear; orthodox arguments against Bayes Theorem, and in favour of confidence intervals, have never considered such mundane things as demonstrable facts concerning performance.’

• Feller (1968): “In the long run, Bayesians win.”

Well, the mathematicians and financiers are clear about Bayesian probability, and the writers indirectly so: Tagore and Chekhov were astute. If no alternatives are given, then our approach and its probabilities will often be seriously affected, a fact Tagore appears to allude to. How often do we lament not using or remembering all we know? Chekhov seems aware of this.

We seem, in these times, to be prepared to exclude possibilities, 'worlds', options. In so doing, when we look at Bayes' famous theorem for a minute, we see our inevitable error.

Munger and Feller’s quotations go hand in hand and lead us to the approach discussed here of logical or inductive, plausible inference. Our approach here acknowledges or is guided by the above motivating words. Let us try to spot places in life where we might think of some of these words.

How do you or your team make decisions?

If you don’t have a ‘plan’ for this, then my Art of Decision book should give you something to work with. Many people choose to make their decisions intuitively and rapidly (2). This may well suit many of us. But can you justify it? Is there nothing that a bit more reflection and logic can bring, on average and over the long run, that might help improve the way we bash forwards in life, business, diplomacy and statecraft?

What is good and easy, and what is not good and hard?

If there was any 'give' at all from the reader in answering the first question above, then this second one is a nice follow-up. If you can assess what has been 'not good and hard' in terms of decisions, then read on: there may just be something here for you.

What is probability as extended logic? (3)
Probability is our rational degree of belief . . . in the truth of a proposition.

Logic is the mode of reasoning. It is logic that is extended from only applying to binary certainty and impossibility to. . .

. . . all states of degree of belief (uncertainty) in between these two endpoints. 


2 as described in the book ‘Blink’ by Malcolm Gladwell.

3 This phrase was used, for example, by Jaynes (2003).


What is decision theory?

The description of the process of how to use the information available to determine the most desirable action available to the 'Agent' (the person or organisation who acts).

These definitions are general and seem to allow wide application. As a bonus, the ideas that underpin decision-making, i.e. our topic, also relate to artificial intelligence and machine learning, and thus will be of interest to those trying to give themselves a good base for understanding those rapidly developing areas.


(4) d. 2023 at age 99. With Warren Buffett, he built up a fund of almost $1 trillion over several decades. 
(5) Mean-Variance Analysis, Oxford: Blackwell


We turn to the great Roman scholar de Finetti, who said in 1946: “Probability does not exist”.

What did he mean by this? Here, we shall look at Bayesian subjective (6) probability as extended logic. We compare it with orthodox, frequentist, ad hoc statistics. We look at the pros and cons of probability, utility and Bayes' logic, and ask why it is not used more often. In card-game terms, as in contract bridge and other games, there are partial, differing states of knowledge across the players, and there is the different concept of 'missing a trick', which is to say making a mistake given what we knew; subjective probability and Bayes-backed decision logic are about being rational and avoiding missing costly 'tricks', especially after a certain time or in the long run, by virtue of 'playing' consistently optimally.

The quote above is from de Finetti (7), who was known for his intellect and beautiful writing. He meant that your probability of an event is subjective rather than objective: probability does not exist 'out there', in the same way that the 'ether' scientists once believed in did not exist, the Michelson-Morley experiments having demonstrated its likely non-existence by the early 20th century.

Probability is relative, not objective. It is a function of your state of knowledge, the possible options you are aware of, and the observed data that you may have and which you trust. When these have been used up, we equivocate between the alternatives. We do this in the sense that we choose our probability distribution so as to use all the information we have, not throwing any away, and so as not to add any 'information' that we do not have (8). As you find out more or get more data, you can update your probabilities. This up-to-date probability distribution is one of your key tools for prediction and making decisions. Many people don't write it down; such information may be tacit, with some sort of 'mental model' in operation. Even if you try to work with probability, it is likely that you are not using the above logic, i.e. probability theory. You may be making decisions by some other process. . .

In a 2022 lecture, the Nobel Laureate Professor 't Hooft (9) expounded a theory in which everything that happens is determined in advance (10).

Why do I bring this up? Let us go back to de Finetti. Since we can never know all the 'initial conditions' in their minute detail, our world is subjective, based on our state of knowledge, and this leads to other theories, including that of probability logic, which is my topic here. 't Hooft's theory is all very well (11).
As human beings, we find this situation really tricky. There may be false intuition. There may be 'groupthink'. Alternatives may be absent from the calculations (we come back to this later).


6 Subjective because I am looking at the decision from my point of view, with my state of knowledge.

7 See the appendix on 'History & key figures…'.
8 This is how the ‘Maximum Entropy Principle’ works, and there is an explicit example of how this works mathematically in the first section of the Mathematical Miscellany later.
9 He won the prize in the late 1990s for his work with a colleague on making a theory of subatomic particle forces make sense.
10 This was called the ‘N1 theory’.
11 And to digress, we may wonder how (bad or good) it is for humanity to live life under such a hypothesis.


The famous ether experiment mentioned above is an example of the great majority of top scientists (physicists), in fairly modern times, believing in something that turned out later literally to be non-existent, like the Emperor’s New Clothes.

In the ‘polemic’ section of his paper about different kinds of estimation intervals (1976), the late, eminent physicist, E T Jaynes, wrote ‘. . . orthodox arguments against Laplace’s use of Bayes’ theorem and in favour of “confidence Intervals” have never considered such mundane things as demonstrable facts concerning performance.’

Jaynes went on to say that ‘on such grounds (i.e. that we may not give probability statements in terms of anything but random variables (12)), we may not use (Bayesian) derivations, which in each case lead us more easily to a result that is either the same as the best orthodox frequentist result, or demonstrably superior to it’.

Jaynes went on: ‘We are told by frequentists that we must say ‘the % number of times that the confidence interval covers the true value of the parameter’ not ‘the probability that the true value of the parameter lies in the credibility interval’. And: ‘The foundation stone of the orthodox school of thought is the dogmatic insistence that the word probability must be interpreted as frequency in some random experiment.’ Often that ‘experiment’ involves made-up, randomised data in some imaginary and only descriptive, rather than a useful prescriptive (13), model. Often, we can’t actually repeat the experiment directly or even do it once. Many organisations will want a prescription for their situation in the here-and-now, rather than a description of what may happen with a given frequency in some ad hoc and imaginary model that uses any amount of made-up data.

Liberally quoting again, Jaynes continues: ‘The only valid criterion for choosing is which approach leads us to the more reasonable and useful results?

‘In almost every case, the Bayesian result is easier to get at and more elegant. The main reason for this is that both the ad hoc step of choosing a statistic and the ensuing mathematical problem of finding its sampling distribution are eliminated.

‘In virtually every real problem of real life the direct probabilities are not determined by any real random experiment; they are calculated from a theoretical model whose choice involves ‘subjective’ judgement. . . and then ‘objective’ calibration and maximum entropy equivocation between outcomes we don’t know (14). Here, ‘maximum entropy’ simply means not putting in any more information once we’ve used up all the information we believe we actually have.

‘Our job is not to follow blindly a rule which would prove correct 95% of the time in the long run; there are an infinite number of radically different rules, all with this property.


12 In his book, de Finetti avoids the term 'variable', as it suggests a number which 'varies', which he considers a strange concept related to the frequentist idea of multiple or many idealised identical trials, in which the parameter we want to describe is fixed and the data is not, a viewpoint that probability logic reverses. He uses the phrase 'random quantity' instead.

13 What should we believe? What should we therefore do? 

14 See Objective Bayesianism by Williamson (2010)


Things never stay put for the long run. Our job is to draw the conclusions that are most likely to be right in the specific case at hand; indeed, the problems in which it is most important that we get this theory right are just the ones where we know from the start that the experiment can never be repeated.' (See Appendix VIII for some application sectors.)

‘In the great majority of real applications, long run performance is of no concern to us, because it will never be realised.’

And finally, Jaynes said that ‘the information we receive is often not a direct proposition, but is an indirect claim that a proposition is true, from some “noisy” source that is itself not wholly reliable’. The great Hungarian logician and problem-solver Pólya deals with such situations in his 1954 works around plausible inference, and we cover the basics of this in this book.

Most people are happy to use logic when dealing with certainty and impossibility. This is the standard architecture for sextillions of electronic devices, for example.

Where there is uncertainty between these extremes of logic, let us use the theory of probability as extended logic.

Realistic decision-making for time-related quantities in business

Business projects and sales programmes often run to double the time and double the cost: how would Bayes have accounted and planned for these?

I now turn to important and sometimes critical time-measures that are used in business decision-making, strategic planning and valuation, such as ‘sales cycle’ time, customer lifetime, and various ‘time-to-market’ quantities, such as time to proof of concept or time to develop a first version of a product.

Bayesian analysis enables us to make good, common-sense estimates in this area, where frequency statistics fails. It allows us to use sparse past observations of positive cases, all of our recent observations where no good result has yet happened, and our subjective knowledge, all treated together in an objective way, using all of the above information and data and nothing but this. That is, it will be a maximum entropy treatment of the problem, where we use only the information we have and nothing more, as accurately as is possible.

We assume that the time taken to success, t, in 'quarters' of a year, follows an exponential model, so that the probability of the time exceeding any t > 0 is e^{−λt}; λ will be the mean rate of success for the next case in point. We have available some similar prior data, over a period of t quarters, where we had n clients and r ≤ n successful sales (footnote 1).

Let

T = r\hat{t} + (n - r)\,t

be the total observed time in quarters (i.e. three-month periods): the times to success for the r sales, e.g. of a product or service, plus the test period t for each of the (n − r) cases in which no success has yet been observed, where

\hat{t} = \frac{1}{r}\sum_{j=1}^{r} t_j

is the mean observed time to success, t_j being the observed time for the jth successful data point.
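As a concrete illustration, here is a minimal Python sketch of these two quantities, using the figures from the worked example further below (the variable names are mine):

```python
# Minimal sketch: total observed time T and mean observed success time t_hat,
# using the illustrative figures from the worked example later in this post
# (n = 8 prospects, r = 6 sales at the times below, test period t = 2 quarters).

t_j = [3, 4, 4, 4, 4.5, 6]   # observed times to success, in quarters
n, t = 8, 2.0                # prospects observed, and elapsed test time for those still open
r = len(t_j)                 # number of successes

t_hat = sum(t_j) / r             # mean observed time to success: 4.25 quarters
T = r * t_hat + (n - r) * t      # total observed time: 29.5 quarters (before the later shift T -> T + t_s)

print(t_hat, T)
```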

Let the inverse, θ = λ^{−1}, be the mean time to success: the quantity we want to estimate predictively, and track or monitor carefully, ideally in real time, from as early as possible in our business development efforts. For example, it could be the mean sales-cycle time, i.e. the time from first contact with a new client to the first sale, or the time between sales, new marketing campaigns, product releases or versions, and so on. We shall create an acceptance test at some given level of credibility or rational degree of belief, P, for this θ to be above a selected test value θ0, at a level my team of executives are comfortable with or interested in.

I wish to obtain an expression giving the probability that the predicted time-to-success in quarters is above (or below) θ0, in terms of θ0 and T, n and r, i.e. given all the available evidence.

By our hypothesis (model), the probability that the lifetime exceeds θ0 is given by e^{−λθ0}.

The prior probability for λ, encoding the subjective belief that the mean time taken is ts, is taken to be exponential: ps(λ) = ts e^{−λts}, which is the maximally-equivocal (footnote 2), most objective assumption.

The probability for a given value of λT, given the evidence in the test data and our best expert opinion (which together lead to T), is given by the probability 'density'

p(d\lambda \mid T, n, t_1, \ldots, t_r) = \frac{1}{r!}\,(\lambda T)^r e^{-\lambda T}\, d(\lambda T)

Multiplying the probability that the time is greater than θ0 by this probability for each value of λ, and integrating over all positive values of λ, I find that the probability that the next sale time, customer lifetime or time-to-market exceeds our selected test value θ0 is given by

 

p(\theta > \theta_0 \mid n, r, T) = \int_0^{\infty} \frac{1}{r!}\,(\lambda T)^r e^{-\lambda T} e^{-\lambda \theta_0}\, d(\lambda T) = p(D, \theta_0)

 

 

where p(D, θ0) is the posterior probability as a function of our data D and the (acceptance) case in point θ0. After some straightforward algebra this turns out to be the simple expression p(D, θ0) = [T/(T + θ0)]^{r+1}, from which one can obtain the numerical value, with T having been shifted by the inclusion of the subjective expert time ts, i.e. T → T + ts; ts is our subjective, common-sense, maximum-entropy prior belief as to the mean length of time in quarters for this quantity.
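As a sanity check on that algebra, the short sketch below (assuming SciPy is available; the function names are mine) integrates the posterior-predictive expression numerically and compares it with the closed form [T/(T + θ0)]^{r+1}:

```python
# Sketch: check numerically that
#   integral_0^inf (1/r!) x^r e^{-x} e^{-x*theta0/T} dx   (with x = lambda*T)
# equals the closed form [T / (T + theta0)]^(r+1).
from math import exp, factorial
from scipy.integrate import quad

def tail_numeric(T, r, theta0):
    integrand = lambda x: (x ** r) * exp(-x) * exp(-x * theta0 / T) / factorial(r)
    value, _ = quad(integrand, 0, float("inf"))
    return value

def tail_closed_form(T, r, theta0):
    return (T / (T + theta0)) ** (r + 1)

T, r, theta0 = 33.75, 6, 8.7           # figures from the worked example below
print(tail_numeric(T, r, theta0))      # ~0.20
print(tail_closed_form(T, r, theta0))  # ~0.20
```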

Suppose we have an acceptance requirement of P × 100% as the largest tolerable (tail) probability that our rational, mean sales cycle time for the next customer, or time-to-market for a product or service, exceeds some time θ0; equivalently, we want credibility of at least (1 − P) × 100% that it is less than θ0. I thus test whether p(D, θ0) < P. If this inequality is true, (we have chosen P such that) our team will accept and work with this case, because it is sufficiently unlikely for us that the time to sale or sales cycle is longer than θ0. Alternatively, I can determine what θ0 is for a given limiting value of P, say 20%. For example: taking some data where n = 8 and r = 6, the expert belief is that the mean sales time is ts = 17/4 quarters, i.e. just over a year; there were specific successes, say, at tj = (3, 4, 4, 4, 4.5, 6) quarters, corresponding to our r = 6, and we run the new test for t = 2 quarters. We want to be 80% sure that our next impact-endeavour for sales etc. will not last more than some given θ0 that we want to determine. I put in the values and find that T = 33.75; continuing to determine θ0, I find that with odds of 4:1 on, the time/lifetime/time-to-X is no greater than 8.7 quarters.
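A short sketch of how one might reproduce these numbers (the helper function and its name are mine, not part of the original analysis):

```python
# Sketch: reproduce the first worked example.
# Tail probability p(D, theta0) = [T' / (T' + theta0)]^(r+1), with T' = T + t_s.

def theta0_at_tail(T_shifted, r, tail):
    # Solve [T'/(T' + theta0)]^(r+1) = tail for theta0.
    return T_shifted * (tail ** (-1.0 / (r + 1)) - 1.0)

t_j = [3, 4, 4, 4, 4.5, 6]
n, t, t_s = 8, 2.0, 17 / 4
r = len(t_j)
T_shifted = sum(t_j) + (n - r) * t + t_s    # 25.5 + 4 + 4.25 = 33.75

print(T_shifted)                            # 33.75
print(theta0_at_tail(T_shifted, r, 0.20))   # ~8.7 quarters at odds of 4:1 on
```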

Suppose that we had more data, say an average of t̄j = 17/4 quarters with r = 15 actual successes and n = 20 trials. We decide to rely on the data and set ts = 17/4. Now T = 78. Keeping the same probability acceptance or odds requirement at 80%, or 4:1 on, we find θ0 ≤ 8.25 quarters. If we were considering customer lifetime, rather than sales cycle time or similar measures like time to proof of concept, we benefit when the lifetime of the customer is more than a given value of time θ0, and so we may look at tests where P > 80%, and so on.

If we omit the quantity ts, we find that the threshold θ0 = 7.8 quarters, only a small tightening, since the weight of one subjective ’data’ is much smaller than the effect of so many, O(n) ‘real’ data points.
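Reusing the hypothetical theta0_at_tail helper from the sketch above, the larger data set, with and without ts, gives:

```python
# Larger data set: r = 15 successes averaging 17/4 quarters, n = 20, test period t = 2 quarters.
r, n, t, t_s = 15, 20, 2.0, 17 / 4
T_shifted = r * (17 / 4) + (n - r) * t + t_s     # 63.75 + 10 + 4.25 = 78

print(theta0_at_tail(T_shifted, r, 0.20))        # ~8.25 quarters
print(theta0_at_tail(T_shifted - t_s, r, 0.20))  # ~7.8 quarters with t_s omitted
```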

Now I wish to consider the case where we run a test for a time t with n opportunities. After a time t, we obtain a first success (footnote 3), so that r = 1 and the observed mean time to success is t̂ = t. I then set ts = t as well, so that t̂ = ts = t. T reduces to T = (n + 1)t, and if we look at the case θ0 = t, our probability reduces to an expression that is a function of n alone:

 

p(\theta > t \mid n, 1, t, \theta_0 = t) = \left[\frac{n+1}{n+2}\right]^2

 

Since ∞ > n ≥ 1, then

\frac{4}{9} \le p(\theta > t \mid n, 1, t, \theta_0 = t) < 1,

i.e. if we are only testing one case and we stop this test after time t with one success, r = 1 = n, this gives us our minimal probability that the mean is θ ≥ t, all agreeing with common sense; and it is interesting that the only case where we can achieve a greater than 50:50 probability of θ < t = ts = t̂ is when we only tested n = 1 to success. This is of course probing the niches of sparse data, but in business one often wishes to move ahead with a single 'proof of concept'. It is interesting to be able to quantify the risks in this way.
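A quick numerical look at this special case (my own illustrative check of the formula above):

```python
# p(theta > t | n, 1, t, theta0 = t) = [(n + 1) / (n + 2)]^2 for n = 1, 2, ...
for n in (1, 2, 5, 20):
    p = ((n + 1) / (n + 2)) ** 2
    print(n, round(p, 3))   # n = 1 gives 4/9 ~ 0.444, the only value below 1/2
```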

Consider now the (extreme) case where we have no data, only our subjective belief (footnote 4), quantified as ts. Let us take θ0 = m ts, with m an integer; then our probability p(∅, θ0) of taking longer than this time reduces to

 

p(\theta > \theta_0 \mid 0, 0, t_s, \theta_0 = m t_s) = \frac{1}{1 + m}

 

This means that at m = 1 the probability of being greater or less than θ0 is a half, which is common sense. If we want to have odds, say, of 4:1 on, or a probability of only 20% of being above θ0 quarters, then we require m = 4, and the relationship between the odds to 1 and m is simple.
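A one-line check of the no-data case and the simple odds-to-m relationship (again my own illustration):

```python
# With no data, p(theta > m * t_s) = 1 / (1 + m): odds of m to 1 on that theta <= m * t_s.
for m in (1, 2, 4, 9):
    print(m, 1 / (1 + m))   # m = 4 gives 0.2, i.e. odds of 4:1 on
```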

Again this all meets with common sense, but it shows us how to deal with a near or complete absence of data, as well as how the situation changes with more and more data. The moral is that for fairly sparse data, when we seek relatively high degrees of belief about the sales or completion time for the next thing we attempt, the Reverend Bayes is not too forgiving, although he is more forthcoming with useful and concise information than an equivalent frequency-statistics analysis. As we accumulate more and more data, we can see the value of the data very directly, as we have quantified how our risks are reduced as it comes in.

The results seem to fit our experiences with delays and over-budget projects. We must take risks with our salespeople and our planning times, but with this analysis, we are able to quantify and understand these calculated risks and rewards and plan accordingly.

One can extend this model to a two-parameter model that reduces to it, but which allows for a shape (hyper)parameter. This gives flexibility around prior data, such as the general observation that immediate success, failure or a general 'event' is not common, and around the position of the mean relative to the mode, as well as around learning/unlearning, since the resulting process need not be memoryless (see another blog here!).

  1. or customer lifetimes, or types of time-to-market, or general completions/successes, etc.
  2. highest entropy, which uncertainty measure is given by S = −∑_s p_s log p_s.
  3. e.g. a sale in a new segment/geography/product/service.
  4. if we neither have any data nor a subjective belief, the model finally breaks down, but that is all you can ask of the model, and a good Bayesian would not want the model to 'work' under such circumstances!

Home Energy and Technology (HEAT) 2008 Conference Summary


Over 100 people and 10 exhibitors spent 10 hours, across 12 talks, 5 special elevator pitches, 2 panel sessions and 3 hours of social networking, discussing ways to make deep cuts in emissions associated with the built environment, as well as ways of building their businesses. The sins of greenwashing, namely 'every little helps', vagueness (what are the figures?), lack of proof, lying, hidden costs or omissions, and mentioning only the better of two bad things, were all at the forefront of delegates' minds at this independent conference on home energy and technology.

The day began with a description of the difficulties for the early pioneers of solar thermal with storage of heat. Few in the audience had such systems, according to a quick survey, but the lessons from building them into an old house were clear: hire specialists, and do not choose builders who don't care about energy matters. If it is not in their hearts and minds, they will not enjoy the work, and you will waste effort and money just getting them to install it properly, and indeed to do all the basic things needed to facilitate it, such as lagging all pipes and cutting out draughts.
Using a super-efficient solar accumulator tank, much larger than a normal one and with vastly greater insulation, one could reduce the energy loss from it to 40-100 W, as compared with nearly 500 W from a normal tank, thus reducing the temperature loss overnight to 2-3 C rather than 25-30 C. By taking in energy from solar thermal flat-plate panels or collectors, one could heat this water and hold that energy as a thermal store, and then use it for the radiators and hot water taps. By early March in Cambridge, an average system is heating the water to more than 40 C on a day with a decent run of sunshine.
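As a rough back-of-envelope consistency check on those figures (my own sketch, not from the talk: I assume roughly a 300-litre accumulator, a 200-litre conventional tank and a 12-hour night):

```python
# Rough check of the quoted overnight temperature drops.
# Assumed, not from the talk: tank volumes and a 12-hour night.
c_water = 4186.0               # J/(kg*K), specific heat of water
seconds = 12 * 3600            # a 12-hour night

def overnight_drop(loss_watts, litres):
    energy_lost = loss_watts * seconds        # joules lost overnight
    return energy_lost / (litres * c_water)   # temperature drop in C (1 litre of water ~ 1 kg)

print(round(overnight_drop(100, 300), 1))   # ~3.4 C  for the well-insulated accumulator
print(round(overnight_drop(500, 200), 1))   # ~25.8 C for a conventional tank
```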

An example was given of a very careful person’s carbon footprint: but air travel to visit relatives in the US made it very difficult to be significantly below average. The solar thermal system led to a heating-related emissions level of just one third the national average.

The keynote speaker in the morning stated that energy use was set to increase by 57% between 2002 and 2025 according to EPRI, ACEEE and IMS research.
Representing a chip design company, he focused on efficient end-use of energy, the other two pillars of sustainable energy mentioned being renewables and efficient delivery of primary energy to end-use. Poor efficiency of electrical output used in motion was singled out as the area his company could do most about regarding reduction of energy use, being half of all electrical energy consumption. He suggested that 60% of the energy in electrical motors could be saved and that small motors (those of less than 10-20 kW power) were the best places to look to achieve this.

The claim was that switching to energy-efficient motor-driven systems can save Europe up to 341 billion kWh, or Euro 31 billion, per year in electricity (according to BERR, Eurostat and SEEEM 2006). This would translate to about 42 billion kWh for the UK, which is about 2 kWh a person a day. This is a small amount of our energy, perhaps about 11% of our electrical energy usage in the UK (not including losses in conversion). In the UK we each use about 125 kWh/day across all types of energy; 18 kWh a person a day is in electrical usage, but a further 27 kWh a person a day is in losses from the primary sources of electrical energy (see MacKay, 'Sustainable Energy').
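The per-person arithmetic behind those figures runs roughly as follows (assuming a UK population of about 60 million, which is my figure, not the speaker's):

```python
# Rough arithmetic behind the quoted per-person saving.
uk_saving_kwh_per_year = 42e9
uk_population = 60e6                 # assumed, not from the talk
per_person_per_day = uk_saving_kwh_per_year / uk_population / 365

print(round(per_person_per_day, 1))           # ~1.9 kWh/person/day, i.e. about 2
print(round(per_person_per_day / 18 * 100))   # ~11% of the ~18 kWh/day of electricity use
```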

The speaker went on to say that a range of stakeholders benefit from energy efficiency. Who? Manufacturers can have a more reliable product; service providers should have fewer product returns; the customer has lower energy bills; the government is helped with its energy (sustainability and security) policy; and, last but not least, the environment is better protected.

The next speaker presented micro- and nano-scale energy recycling techniques. He looked at savings in electronic design for packaging, air conditioning and battery packs. His intelligent output driver would "reduce power losses in the driven load by up to 75% and reduce the number of components in a system." He promised technology for the future which would recycle energy within chips.

Another speaker suggested that refrigeration is a significant source of emissions and inefficiencies and could be helped by better technologies.

Two speakers talked of the need to measure energy and emissions (in the home) through smart metering, so that the homeowner could make better decisions. For example, putting washing into a tumble dryer might be 10% of one's energy bill, whereas putting that washing on the line costs zero or near-zero energy and emissions, though it takes more time and effort. The point is that by knowing what the 'big tickets' are, one can address those and not waste too much time on those which are not, such as phone chargers (less than 0.1% of the average bill).

Looking at the grid for electricity as a whole, another speaker showed a picture of the US from the sky at night, looking very bright across a high percentage of the land mass, and said that $1000bn of investment was needed to put right an ageing grid facing increasing demand.

She noted the stringent regulations on carbon emissions to come. The upgraded smart grid would see many distributed resources complementing central renewable generation of various kinds. This would make grids more resilient to various problems such as faults and natural disasters and would be optimised for variable load factors. The grid would provide higher quality power. Time of use pricing would become more visible and wholesale markets better integrated (“market empowerment”).

The smart grid would not only do all the above, but enable energy storage as needed, so as not to waste it so much and to reduce the overall levels needed to be generated. There would be benefits to utilities and demand smoothing with distributed generation and storage.

The speaker talked of a ‘jigsaw puzzle’ involving utility company and consumer and their settlement based around a number of factors: consumption or ‘negawatts’ (not); small scale generation; large-scale generation; and efficient distribution.

There was a business opportunity for those controlling the home side of the smart grid; consumers would see benefits from the smart grid, which this speaker claimed was real, not theoretical.

Patenting inventions, and various other legal areas such as trademarks and copyright, were presented as ways to procure competitive defence, offensive strategy and licensing models, and, last but not least, as a bargaining chip/selling point and valuation driver in negotiations.

The solar session followed after lunch networking.
The first speaker talked of solar thermal. He had carried out pilot studies with various types of household: e.g. two adults and a child; three children; one adult and two children; two adults and so on. Generally, over one year, they diverted from 30% to 70% of their gas usage to ‘free’ solar.

He went on to show that solar thermal and air source heat pumps gave the biggest ‘bang for buck’ on CO2 emissions reductions as compared with PV and micro-wind. PV was not far off the pace; wind was very low.

BERR's renewable energy strategy document suggested a target of 7 million solar thermal installations, which would correspond to roughly a quarter of all homes. The HEAT audience survey showed about 3% having installed solar thermal technology.

A speaker on transparent solar energy said that “PV is one of the world’s fastest growing industries -averaging 34% cagr for 30 years and 44% in past 5 years with doubling in 2008 alone. Its installed capacity was only 252MW in 2008, 3073MW in 2007 and some 5000MW in 2008. The PV market was worth some Euro 6bn in 2007 and projected to be worth Euro 10bn in 2008 growing to Euro 30bn by 2012. The forecast for 2013 is $100bn revenues and 23GW (LuxResearch); production doubled in 2008 and forecast to reach 29000MW in 2012. Commercial investment in PV in 2007 alone has been Euro 32bn rising 77% over the previous year.

PV's growth was reflected in an application shift. In 1997 only 8% of PV was grid-connected; in 2007, 90% was grid-connected". Yet, he went on, "the opportunity is just touching the surface. Germany and Spain alone represent 70% of demand, and Japan and California most of the remainder."

PV market drivers are energy security, fuel costs/cost volatility (grid parity within reach), global warming and imposed regulations and feed-in tariffs.

Another PV speaker looked at how PV was first “off-planet” then “off-grid” and now “on-grid”.

A chart was shown giving the price of PV-generated electricity from 1990 to 2040, with worst-case values coming down to 40 Euro cents in 2010, then to 20 Euro cents in 2020 and on down to 10 Euro cents by 2035. Best-case prices were 18, 13 and 5 Euro cents at those corresponding dates. Prices reached those of the grid between 2008 and 2020 for the highest grid prices, and between 2020 and 2034 for the lowest grid prices.

A chart of countries’ situations showed Italy reaching grid parity now, California, Spain and Australia reaching it well before 2020, and other nations trailing, notably China and India. The south of the UK appeared to fall just after the 2020 grid parity curve.

Examples were given of EU nations' feed-in tariffs, such as France's, where a very favourable, guaranteed tariff of 57 Euro cents per kWh is paid to those generating PV. It is clear that for most systems this would mean a net income back to the customer over that 20-year period. The UK lags badly behind this kind of initiative, and one wonders what political capital problem there can be in implementing it. One hopes, given that France and Germany have done it, that this is not the reason why the UK cannot follow: the UK can decide to do it in spite of this, if needs be.

The business model resulting from such FiTs was presented, giving a claimed 12-14% IRR (which takes into account capex for the installation). The building owner would lease her roof space for the system thus obtaining an income from it that wasn’t there before, the PV developer would install it, and the third party investor would benefit from that positive IRR.

An alternative model was that the bank would finance the installation by a service company for a homeowner, who would obtain a small income for ten years and a larger one for a further ten to twenty years. The service company would also design the system and arrange the finance with the bank.

It was estimated that from 20% up to 59% of electricity in the largest 5 Western European states could be derived from solar PV: 178 to 512 of 860 Terawatt hours a year.

The industry's target as of 2008 was for 12% of EU electricity to come from PV by 2020, corresponding to about 350 GWp (gigawatts of peak power).

A member of the audience discussed his own PV system and claimed that it had provided more than half of his electricity on tap and had paid for the entire bill through selling back to the energy ‘provider’.

The final speaker in the solar session had a rather different technology, that of concentrated solar power. This works by converting heat produced from solar thermal energy directed from a system of mirrors to steam and then the steam drives turbines that create electricity, which is sent into the grid.

The speaker suggested that carbon capture and storage doesn't work, because the CO2 leaks out and is susceptible to natural shocks. He also claimed that it was uneconomic and not ready.

He described nuclear energy as “one major incident from shutdown” and cited the power plants, fuel reprocessing and toxic nuclear waste as possible causes of this. He noted the 10-15 year implementation timescale that was almost as bad as for CCS.

He noted the challenges for renewables of load-matching, financing and political collaboration.

Concentrated solar power was beneficial in that it had lower costs, higher cell efficiencies and smaller areas of land needed. It could be applied to cooling as well as heating; also to air-conditioning, desalination, power generation and water pumping.

The CSP technology would also meet green targets, reduce business exposure to energy price spikes, and decouple the timing of projects from the availability of the power grid or gas networks.

There was a discussion on ‘Solar communities’, timely infrastructure, walking communities, mobility, lower cost long term and silent energy. The idea was that cities have become, like SUVs, ‘unfit for purpose’.

In the final session, an investment fund spoke of energy efficiency as key to the energy problem. They showed a German Advisory Council chart claiming that solar would provide just under 25% of global primary energy by 2050, with fossil fuels and nuclear still at 50%, and other renewable carriers making up the 25% or so combined. By 2100, they foresaw that solar energy would represent 75% of the mix, with fossil fuels and nuclear having reduced right down to 10-15%.

A company offering the construction of code 5-6 “passive homes” under modern methods of construction, in very short timescales from partial kit form, gave a highly interesting talk. Clearly, since we are currently generally at code level 3, this would be a tremendous leap-frog. Housing associations are very interested in passive homes, for obvious reasons. The current building industry slump clearly isn’t helping this type of development go ahead.
The speakers told the conference that: “Domestic property contributes 27% of UK’s CO2 emissions. The Government is seeking to reduce the emissions from new homes to zero carbon in all new housing by 2016.”

Space heating was claimed to be just under 60% of household energy consumption.
Assuming that the space heating causes approximately the same level of emissions per unit energy as the rest of the energy use, this means that by eliminating the need for space-heating we could save a maximum of 16% of our CO2 emissions nationally by building or retrofitting to only Passive Houses. Of course this is unlikely! But for any new builds, it can be done.
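The 16% maximum follows directly from the two quoted shares:

0.27 \times 0.60 \approx 0.16 = 16\%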

What is a passive house? The speaker likened it to a super-efficient thermos flask rather than a wasteful hot-plate under a jug of coffee. With the thermos flask, the coffee remains hot for a period of time, and the flask is "off-grid".

The aim is to keep the heat within the house. One uses heat-exchanger coils to transfer heat from the outgoing 'used' air to the fresh air coming in. That is, a mechanical ventilation system with heat recovery provides clean and healthy air around the clock, eliminating the need to air the house manually. The building is airtight. The design is such that the south-facing aspects let in warmth through triple-glazed windows. The walls are thicker. More sustainable materials than concrete and steel are typically used. The ways that heat is lost through the roof and floor are dealt with using new technologies. While nothing is ever perfect, this set of arrangements makes the house vastly better at keeping in heat.

The first offsite-manufactured Passive House in the world was built in Ireland in 2003, in just 25 days!

The UK Government wants 3 million new homes built by 2020, 35% of them to be social and affordable! All new homes are to be zero-carbon from 2016. This speaker claimed the solution was to apply pre-engineered, offsite-manufactured Passive House technology to all new homes in the UK.

We look forward to seeing you in 2009 for continuing conversations and discussions on 19 June, 25 September and 3-4 December!