
Student-b: Realistic decision-making for time-related quantities in business

Business projects and sales programmes often run to double the time and double the cost: how would Bayes have accounted and planned for these?

I now turn to important and sometimes critical time-measures that are used in business decision-making, strategic planning and valuation, such as ‘sales cycle’ time, customer lifetime, and various ‘time-to-market’ quantities, such as time to proof of concept or time to develop a first version of a product.

Bayesian analysis enables us to make good, common-sense estimates in this area, where frequency statistics fails. It allows us to combine sparse past observations of positive cases, all of our recent observations where no good result has yet happened, and subjective knowledge, all treated together in an objective way, using all of the above information and data and nothing but this. That is, it will be a maximum entropy treatment of the problem, in which we use only the data we have, as accurately as possible, and nothing more.

We assume as our model that the time taken to success, t, measured in quarters of a year, follows an exponential distribution, p(t|λ) = λe^(−λt) for t > 0, where λ is the mean rate of success for the next case in point. We have available some similar prior data, over a period of t quarters, where we had n clients and r ≤ n successful sales (footnote 1).

Let

T = r·t̄ + (n − r)·t

be the total number of quarters (i.e. three-month periods of time) over which we have observed the selling process, e.g. of a product or service: the time up to each success, plus the full period t for each client where no success has yet happened, and where

t̄ = (1/r) Σⱼ tⱼ

is the mean of the observed times to success tⱼ over the r successful data points.

Let the inverse, θ = λ⁻¹, be the mean time to success: the quantity we want to estimate predictively, and track or monitor carefully, ideally in real time, from as early as possible in our business development efforts. Examples are the mean sales-cycle time (the time from first contact with a new client to the first sale), the time between sales, or the time between marketing campaigns, product releases or versions, and so on. We shall create an acceptance test for this θ to be above a selected test value θ0, at some given level of credibility or rational degree of belief, P, that my team of executives are comfortable with or interested in.

I wish to obtain an expression for the probability that the predicted time-to-success in quarters is above (or below) θ0, in terms of θ0, T, n and r, i.e. given all the available evidence.

By our hypothesis (model), the probability that the time to success exceeds θ0, for a given λ, is e^(−λθ0).

The prior probability for the subjective belief in the mean time taken, ts, is taken to be distributed exponentially around this value, ps(λ) = ts·e^(−λts), which is the maximally-equivocal (footnote 2), most objective assumption.

The probability of the observed data, for a given value of λ, is proportional to λ^r·e^(−λT); multiplying by the prior above and normalising, the evidence in the test data together with our best expert opinion, summarised by T and ts, gives the probability 'density'

p(λ|D) = (T + ts)^(r+1) · λ^r · e^(−λ(T+ts)) / r!
Multiplying the probability e^(−λθ0) that the time is greater than θ0 by this density for each value of λ, and integrating over all positive values of λ, I find that the probability that the next time to sale (or the next customer lifetime, or time-to-X) is greater than our selected test value θ0 is given by

p(D, θ0) = ∫₀^∞ e^(−λθ0) · p(λ|D) dλ = ((T + ts) / (T + ts + θ0))^(r+1)

where p(D, θ0) is the posterior probability as a function of our data D and the (acceptance) test value θ0. After some straightforward algebra this turns out to be a simple expression from which one can obtain the numerical value directly, with T having been shifted by the inclusion of the subjective expert time ts, T → T + ts, which is our subjective, common-sense, maximum entropy prior belief as to the mean length of time, in quarters, for this quantity.
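This closed-form result is easily computed; a minimal Python sketch (function names are mine, not from the post), together with its inversion to find the θ0 that meets a chosen acceptance probability P:

```python
def prob_exceeds(theta0, T, ts, r):
    """Posterior probability that the next time-to-success exceeds theta0:
    p(D, theta0) = ((T + ts) / (T + ts + theta0)) ** (r + 1),
    where T is the total observed time, ts the subjective prior mean,
    and r the number of observed successes."""
    total = T + ts  # T shifted by the subjective expert time
    return (total / (total + theta0)) ** (r + 1)

def threshold(P, T, ts, r):
    """Invert prob_exceeds: the theta0 at which the exceedance
    probability equals P, i.e. solve (total/(total+theta0))**(r+1) = P."""
    total = T + ts
    return total * (P ** (-1.0 / (r + 1)) - 1.0)
```

With the first worked example's figures (T = 29.5, ts = 4.25, r = 6), `threshold(0.2, ...)` returns about 8.7 quarters, matching the text.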

Suppose we have an acceptance probability of P × 100% that our rational mean sales cycle time for the next customer, or time-to-market for a product or service, is less than some time θ0. I thus test whether p(D, θ0) < P. If this inequality is true, (we have chosen P such that) our team will accept and work with this case, because it is sufficiently unlikely for us that the time to sale or sales cycle is longer than θ0. Alternatively, I can determine what θ0 is for a given limiting value of P, say 20%. For example: take some data where n = 8 and r = 6, the expert belief is that the mean sales time is ts = 4.25 quarters, i.e. just over a year, there were specific successes at tj = (3, 4, 4, 4, 4.5, 6) quarters corresponding to our r = 6, and we run the new test for t = 2 quarters. We want to be 80% sure that our next impact-endeavour for sales/etc. will not last more than some given θ0 that we want to determine. I put in the values and find T + ts = 33.75; continuing to determine θ0, I find that with odds of 4:1 on, the time/lifetime/time-to-X is no greater than 8.7 quarters.
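These example numbers can be reproduced in a few lines (a sketch; variable names are mine, and I take the expert mean as ts = 4.25 quarters, the "just over a year" value consistent with the quoted 33.75):

```python
tj = [3, 4, 4, 4, 4.5, 6]    # observed times to success, in quarters
n, r, t = 8, 6, 2            # clients, successes, test period for the rest
ts = 4.25                    # subjective prior mean ("just over a year")

T = sum(tj) + (n - r) * t    # total observed time: 25.5 + 4 = 29.5
Tp = T + ts                  # shifted total: 33.75

P = 0.2                      # odds of 4:1 on
theta0 = Tp * (P ** (-1 / (r + 1)) - 1)   # invert (Tp/(Tp+theta0))**(r+1) = P
print(round(Tp, 2), round(theta0, 1))     # 33.75 8.7
```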

Suppose that we had more data, say an average of t̄ = 4.25 quarters with r = 15 actual successes out of n = 20 trials, again with a test period of t = 2 quarters. We decide to rely on the data and set ts = t̄ = 4.25. Now T + ts = 78. Keeping the same acceptance probability or odds requirement at 80%, or 4:1 on, we find θ0 ≤ 8.25 quarters. If we were considering customer lifetime, rather than sales cycle time or similar measures like time to proof of concept, we benefit when the lifetime of the customer is more than a given value of time θ0, and so we may look at tests where P > 80%, and so on.

If we omit the quantity ts, we find that the threshold θ0 = 7.8 quarters, only a small tightening, since the weight of one subjective 'data point' is much smaller than the effect of so many, O(n), 'real' data points.
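A sketch reproducing this larger-data example, with and without the subjective ts (variable names mine; I again take a test period of t = 2 quarters and t̄ = ts = 4.25, consistent with the quoted total of 78):

```python
r, n, t = 15, 20, 2            # successes, trials, test period (quarters)
tbar = ts = 4.25               # observed mean; prior set equal to it
T = r * tbar + (n - r) * t     # total observed time: 63.75 + 10 = 73.75
P = 0.2                        # 4:1 on

def solve_theta0(total, r):
    # invert p = (total / (total + theta0)) ** (r + 1) = P for theta0
    return total * (P ** (-1 / (r + 1)) - 1)

with_prior = solve_theta0(T + ts, r)      # uses T + ts = 78
without_prior = solve_theta0(T, r)        # drops the subjective ts
print(round(with_prior, 2), round(without_prior, 1))  # 8.25 7.8
```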

Now I wish to consider the case where we run a test for a time t with n opportunities. After a time t, we obtain a first success (footnote 3), so that r = 1 and we note that t̄ = t. I then set ts = t, so that T + ts reduces to (n + 1)t, and if we look at the case θ0 = t, our probability reduces to an expression that is a function of n alone:

p(D, t) = ((n + 1)t / ((n + 1)t + t))² = ((n + 1) / (n + 2))²

Since ∞ > n ≥ 1,

4/9 ≤ p(D, t) < 1,

i.e. if we are only testing one case and we stop this test after time t with one success, r = 1 = n, this gives us our minimal probability, 4/9, that the mean is θ ≥ t, all agreeing with common sense. It is interesting that the only case where we can achieve a greater than 50:50 probability of θ < t = ts = t̄ is when we tested only n = 1 case to success. This is of course probing the niches of sparse data, but in business one often wishes to move ahead with a single 'proof of concept'. It is interesting to be able to quantify the risks in this way.
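A minimal sketch of this single-test case (function name mine), confirming that n = 1 is the only value giving a better-than-even chance that θ < t:

```python
def p_exceed_t(n):
    """p(theta > t) when r = 1 and ts = tbar = t, so T + ts = (n+1)t
    and theta0 = t:  p = ((n + 1) / (n + 2)) ** 2."""
    return ((n + 1) / (n + 2)) ** 2

# n = 1 gives the minimum, 4/9; for every n >= 2 the probability
# that the mean exceeds t is already better than even.
assert p_exceed_t(1) == (2 / 3) ** 2
assert all(p_exceed_t(n) > 0.5 for n in range(2, 100))
```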

Consider now the (extreme) case where we have no data, only our subjective belief (footnote 4), quantified as ts. Let us take

θ0 = m·ts, with m an integer;

then our probability p(∅, θ0) of taking longer than this time reduces to

p(∅, θ0) = ts / (ts + m·ts) = 1 / (1 + m).

This means that at m = 1 the probability of being greater or less than θ0 is a half, which is common sense. If we want to have odds of, say, 4:1 on, or a probability of only 20% of being above θ0 quarters, then we require m = 4; in general the relationship between the 'odds to 1' and m is simply odds of m:1 on.
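The no-data case can likewise be sketched (assuming, as above, θ0 = m·ts, so that with r = 0 and T = 0 the probability collapses to 1/(1 + m)):

```python
def p_no_data(m):
    """With no data at all (r = 0, T = 0) and theta0 = m * ts:
    p = ts / (ts + m*ts) = 1 / (1 + m), i.e. odds of m:1 on."""
    return 1 / (1 + m)

assert p_no_data(1) == 0.5   # even odds at theta0 = ts
assert p_no_data(4) == 0.2   # 4:1 on that the time stays below 4*ts
```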

Again this all meets with common sense, but it shows us how to deal with a near or complete absence of data, as well as how the situation changes as more and more data arrive. The moral is that for fairly sparse data, when we seek relatively high degrees of belief about the sales time or time needed the next time we attempt something, the Reverend Bayes is not too forgiving, although he is more forthcoming with useful and concise information than an equivalent frequency-statistics analysis. As we accumulate more and more data, we can see the value of the data very directly, since we have quantified how our risks are reduced as it comes in.

The results seem to fit our experiences with delays and over-budget projects. We must take risks with our salespeople and our planning times, but with this analysis, we are able to quantify and understand these calculated risks and rewards and plan accordingly.

One can extend this with a two-parameter model that reduces to it, but which allows for a shape (hyper)parameter. This gives flexibility around prior data, such as the general observation that an immediate success, failure or general 'event' is not common, around the position of the mean relative to the mode, and also around learning/unlearning, since the resulting process need not be memoryless (see another blog here!)

  1. or customer lifetimes, or types of time-to-market, or general completions/successes, etc.↩︎
  2. highest entropy, where the uncertainty measure is given by S = −Σs ps log ps.↩︎
  3. e.g. a sale in a new segment/geography/product/service↩︎
  4. if we neither have any data nor a subjective belief, the model finally breaks down; but that is all you can ask of the model, and a good Bayesian would not want the model to 'work' under such circumstances!↩︎

Home Energy and Technology (HEAT) 2008 Conference Summary


Over 100 people and 10 exhibitors spent 10 buzzing hours, across 12 talks, 5 special elevator pitches, 2 panel sessions and 3 hours of social networking, discussing ways to make deep cuts in emissions from the built environment, as well as ways of building their businesses. The sins of greenwashing ('every little helps'; vagueness: what are the figures?; lack of proof; lying; hidden costs or omissions; mentioning only the better of two bad things) were at the forefront of delegates' minds at this independent conference on home energy and technology.

The day began with a description of the difficulties for the early pioneers: solar thermal with storage of heat. Few had such systems, according to a quick audience survey, but the experience of building one into an old house was clear: hire specialists, and do not choose builders who don't care about energy matters. If it is not in their hearts and minds, they will not enjoy the work, and you will waste effort and money just getting them to install it properly, and indeed to do all the basic things needed to facilitate it, such as lagging all pipes and cutting out draughts.
Using a super-efficient solar accumulator tank, much larger than a normal one and with vastly greater insulation, one can reduce its heat loss to 40-100W, compared with nearly 500W from a normal tank, thus reducing the temperature drop overnight to 2-3C rather than 25-30C. By taking in energy from solar thermal flat-plate panels or collectors, one can heat this water, hold the energy as a thermal store, and then use it for the radiators and hot-water taps. By early March in Cambridge, an average system heats the water to more than 40C on a day with a decent run of sunshine.
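Those overnight temperature drops are consistent with simple heat-capacity arithmetic. A sketch with illustrative tank sizes (the sizes and the 10-hour night are my assumptions, not figures from the talk):

```python
def overnight_drop_C(loss_watts, hours, litres, c=4186):
    """Temperature drop of a water tank from a steady standing heat loss:
    energy lost (J) divided by the heat capacity (J/K) of the stored water,
    taking roughly 1 kg of water per litre."""
    energy_j = loss_watts * hours * 3600
    heat_capacity_j_per_k = litres * c
    return energy_j / heat_capacity_j_per_k

# Hypothetical sizes: a ~160-litre standard cylinder losing ~500W,
# versus a ~400-litre accumulator losing ~100W, over a 10-hour night.
print(round(overnight_drop_C(500, 10, 160)))  # falls in the 25-30C range
print(round(overnight_drop_C(100, 10, 400)))  # falls in the 2-3C range
```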

An example was given of a very careful person’s carbon footprint: but air travel to visit relatives in the US made it very difficult to be significantly below average. The solar thermal system led to a heating-related emissions level of just one third the national average.

The keynote speaker in the morning stated that energy use was set to increase by 57% between 2002 and 2025 according to EPRI, ACEEE and IMS research.
Coming from a chip design company, he focused on efficient end-use of energy, the other two pillars of sustainable energy being renewables and efficient delivery of primary energy to end-use. Electric motors account for half of all electrical energy consumption, and the poor efficiency with which electricity is converted into motion was singled out as the area where his company could do most to reduce energy use. He suggested that 60% of the energy used by electrical motors could be saved, and that small motors (those of less than 10-20kW power) were the best places to look to achieve this.

The claim was that switching to energy-efficient motor-driven systems could save Europe up to 341 billion kWh (Euro 31 billion) per year in electricity (according to BERR, Eurostat and SEEEM 2006). This would translate to about 42 billion kWh for the UK, which is 2 kWh a person a day. This is a small share of our energy: perhaps about 11% of our electrical energy usage in the UK (not including losses in conversion). In the UK we each use about 125 kWh/day across all types of energy; 18 kWh a person a day is electrical usage, with a further 27 kWh a person a day lost in converting the primary sources of electrical energy (see MacKay, "Sustainable Energy").
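As a quick arithmetic check of the per-person figures (a sketch; the round population number is my assumption):

```python
uk_savings_kwh = 42e9     # annual UK saving estimate quoted in the talk
uk_population = 60e6      # approximate UK population at the time (assumed)

per_person_day = uk_savings_kwh / uk_population / 365
print(round(per_person_day, 1))           # ~1.9 kWh per person per day

# as a share of the 18 kWh/day of electrical usage quoted above
share_of_electricity = per_person_day / 18
print(round(share_of_electricity * 100))  # ~11%
```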

The speaker went on to say that a range of stakeholders benefit from energy efficiency: manufacturers can have a more reliable product; service providers should have fewer product returns; the customer has lower energy bills; the government is helped with its energy (sustainability and security) policy; and, last but not least, the environment is better protected.

The next speaker presented micro- and nano-scale energy-recycling techniques, looking at savings in electronic design for packaging, air conditioning and battery packs. His intelligent output driver would "reduce power losses in the driven load by up to 75% and reduce the number of components in a system." He promised future technology which would recycle energy within chips.

Another speaker suggested that refrigeration is a significant source of emissions and inefficiencies and could be helped by better technologies.

Two speakers talked of the need to measure energy and emissions (in the home) through smart metering, so that the homeowner could make better decisions. For example, putting washing into a tumble dryer might be 10% of one's energy bill, whereas putting that washing on the line costs zero or near-zero energy and emissions, though it takes more time and effort. The point is that by knowing what the 'big tickets' are, one can address those and not waste too much time on those which are not, such as phone chargers (less than 0.1% of the average bill).

Looking at the electricity grid as a whole, another speaker showed a picture of the US from the sky at night, very bright across a high percentage of the land mass, and said that $1000bn of investment was needed to renew an ageing grid facing increasing demand.

She noted the stringent regulations on carbon emissions to come. The upgraded smart grid would see many distributed resources complementing central renewable generation of various kinds. This would make grids more resilient to various problems such as faults and natural disasters and would be optimised for variable load factors. The grid would provide higher quality power. Time of use pricing would become more visible and wholesale markets better integrated (“market empowerment”).

The smart grid would not only do all the above, but enable energy storage as needed, so as to waste less and to reduce the overall level of generation needed. There would be benefits to utilities, and demand smoothing with distributed generation and storage.

The speaker talked of a 'jigsaw puzzle' involving utility company and consumer, with their settlement based around a number of factors: consumption avoided, or 'negawatts'; small-scale generation; large-scale generation; and efficient distribution.

There was a business opportunity for those controlling the home side of the smart grid; consumers would see benefits from the smart grid, which this speaker claimed was real, not theoretical.

Patenting inventions, and other legal areas such as trademarks and copyright, were presented as ways to procure competitive defence, offensive strategy and licensing models, and, last but not least, as a bargaining chip/selling point and valuation driver in negotiations.

The solar session followed the lunchtime networking.
The first speaker talked of solar thermal. He had carried out pilot studies with various types of household: e.g. two adults and a child; three children; one adult and two children; two adults; and so on. Generally, over one year, they diverted from 30% to 70% of their gas usage to 'free' solar.

He went on to show that solar thermal and air source heat pumps gave the biggest ‘bang for buck’ on CO2 emissions reductions as compared with PV and micro-wind. PV was not far off the pace; wind was very low.

BERR's renewable energy strategy document suggested a target of 7 million solar thermal installations, which would correspond to roughly a quarter of all homes. The HEAT audience survey showed about 3% having installed solar thermal technology.

A speaker on transparent solar energy said that "PV is one of the world's fastest-growing industries, averaging 34% CAGR over 30 years and 44% over the past 5 years, with a doubling in 2008 alone. Its installed capacity was only 252MW in 2008, 3073MW in 2007 and some 5000MW in 2008. The PV market was worth some Euro 6bn in 2007 and projected to be worth Euro 10bn in 2008, growing to Euro 30bn by 2012. The forecast for 2013 is $100bn revenues and 23GW (Lux Research); production doubled in 2008 and is forecast to reach 29000MW in 2012. Commercial investment in PV in 2007 alone was Euro 32bn, rising 77% over the previous year.

PV's growth was reflected in a shift of application. In 1997 only 8% of PV was grid-connected; in 2007, 90% was grid-connected." Yet, he went on, "the opportunity is just touching the surface. Germany and Spain alone represent 70% of demand, and Japan and California most of the rest."

PV market drivers are energy security, fuel costs and cost volatility (grid parity within reach), global warming, and imposed regulations and feed-in tariffs.

Another PV speaker looked at how PV was first “off-planet” then “off-grid” and now “on-grid”.

A chart was shown giving the price of PV-generated electricity from 1990 to 2040, with worst-case values coming down to Euro 40 cents in 2010, then to Euro 20 cents in 2020, and on down to Euro 10 cents by 2035. Best-case prices were Euro 18, 13 and 5 cents at those corresponding dates. Prices reached those of the grid between 2008 and 2020 for the highest grid prices, and between 2020 and 2034 for the lowest.

A chart of countries’ situations showed Italy reaching grid parity now, California, Spain and Australia reaching it well before 2020, and other nations trailing, notably China and India. The south of the UK appeared to fall just after the 2020 grid parity curve.

Examples were given of EU nations' feed-in tariffs, such as France, where a very favourable, guaranteed tariff of Euro 57 cents per kWh is paid, over a 20-year period, to those generating PV electricity. It is clear that for most systems this would mean a net income back to the customer over that period. The UK lags badly behind this kind of initiative, and one wonders what political-capital problem there can be in implementing it. One hopes, given that France and Germany have done it, that this is not the reason why the UK cannot follow: the UK can decide to do it in spite of this, if needs be.

The business model resulting from such FiTs was presented, giving a claimed 12-14% IRR (taking into account capex for the installation). The building owner would lease her roof space for the system, obtaining an income from it that wasn't there before; the PV developer would install it; and the third-party investor would benefit from that positive IRR.

An alternative model was that the bank would finance the installation by a service company for a homeowner, who would obtain a small income for ten years and a larger one for a further ten to twenty years. The service company would also design the system and arrange the finance with the bank.

It was estimated that from 20% up to 59% of electricity in the largest 5 Western European states could be derived from solar PV: 178 to 512 of 860 Terawatt hours a year.

The industry's target as of 2008 was for 12% of EU electricity to come from PV by 2020, corresponding to 350GWp (gigawatts of peak power).

A member of the audience described his own PV system and claimed that it had provided more than half of his electricity and had paid for the entire bill through selling back to the energy 'provider'.

The final speaker in the solar session had a rather different technology: concentrated solar power. This works by using a system of mirrors to direct solar energy as heat, raising steam; the steam then drives turbines that create electricity, which is sent into the grid.

The speaker suggested that carbon capture and storage does not work, because the CO2 leaks out and is susceptible to natural shocks. He also claimed that it is uneconomic and not ready.

He described nuclear energy as “one major incident from shutdown” and cited the power plants, fuel reprocessing and toxic nuclear waste as possible causes of this. He noted the 10-15 year implementation timescale that was almost as bad as for CCS.

He noted the challenges for renewables of load-matching, financing and political collaboration.

Concentrated solar power was said to be beneficial in its lower costs, higher efficiencies and the small land area needed.
It could be applied to cooling as well as heating, and also to air conditioning, desalination, power generation and water pumping.

The CSP technology would also meet green targets, reduce business exposure to energy price spikes, and decouple the timing of projects from the availability of the power grid or gas networks.

There was a discussion on 'solar communities': timely infrastructure, walkable communities, mobility, and silent, lower-cost, long-term energy. The idea was that cities have become, like SUVs, 'unfit for purpose'.

In the final session, an investment fund spoke of energy efficiency as key to the energy problem. They showed a German Advisory Council chart claiming that solar would provide just under 25% of global primary energy by 2050, with fossil fuels and nuclear still at 50%, and other renewable carriers making up the 25% or so combined. By 2100, they foresaw that solar energy would represent 75% of the mix, with fossil fuels and nuclear having reduced right down to 10-15%.

A company offering the construction of Code level 5-6 "passive homes", built under modern methods of construction in very short timescales from partial kit form, gave a highly interesting talk. Since new homes are currently generally built to Code level 3, this would be a tremendous leap-frog. Housing associations are very interested in passive homes, for obvious reasons. The current building-industry slump clearly isn't helping this type of development go ahead.
The speakers told the conference that: “Domestic property contributes 27% of UK’s CO2 emissions. The Government is seeking to reduce the emissions from new homes to zero carbon in all new housing by 2016.”

Space heating was claimed to be just under 60% of household energy consumption.
Assuming that space heating causes approximately the same level of emissions per unit of energy as the rest of household energy use, eliminating the need for space heating, by building or retrofitting only Passive Houses, could save a maximum of about 16% of our national CO2 emissions (60% of the domestic 27%). Of course this is unlikely across the whole housing stock! But for any new builds, it can be done.
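The 16% ceiling follows directly from the two shares quoted above; a one-line arithmetic check:

```python
domestic_share = 0.27       # domestic property's share of UK CO2 emissions
space_heating_share = 0.60  # space heating's share of household energy use

# upper bound on national CO2 saving if space-heating demand were eliminated
max_saving = domestic_share * space_heating_share
print(round(max_saving * 100))  # ~16% of national emissions
```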

What is a Passive House? The speaker likened it to a super-efficient thermos flask, rather than a wasteful hot-plate under a jug of coffee. With the thermos flask, the coffee remains hot for a long period of time, and the flask is "off-grid".

The aim is to keep the heat within the house. One uses heat exchanger coils to transfer heat from the outgoing 'used air' to the fresh air coming in; that is, a mechanical ventilation system with heat recovery provides clean and healthy air around the clock, eliminating the need to air the house manually. The building is airtight. The design is such that the south-facing aspects let in warmth through triple-glazed windows. The walls are thicker. More sustainable materials than concrete and steel are typically used. Heat loss through the roof and floor is addressed with new technologies. While nothing is ever perfect, this set of arrangements makes the house vastly better at keeping heat in.

The first offsite-manufactured Passive House in the world was built in Ireland in 2003, in just 25 days!

The UK Government wants 3 million new homes built by 2020, 35% of them to be social and affordable, and all new homes to be zero-carbon from 2016. This speaker claimed the solution was to apply pre-engineered, offsite-manufactured Passive House technology to all new homes in the UK.

We look forward to seeing you in 2009 for continuing conversations and discussions on 19 June, 25 September and 3-4 December!