In my last post, I offered four possible future funding scenarios for higher education in the UK. In the period since then (which, admittedly, has been longer than I would have liked), we have seen the publication of the Browne Report on higher education and the coalition government's subsequent choice of a new university funding system.

I suggested, back in 2010, that the most likely course of action to plug the impending university funding gap would be to transfer the burden from the taxpayer to the graduate. With the withdrawal of HEFCE funding for band C and D subjects (primarily the arts and humanities, drawing accusations of philistinism from many quarters), and the increase in the tuition fee cap to £9,000, this is exactly what the current administration has done.

What is not so clear, however, is the effect that these changes will have on the HE marketplace. They certainly haven't created much in the way of price-led competition: the average fee level is perilously close to the £9,000 maximum, and nowhere near the naive £7,500 suggested by the Government.

In fact, contrary to my prediction in the previous post on this subject, there appears to be little to differentiate Russell Group and post-92 institutions from a cost perspective; all are proposing fee levels at either £9,000 or very close to that figure.

The question, perhaps, is whether these prices are sustainable. A degree as a product is effectively a Veblen good (that is, one for which demand rises with price, inverting the normal relationship), and, like that most famous Veblen good – champagne – it can be tainted by perceptions of poor quality when the price is too low. However, this doesn't mean that a Veblen good can ignore the normal price/demand relationship entirely; Moët et Chandon would have a hard job selling their bog-standard NV bubbly for the same price as Krug.
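For illustration only, here is a toy sketch (in Python) of the sort of demand curve I have in mind: one in which demand rises with price up to a prestige ceiling and tails off beyond it. The functional form, the £9,000 ceiling and every number in it are my own invented assumptions, not real market data.

    # A toy Veblen-style demand curve: demand rises with price over a
    # "prestige" range, then normal price/demand behaviour reasserts itself.
    # All numbers and the functional form are invented for illustration only.
    def relative_demand(price, prestige_ceiling=9_000):
        if price <= prestige_ceiling:
            # Within the prestige range, a higher price signals quality.
            return 50 + 0.01 * price
        # Beyond the ceiling, demand falls as price rises (the usual relationship).
        return max(0.0, 140 - 0.005 * (price - prestige_ceiling))

    for p in (3_000, 6_000, 9_000, 12_000, 18_000):
        print(f"£{p:,}: relative demand {relative_demand(p):.0f}")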

One wonders how many of the post-92s will continue to be able to flog their own equivalent of £25 wine for the same price as Oxbridge's Krug. This is something that I will be investigating in forthcoming posts.

Public sector funding cuts. Four words that, perhaps, define the immediate future for the UK’s economy, and those of many of its competitors. Four words that may be greeted with a certain unwholesome relish by many of the more right-wing members of society, or may be viewed as a precursor to some sort of educational/civil Armageddon by the more ‘end of the world is nigh’ variety of public sector manager. But how will these cuts affect students in the UK over the next few years, and how will the notion of inclusive higher education, which was so key to the previous Government’s higher education policy, fare in a society where the word ‘austerity’ has suddenly begun to be used in sentences that aren’t specifically related to the country’s economic condition in the post-World War 2 rationing era?

Before that question can be answered, it is, perhaps, worthwhile examining the UK’s current position in the global HE hierarchy. With more universities in the QS 2009 World University Rankings top 600 than any nation bar the United States, and with four institutions featuring in the top ten, the UK currently occupies a place at the metaphorical ‘top table’ of international HE providers; a legacy of significant investment in the sector since the late 1990s, and a highly developed research culture in the leading institutions.

As I have already mentioned in a previous post on the future for knowledge transfer in HE, the UK's top higher education institutions are skilled at producing world-class research at a lower cost than many of their G8 competitors. I put forward the viewpoint in that post that the relatively high efficiency of UK universities, together with the likely forthcoming increase in tuition fees, would largely obviate the detrimental effects of public spending cuts on university budgets. With the emergence of the Conservative/Liberal coalition since I wrote that article, though, I feel it is worth revisiting my arguments, in order to ascertain how they fare in a political landscape that is considerably different from that of a few months ago.

The first thing to consider is the expected rise in tuition fees. In my post I envisaged an increase to a level that would offset the likely significant reduction in the per-student contract from the higher education funding council, but which would also take into account the price elasticity of demand for courses. As recent research from OpinionPanel has shown that overall demand for a degree remains fairly price inelastic up to around £6,000 per year, this might seem a reasonable estimate of where the forthcoming Browne report will recommend setting the maximum fee level.
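For anyone who wants to see the arithmetic behind that sort of claim, here is a minimal sketch of an arc (midpoint) elasticity calculation. The fee and application figures are invented purely for illustration; they are not OpinionPanel's data.

    # Arc (midpoint) price elasticity of demand; |value| < 1 means demand
    # is price inelastic over the range in question.
    def arc_elasticity(q1, q2, p1, p2):
        pct_change_q = (q2 - q1) / ((q1 + q2) / 2)
        pct_change_p = (p2 - p1) / ((p1 + p2) / 2)
        return pct_change_q / pct_change_p

    # Illustrative only: fees double from £3,000 to £6,000, applications dip slightly.
    e = arc_elasticity(q1=100_000, q2=96_000, p1=3_000, p2=6_000)
    print(f"Arc elasticity: {e:.2f}")  # roughly -0.06, i.e. highly inelastic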

However, there are a number of issues that may affect the likelihood of this happening. First, the Liberal Democrats are ideologically opposed to tuition fees and are unlikely to support the Tories should they attempt to push an increase through the Commons. Given the lack of a clear Conservative majority, it is probable that significant concessions – possibly related to student support – would need to be made before this could make a successful journey through Parliament. Second, as a likely consequence of this, a financial support package to cover the increased fees would need to be made available, which, at interest rates that are generally well below the Government's own cost of borrowing, is an expensive proposition. The long tail of student loan repayments also affects the financial viability of such a scheme.

For a short while it seemed as if the coalition was moving towards the implementation of a graduate tax as an alternative to this; a solution that would have saved face for the Liberals. Common sense, for once, prevailed, though, as Ministers were quickly made aware of the numerous logistical difficulties inherent in a tax-based repayment system, not least of which is the possibility of a brain drain caused by high-performing graduates avoiding higher tax repayments by working abroad. The chances of a graduate tax featuring amongst the recommendations in Lord Browne's report on HE funding, when it is released in the autumn, now seem about as likely as BP winning the 2010 Greenpeace award for corporate environmental responsibility.

So, given these issues, what are the options open to the Government? I think we can envisage four main possible scenarios, of varying likelihood:

Scenario 1 – Elitism (likelihood – very low)

Those who decry the proliferation of media studies graduates from (to borrow a phrase from the comedian Frankie Boyle) ‘universities that used to be swimming pools’ tend to favour this scenario, whereby funding is directed in a concentrated manner towards traditional academic subjects in the research-intensive universities, and severely limited elsewhere.

This has the advantages of rewarding success, addressing concerns about the ‘dumbing down’ of degrees and reducing costs, but it also has the rather significant disadvantages of being detrimental to social mobility, diversity and equality and, as research on HE participation has shown, to the strength of the UK’s knowledge economy. Very unlikely to be implemented.

Scenario 2 – The Status Quo (likelihood – low)

Not endlessly repeated twelve-bar blues chord sequences and suspect hairstyle/waistcoat combinations; rather, the continuation of the current system.

The obvious disadvantages of this are many: demand continuing to outstrip supply; high cost to the public purse; and per student funding declining in real terms. The upsides would be limited job losses in the sector, and a relative lack of political fall-out. An unlikely course of action.

Scenario 3 – Cut and Cut Some More (likelihood – medium)

Had the Tories achieved a sizeable majority, this would have been the most likely outcome for the sector. But they didn’t, so it isn’t, although it still remains a slim possibility.

The rationale for this scenario is that significant efficiency savings could be made in HE by cutting non-essential programmes of activity. These might include such funding streams as widening participation and third-stream support, but the cuts could also extend to core funding, such as the hitherto sacrosanct unit of resource.

The upsides would be primarily fiscal, although the fact that it would provide a justification for dismantling many of the Labour-instituted programmes that are anathema to some senior Tories would no doubt prove attractive.

On the debit side, it is probable that there would be significant political unrest if the cuts were deep, and the negative effects on the UK economy, unemployment and the competitiveness of the country’s universities internationally could outweigh any advantages gained through reducing the public sector borrowing requirement.

Scenario 4 – With One Hand Giveth (or Appear to Giveth), With the Other Hand Taketh Away (likelihood – high)

This scenario appears to be the most likely one: make smaller cuts than in scenario 3, but balance the books by allowing a significant increase in tuition fees. Through the approval of a greater number of private providers, and the removal of funding used to prop up failing universities, the sector would be forced to become more market-led, thus adhering to one of the golden principles of the political theory that dare not speak its name (otherwise known as Thatcherism).

This has the primary advantage of shifting the onus of fiscal responsibility from the state to the individual, and consequently allowing acceptable service levels to be maintained whilst reducing the overall higher education budget.

As previously mentioned, the main difficulty with this scenario relates to the fact that any increase in fees will not fit well with the Liberal Democrat view of higher education as a right, rather than a privilege. However, given that the Liberals have reputedly been offered the route of abstention on any vote concerning tuition fee rises, it is perfectly plausible, given the current division of seats between the main parties, that an increase could be pushed through Parliament without their support. How palatable this is to the Lib Dems and their party faithful will be determined by the relative unattractiveness of the alternatives.

Of course, should scenario 4 play out as expected, there is still the small matter of student support funding to consider. Especially important in this respect is the increase in the tuition fee loan necessitated by any rise in tuition fees themselves. Given the disparity in the cost of borrowing and the cost of lending, the long lead times and the relatively high non-repayment rates of student loans, this is more expensive than it might at first seem.

The answer might be to charge a rate of interest on the loans that reflects the actual cost of borrowing. This would be in the region of 2-2.5% above RPI (as opposed to the current rate, which is RPI+0%). Whilst this would help to plug the gap between acceptable public expenditure and the anticipated cost of support, it would be something of a double whammy – higher fees and higher interest – for prospective students and their families, particularly those in the middle income bracket who wouldn’t benefit from the non-repayable grants and bursaries, which are likely, in the next few years, to have their upper thresholds set at progressively lower income levels.
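To show why the interest rate matters so much, here is a rough sketch comparing how a loan balance might evolve at RPI+0% and at RPI+2.5%. The loan size, the RPI figure and the flat annual repayment are all assumptions of mine, and the model deliberately ignores income-contingent repayment.

    # A rough comparison of loan balances under the current RPI + 0% regime
    # and a hypothetical RPI + 2.5% regime. All inputs are illustrative
    # assumptions; repayments are modelled as a flat annual amount.
    def balance_after(years, principal, rpi, real_rate, annual_repayment):
        balance = principal
        for _ in range(years):
            balance *= 1 + rpi + real_rate   # interest accrues first
            balance = max(0.0, balance - annual_repayment)
        return balance

    principal = 21_000   # assumed total tuition fee loan
    rpi = 0.03           # assumed long-run RPI
    repayment = 1_200    # assumed flat annual repayment

    for real_rate in (0.0, 0.025):
        remaining = balance_after(20, principal, rpi, real_rate, repayment)
        print(f"RPI + {real_rate:.1%}: balance after 20 years is about £{remaining:,.0f}")

Under these (invented) numbers, the RPI+0% balance shrinks steadily, whereas at RPI+2.5% the interest almost keeps pace with the repayments and the debt barely declines – which is precisely the double whammy described above.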

My opinion is that the Browne Report will recommend a significant increase in fees, but that differential pricing will actually come into play this time round, and universities will be much more aggressive than they have been in the past at pricing themselves at a level that reflects their market position. This didn’t happen with the previous top-up fees increase for two reasons: first, the maximum fee level was only just high enough to support the courses offered, thus rendering lower fee levels financially unfeasible; second, most institutions were unwilling to risk their degrees being viewed as ‘cut price’, when at fee levels between £0 and £3,500 demand was estimated to be relatively inelastic (something borne out by the lack of any change in demand when fees were increased from around £1,200 to £3,000 per annum).

However, given that demand is expected to be affected by pricing at fee levels of over £5,000 (according to a report commissioned by UUK in 2009), it is likely that many of the HEIs in the competitive middle to bottom end of the market (i.e. those that ‘recruit’ rather than ‘select’ students) will compete on price, particularly in light of the fact that many of these institutions have cost bases that would enable profits to be made even at relatively low fee levels.

Russell Group institutions, where costs are high due to low student-staff ratios and higher general resource levels, may not find this route as appealing, though, which suggests a two-tier system: pre-92 institutions and some of the more highly regarded post-92s may charge the full amount, and will be the premium branded products (the Apple iPhones, if you like, of the HE market), whilst the majority of ex-polytechnics and colleges of HE will compete on price, to a greater or lesser extent (much as the more generic handset manufacturers do in the mobile telephony market).

Should this scenario play out, it will be interesting to see who the winners and losers are, as consumers adjust to what almost amounts to a free market. My guess is that those with high cost bases and positions towards the bottom of the league tables will suffer the most from true differential pricing.

Are Friends Electric?

With all of the fuss about electric cars lately, one could be forgiven for reaching two apparently obvious conclusions: first, that they’re a new phenomenon, and second, that they’re a panacea for the world’s purported environmental issues.

In reality, though, neither conclusion would be correct: electric cars were actually pre-eminent in the late 19th century, many years before the internal combustion engine was perfected (witness Camille Jenatzy’s breaking of the 100 km/h barrier in an electric car in 1899), and the notion that they are carbon neutral is laughable, given that the energy with which they’re powered has to be produced by some means – usually involving the burning of fossil fuels at power stations.

The latter point, together with the issues of range and ease of recharging, doesn’t seem to be troubling some of the world’s largest car manufacturers, though. Green sells – or so their marketing departments believe – and, despite its shortcomings, the electric car seems to be the perfect vehicle through which companies can tap into this lucrative new market.

It is tempting, then, as some commentators have done, to witness the widespread development of the electric car, and the improvements made over the last few years to battery life and vehicle performance, and assume that in twenty years’ time we’ll all be silently whirring our way through cities and the countryside; the noisy, agricultural roar of the internal combustion engine a thing of the past in all but the poorest countries.

I don’t believe, though, that things will work out quite like this, for a number of reasons. First, the infrastructure to enable rapid charging of electric vehicles at the roadside (in other words, the equivalent of fuel stations) would be hugely costly to introduce on a national basis, outside of major cities. Second, the drain on the national grid of a wholesale move to electric power would require the building of an inordinate number of traditional power stations (thereby eliminating any environmental gain), and/or the construction of large-scale renewable energy generators (politically sensitive and expensive). Third, the widespread acceptance of electric vehicles will take longer than a couple of decades to achieve, mainly due to consumer concerns over the necessary infrastructure being sufficient for their needs.

This is not to say that the electric car will be the damp squib it has been in previous decades; major cities, where the infrastructure can be made available in a cost-effective manner, and where emissions levels are of primary importance, may stipulate that all vehicles within their bounds are electric-powered, or may provide powerful disincentives to those who wish to drive their petrol or diesel-powered vehicles within the city. The upshot of this could be a two-tiered system of car ownership – electric cars for major city dwellers and more traditional vehicles for those in rural areas.

It is likely, then, that the type of car used for the majority of long journeys, and trips beyond the city limits, will bear some similarities to the popular cars of today. The internal combustion engine will still be present, but will have such features as direct injection and infinitely variable valve timing, and will be supplemented by highly efficient hybrid systems (think the high-power flywheel KERS developed by Williams F1, rather than the heavy, low-power batteries of the Prius). All of these technologies are currently available – they just need to be combined and mass produced in a cost-effective way.

The upshot would be, for example, a mid-sized hatch with a one-litre turbocharged engine producing 100 BHP, supplemented by an 80 BHP hybrid flywheel KERS system. With twenty years’ worth of R&D, fuel economy could easily reach 150+ MPG combined in such a car. The small size of the engine would also make packaging easier, and would help to achieve a low kerb weight, thereby aiding efficiency.
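As a back-of-the-envelope check on those figures, the sketch below simply adds up the combined output of such a car and works out its power-to-weight ratio. The kerb weight is my own assumption, chosen to match the sub-ton figure I mention below, and nothing here is a real manufacturer's specification.

    # Back-of-the-envelope figures for the hypothetical hatchback described
    # above. The power outputs come from the post; the kerb weight is my own
    # assumption, not a real specification.
    BHP_TO_KW = 0.7457

    engine_bhp = 100      # one-litre turbocharged engine
    kers_bhp = 80         # flywheel hybrid boost
    kerb_weight_kg = 950  # assumed sub-ton kerb weight

    combined_bhp = engine_bhp + kers_bhp
    print(f"Combined output: {combined_bhp} BHP ({combined_bhp * BHP_TO_KW:.0f} kW)")
    print(f"Power-to-weight: {combined_bhp / (kerb_weight_kg / 1000):.0f} BHP per tonne")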

The trend towards making bigger, heavier cars with each model iteration will, I think, be reversed over the next few years. NCAP test results may have become foremost in many consumers’ minds over the past decade, but with green issues taking over from safety as the cause du jour (or should that be décennie?), car manufacturers are likely to become more willing to promote the virtues of light weight (something that Lotus, of course, has been doing since the middle of the last century).

Expect to see the widespread use of bonded aluminium chassis (à la Elise, Evora and Jag XF/XK/XJ), at least on premium brands, and the average weight of a mid-sized hatchback to dip back to sub-ton levels within the next ten years.

And that can only be a good thing!

Back in the middle of the last decade, when balancing the UK’s books was a simple matter of ascertaining which part of the public sector would be allowed to be the most profligate in any given year, the Labour Government, devoid for once of significant worries about inflation, unemployment or public sector debt, became particularly concerned with the role that the country would take in the global economy in future years.

One result of this concern was the commissioning of a long-winded review of science and innovation policies, somewhat optimistically entitled ‘The Race to the Top’, which was headed up by failed scientist/successful grocer Lord Sainsbury of Turville.

Those of you with sufficiently long memories might recall that the track record of such reviews has been patchy at best (the Ryder Report, which created the lumbering, strike-ridden dinosaur that was British Leyland, springs to mind, for one), but this did not deter Sainsbury, who set about visualising a New Britain with the sort of missionary-like zeal that he had presumably once applied to reducing the margins of the farmers who supplied meat to his supermarkets.

He envisioned Britain’s future as lying not in the dark satanic mills of the industrial revolution, which had long since been replicated in cheap-labour-driven Asian countries, but in the creation of a knowledge-based economy focused on producing high value-added goods and services. And key to this economy were the organisations that deal with the acquisition and dissemination of knowledge – universities.

To be fair, the notion of making universities the cornerstone of the new knowledge-based Britain actually seemed fairly sensible; after all, the UK could no longer compete with countries like China and India at the low end of the market, so harnessing its world-class higher education institutions to drive innovation in high technology and other knowledge-based industries could be a way of creating a competitive advantage at the high end.

Unfortunately, Sainsbury’s predictive powers didn’t stretch as far as foreseeing the sub-prime crisis and the subsequent ballooning of public sector debt. His recommendations, which mainly involved the Government spending significant sums of money, now seem like relics from a more prosperous age, especially in light of the recent announcement by Lord Vader… sorry, Mandelson, that higher education would face a cut of over £440 million to its budget in 2010 and 2011.

Which brings me to the really interesting question – in the light of these proposed cutbacks (which, incidentally, are unlikely to be reversed should there be a change of government in May) what will happen to universities next, and where does this leave the vision of a knowledge-based high-tech Britain?

As I mentioned earlier, the UK has a world-class higher education sector, with four institutions ranked in the top ten globally, and over 50 in the top 600 (QS World Rankings 2009). Unlike its major competitor, the US, whose universities occupy the other six places in the top ten, the UK’s institutions are all public (bar one), all charge a tuition fee that is considerably less than the cost of course delivery, and all are thus very much dependent on the distribution of government funds for their continued existence.

This uneasy reliance on the public purse means that UK universities are not only vulnerable to changes in government funding, they are also susceptible to the whims of each administration, many of which conflict with the ideas and precepts that the higher education sector holds most dear.

So, with public funding being reduced significantly over the next three years, and with UK universities unable to increase the level of tuition fees (unlike Ivy League institutions, which can effectively charge whatever they think the market can sustain) some commentators are arguing that there is little hope that they can retain their position amongst the world’s elite.

I don’t agree with this view, however, for a number of reasons.

First, all the political signs are pointing towards universities being granted an increase in the level of tuition fees that they can charge. Although this is an issue that has been skilfully sidestepped by both of the major political parties (Labour, with Tory support, ensured that Lord Browne’s review of tuition fees commenced before the election – as was required by law – but that it would not actually report until after it), there seems little doubt that students will be forced to pay significantly more for their higher education in future years. Provided this doesn’t result in a decrease in student numbers (and recent research from OpinionPanel shows that demand for HE remains relatively price inelastic up to a fee level of around £6,000 per year), the net effect should be a balancing of the overall budgets.

Second, the top UK universities have always been extremely efficient at producing world-class research. I know this may sound bizarre, given that many of the competing international institutions are private organisations, unencumbered by the sort of time- and effort-sapping Victorian bureaucracy that is common amongst the UK’s Russell Group institutions, but the facts speak for themselves: four UK universities placed in the top five internationally in the 2009 QS rankings, despite the fact that their income is considerably lower than that of their immediate rivals. The University of Cambridge, for example, has an annual income of less than half that of Yale, yet it has been ranked above its US competitor in three of the last four years. Thus, even if incomes are reduced, unless these reductions are of cataclysmic proportions (which would be political suicide anyway), higher education institutions should still be able to compete on the world stage.

Clearly then, provided sensible choices are made, the UK’s university sector will not become the sinking ship that some have suggested. This does not mean, however, that the vision of a knowledge-based Britain can also be saved: maintaining world-class universities is one thing – ensuring that this translates into meaningful results for the economy is quite another. Or, to be clearer: the outcomes of the best higher education research can quite easily have extremely limited economic impact.

Of course, there are ways to assess the likely impact of research, and distribute funding accordingly, but any such system will suffer not only from resistance from the academic community, but also from the fundamental problem that it’s extremely difficult to predict how wholly theoretical research undertaken today may be applied to practical issues in the future. Quantum physics, for example, could have been accused at any point prior to the last fifteen years of being a navel-gazing subject that provides little benefit to mankind, other than the imposition of an extra layer of complexity upon our understanding of the world, yet today it has found a supremely practical application in the newly developing field of quantum computing. Numerous other examples of research initially thought to be purely theoretical eventually yielding beneficial results are littered throughout history. All would have been far more difficult to achieve had the scientists originally involved been forced into conducting only applied research.

The real challenge, then, for the UK, is to generate the necessary economic impact from its universities’ research, without stifling areas of development that may appear fiscally worthless but have substantial benefits at some future date. How successful the Government is with this will determine whether the UK moves towards being a provider of high value-added goods and services or continues to have a more mixed economy that is susceptible to competition from the Far East.

It wasn’t much more than 10 years ago that Apple was a washed-up computer company, whose disastrous forays into PDAs and other consumer-targeted devices had led to it being comprehensively outmanoeuvred in the marketplace by a rapidly expanding Microsoft. Nowadays, whilst the reverse situation is not quite true, it is certainly the case that the release of a new Apple product garners significantly more column inches in the press than anything announced from its Redmond-based competitor, which seems always to be metaphorically tarred with the ‘dull but worthy’ brush.

And so it was last Wednesday, when Apple unveiled its latest product. In the months leading up to the launch, chat rooms and forums had been buzzing with rumours about exactly what this product might be. In fact, such was the level of anticipation, one could be forgiven for thinking that Apple CEO, Steve Jobs, was about to announce that he’d discovered the final resting place of the Ark of the Covenant.

Unfortunately, he hadn’t; what we actually got was a tablet PC, named the iPad (which must have taken the Apple branding department, ooh… minutes to think up). And whilst it certainly looked sleek and user-friendly, it didn’t, at least on first appearances, have any paradigm-shifting features or functionality.

But, then again, neither did the iPod, or, dare I say it, the iPhone; Apple’s success in recent years has been built less on pure innovation than on integrating existing features in a single device, and offering a well-executed means of adding content, be it media or software. In this respect the iPad is no different: it provides wi-fi, 3G (in the more expensive models), accelerometers and a large colour display, and, most importantly, will be backed up with a fully featured online media store from which users can download books, magazines and, presumably, applications.

I said ‘most importantly’ there because content distribution is the area in which Apple has made the biggest impact in changing the way consumers purchase and use media. One only needs to look at the 8.5 billion songs downloaded from iTunes since its launch, or the billion-plus applications downloaded to iPhones and iPod touches, to realise that what the iPad really represents is the opportunity for the publishing industry to radically re-invent its distribution and sales mechanisms.

Which leads me, in what you may think was a rather long-winded way, to the future of paper and the printed medium in general.

The death knell of the printed word has been sounding from some quarters almost since the first web browser was made commercially available. Online publishing was the way forward, we were told back at the end of the last millennium. The printed word had no chance of competing with a method of distribution where the marginal costs were virtually zero. Web versions of major magazines and newspapers began to spring up on a daily basis. Paper purists wept.

Ten years on, however, the Sunday Times still gives paperboys hernias, and no WH Smith store is complete without a row of middle-aged men staring blankly at the pages of car magazines. Sure, there have been some major closures – The Face and Melody Maker spring to mind – but there has not been the print apocalypse that was forecast. In fact, remarkably little has really changed.

This, I believe, is not a consequence of some deep-seated human need to connect with the physical medium of ink on paper, but is related to the technical limitations of the online medium. Websites are great for short articles, for videos, interactivity and nuggets of information, but no-one is going to use a desktop or laptop to read a book, or even a magazine, in a website format. They may, however, use a thin, light e-book reader to read specially produced e-books and e-magazines, especially if the content is readily available, and at a cost that is below that of the physical version. This is where the iPad comes in.

Now, at this point, some of you may be wondering why I haven’t mentioned the plethora of existing e-book readers in the marketplace, all of which, with the exception of Amazon’s Kindle and the very recent lookalikes from Sony et al, have been disappointing flops. Surely, you may argue, if e-books are so great, why have sales of these e-book readers been so poor?

The answer is that until now e-readers have lacked a simple, cohesive platform for the distribution of content, and have not offered sufficient features to make the expense of buying one seem worthwhile. The Kindle has been more successful than its predecessors, mainly because it is fully integrated with Amazon’s own e-book system, but it still lacks a solid selling point, unless you are someone who buys books in sufficient quantities for the lower price of e-books to offset the cost of the Kindle itself.

The difference with the iPad is not that it’s cheaper than the Kindle, or other e-readers – in fact the reverse is actually true. No – the real reason why the iPad will turn the e-reader market on its head, and change the way we buy and consume the printed word, is its desirability and adaptability. Buy a Kindle and you have a dull beige box that looks like it was designed by the same people who designed IBM PCs in the 1980s (i.e. people who wear short-sleeved shirts with ties). Buy an iPad and you have a perfectly executed piece of post-modern industrial design. The Kindle is a one-trick pony. The iPad can play games, sense motion in the x, y and z planes, and can almost be used in place of a laptop, if necessary. Combine this with Apple’s content distribution system, which is the market leader in the music and application download fields, and it becomes clear that the iPad may even sell in larger volumes than the iPod or iPhone. It has the ‘want one’ factor missing from previous e-readers, and its multi-functionality will provide the average consumer with greater justification for purchasing it than the single-use devices from Amazon, Sony and other competitors.

I’m not saying that Apple will have the market entirely to itself over the next few years; it is likely that other companies will follow suit, in the same way that Android-based mobile phones have been developed to compete with iPhones in the last year or so. What I do believe, though, is that Apple will have the lion’s share of the market, and that any sales by competitors will only serve to increase the size of the total market, rather than steal customers from Apple, in much the same way that Ferrari and Lamborghini sales rose significantly in the last decade despite an increase in the number of companies manufacturing supercars.

The implications of this for the print and media industry are clear – if the growth in sales of online magazines and books for the iPad mirrors that displayed over the last five years by music downloaded from iTunes, then they will need to radically re-define their distribution and business models.

Let’s look at the figures in a bit more detail to back up this assertion. Sales of downloaded music, which were virtually zero prior to 2004, grew exponentially after the release of the iPod, and are now predicted to reach $4.3 billion by 2012 (http://www.forrester.com/rb/Research/end_of_music_industry_as_we_know/q/id/43759/t/2) – a figure that is greater than the forecast sales for CDs. iTunes has a share of around 70% of the global market, and it seems likely that this will remain constant, or increase, between now and 2012. In other words, within the next two years, downloading will become the most popular method for consumers to purchase music, and iTunes will be the dominant retailer from which these purchases are made.

The print market is lagging around six years behind the music market, but if we apply these figures to books and magazines, we can hypothesise that a 50/50 print/download split could be reached as soon as 2016. In fact, provided uptake of the iPad is as high as I’ve predicted, we could see such a shift even sooner, as printed books and magazines have much higher unit costs of production than CDs, and there is thus a stronger financial imperative for publishers to promote the higher-margin (yet, hopefully, cheaper to the consumer) e-versions.
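To make that extrapolation explicit, here is a minimal sketch of the ‘print lags music by about six years’ reasoning. The starting digital share and the annual growth multiplier are invented assumptions, chosen loosely to echo the early iTunes years; they are not figures from any published forecast.

    # A minimal sketch of the extrapolation above: start with a small digital
    # share of book/magazine sales and apply an assumed annual growth multiplier
    # until digital and print reach parity. All inputs are invented assumptions.
    share = 0.02             # assumed digital share of sales in 2010
    annual_multiplier = 1.9  # assumed year-on-year growth in that share

    year = 2010
    while share < 0.5:
        year += 1
        share = min(1.0, share * annual_multiplier)
        print(f"{year}: digital share of roughly {share:.0%}")

    print(f"Parity (50/50) reached around {year}")

With these particular assumptions the crossover lands around 2016; a faster (or slower) multiplier would obviously pull that date in or push it out, which is rather the point.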

Whether this happens is almost entirely in your hands.

The Inverted Pyramid

I thought I’d kick off my futurology blog with a look at my own theory of the inverted pyramid, which examines the underlying assumptions made in any body of knowledge and asks whether they are sufficiently stable to support the structure above. We’ll consider the applications for futurology in a little while, but before we proceed any further, a little more explanation of exactly what I mean is required.

As with most explanations, it is probably easier to demonstrate by example than to delineate the theory in abstract terms, so I’ll start by looking at how the inverted pyramid applies to a practical subject – namely that most contentious of issues, religion – and, more specifically, Christianity.

With over 2000 years of history behind it, and with innumerable subtly (and in some cases not so subtly) different denominations, Christianity comprises an extraordinarily broad mix of pseudo-historical facts, articles of faith and much-debated interpretations of scripture. Yet whilst the notion of, say, original sin, or the transubstantiation of the Mass, may have resulted in centuries of disagreement between the religion’s sects, the body of knowledge over which they both agree and disagree is, without exception, predicated on a single premise – that God exists. Remove the premise – that which is represented as the inverted point of the pyramid that forms the foundations – and the entire body of knowledge (that represented by the pyramid itself), with all of its attendant mythology and debate, crumbles.

Whilst there have been many attempts to prove the existence of God over the years – ontological and teleological arguments among them (the latter resurrected in recent years by the intelligent design brigade) – none is what we could call a proof in the empirical or logical sense of the word. It is, thus, a thin and not entirely robust premise that is upholding the inverted pyramid of the religion’s belief system. Of course, Christianity is not alone in this; all of the world’s major religions are susceptible to the same flaw in their reasoning.

One could conclude, then, that if a body of knowledge predicated on a single premise is to be robust enough to undergo rigorous scrutiny and still maintain its form, then that premise needs to be exceptionally stable; one that, despite its solitary nature, is strong enough to support the broadening weight above it. The problem, of course, is that an inverted pyramid is inherently unstable, and no matter how strong the initial premise is, the spreading weight above will almost always result in an eventual collapse.

You may, by now, have started to see how this theory can be applied to futurology. The futurologist is concerned, more than anything, with the strength of the atoms of fact gleaned from the present, from which he or she will extrapolate future scenarios. Should the futurologist rely too heavily on too small a number of premises, or choose ones that are insufficiently robust, then the likelihood of their future scenarios being accurate is heavily reduced.

Let’s take a look, then, at the inverted pyramid in action, in the realm of that great exponent of futurology – science fiction – and, more specifically, the 1970s UK TV series, Space: 1999. It is, perhaps, a little misleading to choose such a programme to illustrate my point, given that it was produced purely as entertainment and not as a means of providing a serious vision of mankind’s future development, but I have decided to include it simply because I can think of no other example that demonstrates so succinctly the link between a weak initial premise and poor predictions of future scenarios.

As you would expect, Space: 1999 is indeed set in space, and the year really is 1999. And whilst 1999 in the actual world was the year of the dotcom explosion, the millennium bug and the iBook, in the fictional world of Space: 1999 mankind has established a permanent base and a nuclear waste dump on the moon, and computers are multi-coloured flashing objects with tiny monochrome screens. Had the writers and producers been given marks for the accuracy of their depiction of a world only 25 years forward in time from their own, it is probable that an Iceland-in-the-Eurovision-Song-Contest-style nul points would have been the resounding chorus.

It is important to remember, though, that Space: 1999 was produced at a time when manned lunar landings were still fresh in the collective memory, and when space travel seemed to many to be the ‘next big leap for mankind’. Given this background, it is easy to see why the show might extrapolate, from the success of the Apollo missions, a world, a quarter of a century in the future, in which mankind had begun to colonise the moon. Where the show’s creators went wrong was in not testing this assumption; had they done so, they would have understood that the costs and physical requirements of such a venture would have precluded it from happening, certainly at any point before the middle of the following century. With this key premise removed, much of the imagined world of the series – the inverted pyramid itself – collapses.

So, where does this leave the futurologist? The answer is: with a need to build his or her future scenarios on multiple, tested and robust assumptions. In other words, we must ensure not only that the foundations of our pyramid are stable, but also that it is supported at as many points as possible – the pyramid should, in fact, more closely resemble a square. In reality, this means examining each assumption carefully, then cross-checking the effects of each assumption against the others. Assumptions can be assigned a score based on their robustness, and only the highest-scoring ones, provided that they are sufficient in number and agreement, can be used as the founding premises of our future scenarios.
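As a minimal sketch of how that scoring process might work, consider the following; the scoring scale, the threshold and the example assumptions are all hypothetical, and the scores themselves would still come from the futurologist’s own judgement.

    # A minimal sketch of the assumption-scoring idea described above.
    # The scoring scale, the threshold and the example assumptions are all
    # hypothetical; the scores would come from the futurologist's judgement.
    from dataclasses import dataclass

    @dataclass
    class Assumption:
        statement: str
        robustness: int  # 1 (shaky) to 5 (well-evidenced)

    assumptions = [
        Assumption("Manned spaceflight budgets will keep growing at Apollo-era rates", 1),
        Assumption("Battery energy density will improve incrementally, not exponentially", 4),
        Assumption("Demand for degrees stays price inelastic below a given fee level", 3),
    ]

    THRESHOLD = 3  # only assumptions scoring at least this much may support a scenario
    foundations = [a for a in assumptions if a.robustness >= THRESHOLD]

    if len(foundations) >= 2:
        print("Scenario rests on multiple tested premises:")
        for a in foundations:
            print(f" - {a.statement} (score {a.robustness})")
    else:
        print("Too few robust premises - the inverted pyramid will topple.")

The filtering itself is trivial, of course; the interesting work lies in justifying the scores and in checking that the surviving premises actually agree with one another.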

We’ll examine some examples of this technique in operation in a future posting.

JP