Posts Tagged ‘futurology’

Back in the middle of the last decade, when balancing the UK’s books was a simple matter of ascertaining which part of the public sector would be allowed to be the most profligate in any given year, the Labour Government, devoid for once of significant worries about inflation, unemployment or public sector debt, became particularly concerned with the role that the country would take in the global economy in future years.

One result of this concern was the commissioning of a long-winded review of science and innovation policies, somewhat optimistically entitled ‘The Race to the Top’, which was headed up by failed scientist/successful grocer Lord Sainsbury of Turville.

Those of you with sufficiently long memories might recall that the track record of such reviews has been patchy at best (the Ryder Report, which created the lumbering, strike-ridden dinosaur that was British Leyland, springs to mind, for one), but this did not deter Sainsbury, who set about visualising a New Britain with the sort of missionary-like zeal that he had presumably once applied to reducing the margins of the farmers who supplied meat to his supermarkets.

He envisioned Britain’s future as lying not in the dark satanic mills of the industrial revolution, which had long since been replicated in cheap-labour-driven Asian countries, but in the creation of a knowledge-based economy focused on producing high value-added goods and services. And key to this economy were the organisations that deal with the acquisition and dissemination of knowledge – universities.

To be fair, the notion of making universities the cornerstone of the new knowledge-based Britain actually seemed fairly sensible; after all, the UK could no longer compete with countries like China and India at the low end of the market, so harnessing its world-class higher education institutions to drive innovation in high technology and other knowledge-based industries could be a way of creating a competitive advantage at the high end.

Unfortunately, Sainsbury’s predictive powers didn’t stretch as far as foreseeing the sub-prime crisis and the subsequent ballooning of public sector debt. His recommendations, which mainly involved the Government spending significant sums of money, now seem like relics from a more prosperous age, especially in light of the recent announcement by Lord Vader… sorry, Mandelson, that higher education would face a cut of over £440 million to its budget in 2010 and 2011.

Which brings me to the really interesting question – in the light of these proposed cutbacks (which, incidentally, are unlikely to be reversed should there be a change of government in May), what will happen to universities next, and where does this leave the vision of a knowledge-based, high-tech Britain?

As I mentioned earlier, the UK has a world-class higher education sector, with four institutions ranked in the top ten globally, and over 50 in the top 600 (QS World Rankings 2009). Unlike its major competitor, the US, whose universities occupy the other six places in the top ten, the UK’s institutions are all public (bar one), all charge a tuition fee that is considerably less than the cost of course delivery, and all are thus very much dependent on the distribution of government funds for their continued existence.

This uneasy reliance on the public purse means that UK universities are not only vulnerable to changes in government funding but also susceptible to the whims of each administration, many of which conflict with the ideas and precepts that the higher education sector holds most dear.

So, with public funding being reduced significantly over the next three years, and with UK universities unable to increase the level of tuition fees (unlike Ivy League institutions, which can effectively charge whatever they think the market can sustain), some commentators are arguing that there is little hope that they can retain their position amongst the world’s elite.

I don’t agree with this view, however, for a number of reasons.

First, all the political signs point towards universities being granted an increase in the level of tuition fees that they can charge. Although this is an issue that has been skilfully sidestepped by both of the major political parties (Labour, with Tory support, ensured that Lord Browne’s review of tuition fees commenced before the election – as required by law – but was scheduled to report only after it), there seems little doubt that students will be forced to pay significantly more for their higher education in future years. Provided this doesn’t result in a decrease in student numbers (and recent research from OpinionPanel shows that demand for HE remains relatively price inelastic up to a fee level of around £6,000 per year), the net effect should be a balancing of the overall budgets.
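To put rough numbers on that balancing act, here is a minimal back-of-the-envelope sketch. Only the £440 million cut and the £6,000 inelasticity threshold come from the discussion above; the number of fee-paying students is an illustrative assumption.

```python
# A back-of-the-envelope sketch of the fee argument above. Only the £440m
# cut and the ~£6,000 inelasticity threshold come from the text; the
# student count is an illustrative assumption.

BUDGET_CUT = 440_000_000   # announced cut to the HE budget (from the text)
CURRENT_FEE = 3_225        # 2009/10 tuition fee cap
PROPOSED_FEE = 6_000       # fee level below which demand stays inelastic
STUDENTS = 1_000_000       # assumed number of fee-paying undergraduates

def net_effect(students: int, old_fee: int, new_fee: int, cut: int) -> int:
    """Extra fee income minus the funding cut (positive = budgets balance)."""
    return students * (new_fee - old_fee) - cut

surplus = net_effect(STUDENTS, CURRENT_FEE, PROPOSED_FEE, BUDGET_CUT)
print(f"Net effect of the fee rise: £{surplus:,}")  # £2,335,000,000 here
```

Even with a far smaller student base, or a fee rise capped well below £6,000, the arithmetic suggests the cut is recoverable, which is the crux of this first argument.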

Second, the top UK universities have always been extremely efficient at producing world-class research. I know this may sound bizarre, given that many of the competing international institutions are private organisations, unencumbered by the sort of time- and effort-sapping Victorian bureaucracy that is common amongst the UK’s Russell Group institutions, but the facts speak for themselves: four UK universities placed in the top five internationally in the 2009 QS rankings, despite the fact that their income is considerably lower than that of their immediate rivals. The University of Cambridge, for example, has an annual income of less than half that of Yale, yet it has been ranked above its US competitor in three of the last four years. Thus, even if incomes are reduced, unless these reductions are of cataclysmic proportions (which would be political suicide anyway), higher education institutions should still be able to compete on the world stage.

Clearly, then, provided sensible choices are made, the UK’s university sector will not become the sinking ship that some have suggested. This does not mean, however, that the vision of a knowledge-based Britain can also be saved: maintaining world-class universities is one thing – ensuring that this translates into meaningful results for the economy is quite another. Or, to be clearer: the outcomes of the best higher education research can quite easily have extremely limited economic impact.

Of course, there are ways to assess the likely impact of research, and to distribute funding accordingly, but any such system will suffer not only from resistance from the academic community, but also from the fundamental problem that it is extremely difficult to predict how wholly theoretical research undertaken today may be applied to practical issues in the future. Quantum physics, for example, could at any point prior to the last fifteen years have been accused of being a navel-gazing subject that provides little benefit to mankind, other than the imposition of an extra layer of complexity upon our understanding of the world, yet today it has found a supremely practical application in the newly developing field of quantum computing. History is littered with other examples of research initially thought to be purely theoretical eventually yielding beneficial results; all would have been far more difficult to achieve had the scientists originally involved been forced into conducting only applied research.

The real challenge for the UK, then, is to generate the necessary economic impact from its universities’ research without stifling areas of development that may appear fiscally worthless now but could yield substantial benefits at some future date. How successful the Government is at this will determine whether the UK moves towards being a provider of high value-added goods and services or continues to have a more mixed economy that is susceptible to competition from the Far East.


It wasn’t much more than ten years ago that Apple was a washed-up computer company, whose disastrous forays into PDAs and other consumer-targeted devices had led to it being comprehensively outmanoeuvred in the marketplace by a rapidly expanding Microsoft. Nowadays, whilst the reverse situation is not quite true, it is certainly the case that the release of a new Apple product garners significantly more column inches in the press than anything announced by its Redmond-based competitor, which always seems to be metaphorically tarred with the ‘dull but worthy’ brush.

And so it was last Wednesday, when Apple unveiled its latest product. In the months leading up to the launch, chat rooms and forums had been buzzing with rumours about exactly what this product might be. In fact, such was the level of anticipation, one could be forgiven for thinking that Apple’s CEO, Steve Jobs, was about to announce that he’d discovered the final resting place of the Ark of the Covenant.

Unfortunately, he hadn’t; what we actually got was a tablet PC, named the iPad (which must have taken the Apple branding department, ooh… minutes to think up). And whilst it certainly looked sleek and user friendly, it didn’t, at least on first appearances, have any paradigm-shifting features or functionality.

But, then again, neither did the iPod, or, dare I say it, the iPhone; Apple’s success in recent years has been built less on pure innovation than on integrating existing features in a single device, and offering a well-executed means of adding content, be it media or software. In this respect the iPad is no different: it provides Wi-Fi, 3G (in the more expensive models), accelerometers and a large colour display, and, most importantly, will be backed up with a fully featured online media store from which users can download books, magazines and, presumably, applications.

I said ‘most importantly’ there because content distribution is the area in which Apple has had the biggest impact in changing the way consumers purchase and use media. One only needs to look at the 8.5 billion songs downloaded from iTunes since its launch, or the billion-plus applications downloaded to iPhones and iPod Touches, to realise that what the iPad really represents is the opportunity for the publishing industry to radically re-invent its distribution and sales mechanisms.

Which leads me, in what you may think was a rather long-winded way, to the future of paper and the printed medium in general.

The death knell of the printed word has been sounding from some quarters almost since the first web browser was made commercially available. Online publishing was the way forward, we were told back at the end of the last millennium. The printed word had no chance of competing with a method of distribution where the marginal costs were virtually zero. Web versions of major magazines and newspapers began to spring up on a daily basis. Paper purists wept.

Ten years on, however, the Sunday Times still gives paperboys hernias, and no WH Smith store is complete without a row of middle-aged men staring blankly at the pages of car magazines. Sure, there have been some major closures – The Face and Melody Maker spring to mind – but there has not been the print apocalypse that was forecast. In fact, remarkably little has really changed.

This, I believe, is not a consequence of some deep-seated human need to connect with the physical medium of ink on paper, but is related to the technical limitations of the online medium. Websites are great for short articles, for videos, interactivity and nuggets of information, but no-one is going to use a desktop or laptop to read a book or even a magazine in a website format. They may, however, use a thin, light e-book reader to read specially produced e-books and e-magazines, especially if the content is readily available at a cost below that of the physical version. This is where the iPad comes in.

Now, at this point, some of you may be wondering why I haven’t mentioned the plethora of existing e-book readers in the marketplace, all of which, with the exception of Amazon’s Kindle and the very recent lookalikes from Sony et al, have been disappointing flops. Surely, you may argue, if e-books are so great, why have sales of these e-book readers been so poor?

The answer is that until now e-readers have lacked a simple, cohesive platform for the distribution of content, and have not offered sufficient features to make the expense of buying one seem worthwhile. The Kindle has been more successful than its predecessors, mainly because it is fully integrated with Amazon’s own e-book system, but it still lacks a solid selling point, unless you are someone who buys books in sufficient quantities for the lower price of e-books to offset the cost of the Kindle itself.
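That break-even point is easy to make concrete. A rough sketch follows; all three prices below are illustrative assumptions rather than quoted figures.

```python
import math

# Rough break-even arithmetic for the point above: how many books must you
# buy before the e-book discount pays for the reader itself? All prices
# are illustrative assumptions.

READER_PRICE = 259.00   # assumed price of a Kindle (USD)
PRINT_PRICE = 15.00     # assumed average price of a new print book
EBOOK_PRICE = 9.99      # assumed typical e-book price

saving_per_book = PRINT_PRICE - EBOOK_PRICE
books_to_break_even = math.ceil(READER_PRICE / saving_per_book)
print(f"Break-even after {books_to_break_even} books")  # 52 with these numbers
```

At fifty-odd books, only the heaviest readers recoup the outlay quickly – which is exactly the ‘sufficient quantities’ caveat above.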

The difference with the iPad is not that it’s cheaper than the Kindle, or other e-readers – in fact the reverse is actually true. No – the real reason why the iPad will turn the e-reader market on its head, and change the way we buy and consume the printed word, is its desirability and adaptability. Buy a Kindle and you have a dull beige box that looks like it was designed by the same people who designed IBM PCs in the 1980s (i.e. people who wear short-sleeved shirts with ties). Buy an iPad and you have a perfectly executed piece of post-modern industrial design. The Kindle is a one-trick pony. The iPad can play games, sense motion in the x, y and z planes, and can almost be used in place of a laptop, if necessary. Combine this with Apple’s content distribution system, which is the market leader in the music and application download fields, and it becomes clear that the iPad may even sell in larger volumes than the iPod or iPhone. It has the ‘want one’ factor missing from previous e-readers, and its multi-functionality will give the average consumer greater justification for purchasing it than the single-use devices from Amazon, Sony and other competitors.

I’m not saying that Apple will have the market entirely to itself over the next few years; it is likely that other companies will follow suit, in the same way that Android-based mobile phones have been developed to compete with iPhones in the last year or so. What I do believe, though, is that Apple will have the lion’s share of the market, and that any sales by competitors will only serve to increase the size of the total market, rather than steal customers from Apple, in much the same way that Ferrari and Lamborghini sales rose significantly in the last decade despite an increase in the number of companies manufacturing supercars.

The implications of this for the print and media industry are clear – if the growth in sales of online magazines and books for the iPad mirrors that displayed over the last five years by music downloaded from iTunes, then the industry will need to radically re-define its distribution and business models.

Let’s look at the figures in a bit more detail to back up this assertion. Sales of downloaded music, which were virtually zero prior to 2004, grew exponentially after the release of the iPod, and are now predicted to reach $4.3 billion by 2012 (http://www.forrester.com/rb/Research/end_of_music_industry_as_we_know/q/id/43759/t/2) – a figure that is greater than the forecast sales for CDs. iTunes has a share of around 70% of the global market, and it seems likely that this will remain constant, or increase, between now and 2012. In other words, within the next two years, downloading will become the most popular method for consumers to purchase music, and iTunes will be the dominant retailer from which these purchases are made.
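The crossover claim can be sanity-checked with a simple compound-growth extrapolation. In the sketch below, only the $4.3 billion 2012 forecast comes from the Forrester link above; the 2009 baselines and the annual rates are assumptions chosen for illustration.

```python
# Extrapolating download vs CD revenue until the lines cross. The 2009
# figures and annual rates below are illustrative assumptions; only the
# ~$4.3bn 2012 download forecast is from the Forrester report cited above.

download_sales = 3.0    # assumed global download revenue in 2009 ($bn)
cd_sales = 5.5          # assumed global CD revenue in 2009 ($bn)
DOWNLOAD_GROWTH = 0.13  # assumed annual growth in downloads
CD_DECLINE = 0.15       # assumed annual decline in CDs

year = 2009
while download_sales < cd_sales:
    year += 1
    download_sales *= 1 + DOWNLOAD_GROWTH
    cd_sales *= 1 - CD_DECLINE

print(f"Downloads overtake CDs in {year}: "
      f"${download_sales:.1f}bn vs ${cd_sales:.1f}bn")
# -> Downloads overtake CDs in 2012: $4.3bn vs $3.4bn
```

Under these assumptions the crossover lands in 2012, consistent with the forecast quoted above.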

The print market is lagging around six years behind the music market, but if we apply these figures to books and magazines, we can hypothesise that 50/50 print/download sales could be reached as soon as 2016. In fact, provided uptake of the iPad is as high as I’ve predicted, we could see such a shift even sooner, as printed books and magazines have much higher unit costs of production than CDs, and there is thus a stronger financial imperative for publishers to promote the higher-margin (yet, hopefully, cheaper to the consumer) e-versions.
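To make the lag argument explicit, the sketch below simply replays the music market’s adoption timeline six years later, and shows how the 2016 figure can emerge once the stronger financial imperative is factored in. The lag and the acceleration factor are assumptions taken from, or implied by, the paragraph above.

```python
# Replaying the music market's adoption curve for print, six years later.
# The lag and the acceleration factor are assumptions from the argument
# above, not measured values.

MUSIC_SHIFT_START = 2004  # downloads take off (from the text)
MUSIC_PARITY = 2012       # downloads overtake CDs (forecast above)
PRINT_LAG = 6             # print trails music by ~6 years (from the text)

years_to_parity = MUSIC_PARITY - MUSIC_SHIFT_START  # 8 years for music
print_shift_start = MUSIC_SHIFT_START + PRINT_LAG   # 2010, the iPad launch

# A straight replay of the music curve puts print parity in 2018:
print("Straight replay:", print_shift_start + years_to_parity)

# If higher print unit costs push publishers to move ~25% faster (an
# assumed acceleration), parity arrives around 2016:
print("Accelerated:", print_shift_start + round(years_to_parity * 0.75))
```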

Whether this happens is almost entirely in your hands.


I thought I’d kick off my futurology blog with a look at my own theory of the inverted pyramid, which examines the underlying assumptions made in any body of knowledge and asks whether they are sufficiently stable to support the structure above. We’ll consider the applications for futurology in a little while, but before we proceed any further a little more explanation of exactly what I mean is required.

As with most explanations, it is probably easier to demonstrate by example than to delineate the theory in abstract terms, so I’ll start by looking at how the inverted pyramid applies to a practical subject – namely that most contentious of issues, religion – and, more specifically, Christianity.

With over 2000 years of history behind it, and with innumerable subtly (and in some cases, not so subtly) different denominations, Christianity comprises an extraordinarily broad mix of pseudo-historical facts, articles of faith and much-debated interpretations of scripture. Yet whilst the notion of, say, original sin, or the transubstantiation of the mass, may have resulted in centuries of disagreement between the religion’s sects, the body of knowledge over which they both agree and disagree is, without exception, predicated on a single premise – that God exists. Remove the premise – the inverted point on which the pyramid balances – and the entire body of knowledge (the pyramid itself), with all of its attendant mythology and debate, crumbles.

Whilst there have been many attempts to prove the existence of God over the years – ontological and teleological arguments among them (the latter resurrected in recent years by the intelligent design brigade) – none is what we could call a proof in the empirical or logical sense of the word. It is thus a thin and not entirely robust premise that is upholding the inverted pyramid of the religion’s belief system. Of course, Christianity is not alone in this; all of the world’s major religions are susceptible to the same flaw in their reasoning.

One could conclude, then, that if a body of knowledge predicated on a single premise is to be robust enough to undergo rigorous scrutiny and still maintain its form, then that premise needs to be exceptionally stable: one that, despite its solitary nature, is strong enough to support the broadening weight above it. The problem, of course, is that an inverted pyramid is inherently unstable, and no matter how strong the initial premise is, the spreading weight above will almost always result in an eventual collapse.
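The structure of the argument can be captured in a few lines of code: model the body of knowledge as a dependency graph in which every claim ultimately rests on one foundational premise. The claims below are invented purely for illustration.

```python
# A toy model of the inverted pyramid: every claim ultimately rests,
# directly or indirectly, on a single foundational premise. The claims
# and dependencies are invented for illustration.

supports = {
    "original_sin": ["god_exists"],
    "transubstantiation": ["god_exists"],
    "doctrinal_debate": ["original_sin", "transubstantiation"],
}

def still_standing(claim: str, accepted: set[str]) -> bool:
    """A claim stands only if every claim it rests on ultimately stands."""
    deps = supports.get(claim)
    if deps is None:                  # a foundational premise
        return claim in accepted
    return all(still_standing(d, accepted) for d in deps)

print(still_standing("doctrinal_debate", {"god_exists"}))  # True
print(still_standing("doctrinal_debate", set()))           # False: collapse
```

Remove the single entry at the base and everything above it returns False at once – the collapse described above.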

You may, by now, have started to see how this theory can be applied to futurology. The futurologist is concerned, more than anything, with the strength of the atoms of fact gleaned from the present, from which he or she will extrapolate future scenarios. Should the futurologist rely too heavily on too small a number of premises, or choose ones that are insufficiently robust, then the likelihood of their future scenarios being accurate is heavily reduced.

Let’s take a look, then, at the inverted pyramid in action, in the realm of that great exponent of futurology – science fiction – and, more specifically, the 1970s UK TV series, Space: 1999. It is, perhaps, a little misleading to choose such a programme to illustrate my point, given that it was produced purely as entertainment and not as a means of providing a serious vision of mankind’s future development, but I have decided to include it simply because I can think of no other example that demonstrates so succinctly the link between a weak initial premise and poor predictions of future scenarios.

As you would expect, Space: 1999 is indeed set in space, and the year really is 1999. And whilst 1999 in the actual world was the year of the dotcom explosion, the millennium bug and the iBook, in the fictional world of Space: 1999, mankind has established a permanent base and a nuclear waste dump on the moon, and computers are multi-coloured flashing objects with tiny monochrome screens. Had the writers and producers been given marks for the accuracy of their depiction of a world only 25 years forward in time from their own, it is probable that an Iceland-in-the-Eurovision-Song-Contest-style nul points would have been the resounding chorus.

It is important to remember, though, that Space: 1999 was produced at a time when manned lunar landings were still fresh in the collective memory, and when space travel seemed to many to be the ‘next big leap for mankind’. Given this background, it is easy to see why the show might extrapolate from the success of the Apollo missions a world, a quarter of a century in the future, in which mankind had begun to colonise the moon. Where the show’s creators went wrong was in not testing this assumption; had they done so, they would have understood that the costs and physical requirements of such a venture would have precluded it from happening, certainly at any point before the middle of the following century. With this key premise removed, much of the imagined world of the series – the inverted pyramid itself – collapses.

So, where does this leave the futurologist? The answer is: with a need to build his or her future scenarios on multiple, tested and robust assumptions. In other words, we must ensure not only that the foundations of our pyramid are stable, but that it is supported at as many points as possible – the structure should come to resemble a square far more than an inverted pyramid. In practice, this means examining each assumption carefully, then cross-checking the effects of each assumption against the others. Assumptions can be assigned a score based on their robustness, and only the highest-scoring ones, provided that they are sufficient in number and agreement, can be used as the founding premises of our future scenarios.
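A minimal sketch of that vetting step might look like the following. The robustness scores, the threshold, the minimum premise count and the pairwise agreement check are all illustrative assumptions, not a fixed method.

```python
from itertools import combinations

# Vetting candidate premises: score each for robustness, drop the weaker
# of any conflicting pair, and require enough survivors to build on.
# Scores, threshold and conflicts here are illustrative assumptions.

assumptions = {
    "manned_spaceflight_costs_fall_rapidly": 0.2,  # the Space: 1999 mistake
    "computing_power_keeps_growing": 0.9,
    "demand_for_content_is_price_sensitive": 0.8,
    "public_funding_remains_flat": 0.6,
}

conflicts = {("computing_power_keeps_growing", "public_funding_remains_flat")}

THRESHOLD = 0.7    # only high-scoring premises survive
MIN_PREMISES = 2   # a single premise is an unstable inverted point

def founding_premises(scores, conflicts, threshold, minimum):
    """Keep robust assumptions, resolve conflicts, insist on enough left."""
    robust = {a for a, s in scores.items() if s >= threshold}
    for a, b in combinations(sorted(robust), 2):
        if (a, b) in conflicts or (b, a) in conflicts:
            robust.discard(min((a, b), key=scores.get))
    return robust if len(robust) >= minimum else None

premises = founding_premises(assumptions, conflicts, THRESHOLD, MIN_PREMISES)
print(premises or "Not enough robust, agreeing premises - widen the base.")
```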

We’ll examine some examples of this technique in operation in a future posting.

JP
