A disruptive technology is one that most people do not see coming and yet, within a very short period, it changes everything. A disruptive technology becomes the dominant technology. They are rarely predicted accurately, because predictions are generally extrapolated from existing understanding. For example, there’s no doubt that the invention of the motor car was a disruptive technology, but Henry Ford is often quoted as saying, “If we had asked the public what they wanted, they would have said faster horses.”
It’s almost impossible to predict what will become a disruptive technology (although the likelihood of being wrong isn’t going to stop me), but they are very easily recognized in hindsight. Living in Los Angeles, the effect Mr Ford’s invention has had on this society is obvious. Although some would argue that it wasn’t so much the invention of the motor car that made the difference as the assembly-line technique that made the motor vehicle (relatively) affordable.
In fact, I think it’s reasonable to believe that a disruptive technology will have a democratizing component: a lowering (or removal) of price barriers.
Non-linear editing on the computer – Avid’s innovation – was a disruptive technology, but initially only within a relatively small community of high-end film and episodic television editors. The truly disruptive technology was DV. DV over FireWire – starting with Sony’s VX-1000 and Charles McConathy/Promax’s efforts to make it work with the Adobe Premiere of the day – paved the way for what we now call “The DV Revolution”.
Apple were able to capitalize on their serendipitous purchase of Final Cut from Macromedia, dropping the work that had been done to make Final Cut work with Targa real-time cards and concentrating on FireWire/DV support. (It was two further releases before we saw the same level of real-time effects support as was present in the Macromedia NAB 98 alpha preview.) I think, at the time, Apple saw Final Cut Pro as another way of selling new G3s with built-in FireWire. The grand plan of Pro Apps came about when the initial success of Final Cut Pro showed the potential. But that’s another blog post.
DV/FireWire was good enough at a much lower price, with all the convenience of single-wire connection. We’ve grown from an industry of under 100,000 people worldwide involved in professional production and post-production to one almost certainly over 1 million worldwide.
Disruptive technologies usually involve a confluence of technologies at the right time. Lower-cost editing software wouldn’t have been that disruptive without lower-cost acquisition to feed it. Both would have been pointless without sufficient computer power to run the software adequately. Final Cut Pro on an Apple IIe wouldn’t have been very productive!
In a larger sense, DV/FireWire was part of a larger disruption affecting the computer industry – the transition from hardware-based to software-based video. We are, in fact, already through this transition with digital video, although the success of AJA and Blackmagic Design might suggest otherwise. The big difference now is that the software is designed to do its job with hardware as the accessory. Back in the days of Media 100’s success, Media 100 would not run without the hardware installed; in fact, without the hardware it was pretty useless, as everything went through the card. Then, when they rebuilt the application for OS X, they developed it as (essentially) software-only. This paved the way to the current HD version (based on a completely different card) and a software-only version.
Ultimately, all tasks will be done in software, with hardware needed only to convert from format to format. In fact, much of the role of today’s hardware is that of format interface rather than the basis for the NLE, as it was in the days of Media 100, Avid ABVB/Meridien and even Cinéwave. Today’s hardware takes some load off the CPU, but as an adjunct to the software, not because the task couldn’t be done without the hardware. This has contributed to the “DV Revolution” by dramatically dropping hardware prices.
Disruptive technologies are hard to predict because they are disruptive. Any attempt to predict disruptive technologies is almost certainly doomed to failure, but like I said, that’s not going to stop me now!
We are headed for a disruptive change in the distribution of media, both audio and video content. I wish I could see clearly how this is going to shake out, so I could invest “wisely”, but it’s still too early to predict exactly what the outcome will be 5-7 years down the track. I feel strongly that it will include RSS with enclosures, in some form. It will have aspects of TiVo/DVR/PVR, where the content’s delivery and consumption will be asynchronous. Apart from news and weather, is there any need for real-time delivery as long as the content is available when it’s ready to be consumed? Delivery will, for the most part, be via broadband connections using the Internet Protocol.
There is a growing trend toward merging the “computer” and “the TV”, whether by creating media-center computers, by adding Internet-connected set-top boxes (like cable boxes) or by delivering video content direct to regular computers. Microsoft’s Media Center PCs haven’t exactly set the world on fire outside the college dorm, where they fit a real niche; Apple are clearly moving, slowly, toward some media-centric functions in the iLife suite (it will be interesting to see what’s announced at MacWorld San Francisco in January); and there are developments like Participatory Culture’s DTV and Ant is not TV’s FireANT for delivering channels of content directly to the computer screen. Both DTV and FireANT base their channels on RSS with enclosures, just as audio podcasting does.
On the hardware box front, companies like Akimbo, Brightcove and DaveTV are putting Internet-connected boxes under the TV, although DaveTV is having a bet both ways with computer software or set-top box.
Whether or not any of these nascent technologies is “the future of media” their developers tout, whichever way this shakes out has important implications for our industry. No-one foresaw that the Master Antenna and Community Antenna systems of the 1950s would evolve into today’s dominant distribution networks – the cable channels, which have now (collectively) moved ahead of the four major networks in total viewership. The advent of cable distribution opened up hundreds, or thousands, of new production opportunities for content creators. This time many people foresee (or hope) that using the Internet for program distribution will take down the last wall separating content creators from their audience.
In the days of four networks, any program idea had better aim to appeal to 20-30% of the available audience – young, middle-aged or old – to have any chance of success. In an age where the “family” sat down to watch TV together (and even ate meals together) that was a reasonable thing to attempt. As society fragmented, we discovered that there were viable niches in the expanded cable networks. Programs that would never have been made for network TV – because of their (relatively) small audiences, or because the content was not acceptable under the regulations governing the networks – have been artistically and/or financially successful. The development of niche channels for niche markets parallels the fragmentation of society as a whole into smaller demographic units.
Will we see, or do we need, more channels? Is 500 channels, and nothing on, going to be better when it’s 5,000 channels? Probably, because in among the 5,000 (or 50,000) channels will be content that I care enough about to watch. It won’t be current news, that’s still best done with real-time broadcasting, but for other content, why not have it delivered to my “box” (whatever takes this role) ready to be watched (on whatever device I choose to watch it)? (Some devices will be better suited to certain types of content: a “video iPod” device would be better suited to short video pieces than multi-hour movies, for example.)
If the example of audio podcasting is anything to go by – and with just one year of history to date it’s probably a little hard to be definitive – then yes, subscriber-chosen content delivered “whenever” for consumption on my own schedule is compelling. I’ve replaced the car radio’s news-talk station with podcasts from my iPod mini: content I want to listen to, available when I’m ready to listen. Podcasts have replaced my consumption of radio almost completely.
Ultimately it will come down to content. Will the 5,000 or 50,000 channels be filled with something I want to watch? Sure, subscribing to the “Final Cut Pro User Group” channel is probably more appealing than (for me) many of the channels available on my satellite system. Right now, video podcasts tend to be of the “don’t criticize what the dog has to say, marvel that the dog can talk” variety. Like a lot of podcasts. Not every one of the more-than-10,000 podcasts now listed in the iTunes directory is compelling content or competently produced.
But before we can start taking advantage of new distribution channels for more than niche applications, we need to see some common standards come to the various platforms, so that channels will be discovered on Akimbo, DaveTV, DTV and on a Media Center PC. About the only part of this prediction I feel relatively sure of is that it will involve RSS with audio and video enclosures, or a technology derived from RSS, like Atom (although RSS 2 seems to have the edge right now).
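To make the mechanism concrete, here is a minimal sketch of how a client might pull enclosures out of an RSS 2.0 feed. The feed contents are hypothetical, and real aggregators handle far more (HTTP fetching, GUIDs, caching, bandwidth management), but the `<enclosure>` element – a URL, a byte length and a MIME type attached to each item – is the core of how podcasting, DTV and FireANT all describe their channels. This uses only Python’s standard library:

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical RSS 2.0 feed: each <item> carries an <enclosure>
# element pointing at a media file, which is what a podcast/video client
# subscribes to and downloads asynchronously.
FEED = """\
<rss version="2.0">
  <channel>
    <title>Example Video Channel</title>
    <item>
      <title>Episode 1</title>
      <enclosure url="http://example.com/ep1.mov"
                 length="12345678" type="video/quicktime"/>
    </item>
  </channel>
</rss>"""

def enclosures(feed_xml):
    """Return (item title, media URL, MIME type) for every item
    in the feed that carries an enclosure."""
    root = ET.fromstring(feed_xml)
    results = []
    for item in root.iter("item"):
        enc = item.find("enclosure")
        if enc is not None:
            results.append((item.findtext("title"),
                            enc.get("url"), enc.get("type")))
    return results

print(enclosures(FEED))
```

A subscriber application would re-fetch the feed periodically, download any new enclosure URLs in the background, and present the finished files for viewing on whatever device the user chooses – which is exactly the asynchronous, TiVo-like delivery described above.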
In the best-case scenario, we’ll have many more distribution channels, aggregating niche markets into big-enough audiences for profitable content (particularly with lower-cost production tools now in place). A direct producer-customer connection, without the intermediation of network or cable-channel aggregators, improves the profit potential of popular content and possibly moves content into different distribution paths. The worst-case scenario is that nothing much changes, and Akimbo, DaveTV, Brightcove, and Apple’s or Microsoft’s media-centric computers go the way of the Apple Lisa – paving the way for the real “next big thing”.
Are blogs, podcasting and podcast networks a precursor to video blogcasting? If so, what would that mean for production and post-production businesses? Channels of information, subscriptions, and a value-add – direct to customers.
With the news today that Matrox has announced a dual-link PCI graphics card designed to power dual-link monitors like Apple’s 30″ Cinema Display, I was once again prompted to ask why there are no workstation-class cards for OS X. The Parhelia is a good graphics card, but not a workstation-class card; even so, the nearest equivalents for OS X do not have the complement of output options that the Parhelia does. Pity there are no Mac drivers for it.
But it raises the wider question of why high-end graphics cards, like 3D Labs’ Wildcat Realizm, aren’t available for the Mac. With increasing demand from applications like Motion – and, in the very near future, Core Video and Core Image in OS X 10.4 Tiger – Mac users need the power of these graphics cards to get the most out of their applications.
Of course, ATI, NVIDIA and Apple tend to point fingers at each other, although to the best of my understanding the hold-up is in the drivers, and apparently Apple write the drivers for OS X. Perhaps there’s a great push to get these cards into Macs when Tiger ships – we can only hope so. In the absence of hard information, I vote that we in the post-production industry let Apple know that we want these cards supported, so we can have better performance from Avid Adrenaline on OS X, Apple’s Motion, anything Core Video-based coming up (NAB is only 12 weeks away), Boris Blue, Combustion and more.
Until we see that support, true power-graphics users still have good reasons to go with Windows.