Category Archives: The Technology of Production

How Hollywood killed the movie stunt

An interesting article that is really more about changes in editing style than it is about stunts in movies – including early “movies” that were essentially a shot of a stunt.

I ask because in looking at that image of the stuntman diving into the Hudson, and running through a mental checklist of my favorite movie stunts, I realized that almost none of them occurred in films released during the last 10 years.

What’s the significance of that time frame? Well, for one thing, it’s the approximate start of the Digital Era of cinema — the point where video started to replace film and practical effects (meaning effects that were created in order to be photographed just like any other physical object) started being subsumed by computer-generated effects. And for another (and this is surely related) the late ’90s/early aughts marks the point when classical or “old-fashioned” editing — which dictated that every cut should be dramatically and aesthetically justified — was supplanted by what the film theorist David Bordwell calls the “intensified continuity” or “run and gun” style. The latter seeks to excite viewers by keeping them perpetually unsettled with computer-enhanced images, fast cutting and a camera that never stands still.

If you’re an editor, writer or producer, you should read this.

Novacut – open source video editor

Novacut – open source video editor built for collaboration.

Novacut is still a work in progress and looking for people to become involved. It’s clearly a worthy effort to support. I remain somewhat skeptical that open source projects can achieve the finish and polish of commercial software, but the model does usually produce tools that are competent and free. And that’s no bad thing.

Adobe Premiere Pro CS5 “saves” an interview.

One of the reasons my direct posting here has been light lately is that we’ve been working on a small documentary, partly for the exercise but mostly to a) have demo material for prEdit that isn’t 10 years old and b) prove to myself that prEdit is indeed a great new workflow for documentary editing. Plus, a documentary gets made that preserves the memories of the early ’60s drag racing community.

Inevitably, one of the tapes ends up with breaks every few seconds. Final Cut Pro always, always breaks HDV into individual clips at these points, regardless of your settings, so parts of this interview were simply lost. So I tried capturing in Premiere Pro CS5. I was a little surprised to have to preview on the camera (not inside Premiere Pro CS5), but the capture happens and the entire interview is captured in one piece with no dropped frames.

I’m composing hymns to Premiere Pro’s greatness until I try an export. (All captured media is being converted to ProRes 422 for the master and editing formats.) Adobe Media Encoder crashes when it hits one of the glitches that tripped up Final Cut Pro. Rinse and repeat, and we’re not getting an export. Even an attempt at playback causes Premiere Pro to disappear.

Well, no export of picture anyway, but AME will export the audio by itself without a problem. So, while it’s not perfect, I now have that important interview (and the one we travelled furthest to get) with about 99% of the audio intact, laid up with what video I have, and I’ll be able to use the interview in the doc.

So, thanks to its ability to capture all my HDV material in one piece, Premiere Pro CS5 at least got me usable material.

What about the iPad and Media Production?

On October 31 last year Edo Segal wrote an article on TechCrunch with the title For The Future Of The Media Industry, Look In The App Store. The article is definitely worth a read but this jumped out at me:

But the entertainment industry has a vested interest in the success of this new type of convergence, as within it lies the secret to its continuing prosperity. The only way to block the incredible ease of pirating any content a media company can generate is to couple said experiences with extensions that live in the cloud and enhance that experience for consumers. Not just for some fancy DRM but for real value creation. They must begin to create a product that is not simply a static digital file that can be easily copied and distributed, but rather view media as a dynamic “application” with extensions via the web. This is the future evolution of the media industry.

It brings together some of the thinking I’ve been doing on how to counter the loss of revenue, from direct consumption or from advertising, when digital files of programming and music are so easily shared and copied. Techdirt like to summarize their approach as CwF + RtB = financial success: Connect with Fans and give them a Reason to Buy some scarce goods. Many musicians are already doing this and the results are summarized in the article The Future of Music Business Models (and those who are already there).

I agree that CwF + RtB is part of the future: we can’t charge for infinitely distributable digital goods, but we can charge for scarce goods (or services) promoted by the music.

But I’m not as sure that will work in the same way for the “television” business, which I define as “television-style programming professionally produced”, even if it’s never broadcast on a network or on cable. Certainly it will be possible to sell merchandise around programming, and everyone is encouraged to do that.

I’ve also written and presented (as long ago as my Nov 2006 keynote presentation for the Academy of Television Arts & Sciences) that producers and viewers have to be more connected, even to the extent of allowing fan contributions.

Well, last night I had something of an epiphany that brought together Edo Segal’s thoughts and my own as I contemplated the implications of the recently announced Apple iPad.

As a brief aside, I find the iPad to be pretty much exactly what I was expecting (although I thought there might be a webcam for video chat) and interesting. Although I don’t see where it would fit in an iPhone/laptop world, I can see plenty of uses, particularly for media consumption. (For example, a family shares an iMac but each of the older children has their own iPad for general computing, only using the iMac for essays etc.)

But the iPad doesn’t really lend itself to static media consumption as it has been: where the producer sends stories fully finished and complete to viewers who passively consume. That’s when the import of Edo’s comment struck: there is more of a future in media consumption for those producers who create the whole environment. Many movies and shows have done something like this, but usually as consumption of information about the show rather than a rich interactive experience where the fans of the show are as important as the producers.

The future of independent production and media consumption is an immersive environment (a website or, better yet, an iPad app) with:

  • Content
  • Community (forums, competitions)
  • Access to the wider story, side stories or “back story” in various media formats
  • Character blogs
  • Cast and crew blogs
  • Fan contributions and remixes.

Such an experience would be almost a cross between a typical television program and a video game environment. Sure, programming is part of what can be consumed on the site; but there are also competitions, games and back stories, plus additional visual material edited out of the program source, with additional shooting, using technologies like Assisted Editing.

Any unauthorized distribution of content will only be distributing the content, not the experience of the program in its full glory.

Now, there’s no particular reason why this couldn’t be largely done on a website, but it is as an immersive iPad app that I think it will really be fantastic. The iPad is very immersive and tactile. It presents no “border” (i.e. browser window and other computer screen elements) to distract from the programming. And it begs to be interacted with, because holding it in place to watch a 22- or 44-minute show doesn’t look like it’s going to be all that great.

There’s one more selling point for the iPad: it allows in-app sales, so some of the “reasons to buy” can be sold very transparently without even leaving the app’s environment. Avatars, screen savers, certain games or activities might carry a small charge. Yes, even the media itself (or some of it) could carry a small transaction charge. Smooth, frictionless sales in an environment optimized to engage people in the story of the show.

Apple’s iTunes LP format is a very small start in this direction, building a micro-site around the album artwork. It is very powerful because it packs most modern web technologies and interactive features into a tight package (all, by the way, without Flash, though it looks a lot like Flash).

Edo has some further good ideas and I recommend reading the article at the top of this post.

What can some kids do with a “green screen” kit for Christmas?

On the Yahoo-based Final Cut Pro list MarkB posted this just a few minutes ago:

Gave my kids (14 & 16) a green screen kit from Cowboy Studio for Xmas. The older one does a sports video blog, the younger one shoots and edits it (Canon HV20 camera, Final Cut Express, 4-year old iMac).
They used their new chroma key trickery today. I helped them set up the green screen, gave the younger one a 5-minute lesson in how to use my DV Garage plugin, then stayed out of it except for a tip or two. This is what a 14-year old kid can do first time:

Watch the video, or at least the first couple of minutes. (I’m not that into football/soccer so it doesn’t mean much to me.) But look at the work. Not only is the 16-year-old good talent, but the way it’s put together is damned nice too. (In this style of presentation I’ll overlook my long-standing distrust of jump cuts and live with the fact that it’s become an acceptable style: heck, in this example I think it works fine.)

Mark mentions that the keyer they used was DV Matte Pro, which I’ve also had a lot of success with: I used it on A Musical Journey with Richard Sherman for the 40th and 45th Anniversary Edition Mary Poppins DVDs. After testing all the keyers available, that’s what gave us the best results (although it has a different approach to fine-tuning edges than most keyers, which threw me at first).

I’ve long argued that we have to constantly be improving our skills, because those coming up behind us are starting with far better craft and technical skills than we did. We have to keep learning to keep up, and to make sure our experience and people skills stay a whole lot better.

But it does make you wonder what these guys will do with their Christmas “green screen kit” if ever they discover 3D. (I suspect that would be the 14 year old’s realm.) Does easy, accessible keying technology really change production forever?

What were the technical trends at NAB 2009?

There certainly wasn’t much new in NLE at NAB. Avid had already announced, Apple are keeping to their own schedule that apparently doesn’t include NAB (although Apple folk were in town) and Adobe have a 4.1 update coming for Premiere Pro CS4. The only new NLE version was Sony’s Vegas, which moves up to version 9. With, of course, RED support. Can’t forget the RED support – it was everywhere (again).

Lenses for RED, native support, non-native support: everyone has something for RED, or Scarlet/Epic coming up. Lenses are already appearing for those not-yet-shipping cameras.

Even camera technology seemed to take a year off. I did become convinced of the format superiority (leaving aside lenses and convenience factors) of AVCCAM, a pro version of consumer AVCHD with higher bit rates. Because AVCHD’s H.264 compression is more efficient than MPEG-2, AVCCAM at 21 or 24 Mbit/sec should produce a much higher quality image than MPEG-2 at the same bitrate, and what I saw on the floor bears that out. Before this NAB I was only convinced “in theory”. Of course, choose the AVCCAM path and you’ll be transcoding on import to FCP or Avid to much larger ProRes or DNxHD files, which is an optional (and recommended) path for HDV or XDCAM HD/EX.
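The storage cost of that transcode step is easy to quantify. Here’s a rough sketch; the bitrates are nominal figures I’m assuming for illustration (ProRes 422 is variable bitrate, so real files will differ with content):

```python
# Rough storage-per-hour for a few nominal video bitrates (Mbit/sec).
# The ProRes 422 figure (~147 Mbit/sec for 1080i) is an assumption;
# actual ProRes rates vary with frame size and content.
FORMATS = {
    "HDV (MPEG-2)": 25,
    "AVCCAM (AVCHD)": 24,
    "ProRes 422": 147,
}

def gb_per_hour(mbit_per_sec):
    """Convert a bitrate in Mbit/sec to decimal GB per hour of footage."""
    return mbit_per_sec * 3600 / 8 / 1000

for name, rate in FORMATS.items():
    print(f"{name}: {gb_per_hour(rate):.1f} GB/hour")
```

So an hour of HDV at 25 Mbit/sec is about 11 GB, while the same hour transcoded to ProRes 422 is roughly six times larger, which is why storage costs (below) matter so much for this workflow.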

Everyone has a 3D story to tell. Panasonic promise 3D-all-the-way workflows “coming” and there were all sorts of tools on the floor for working with 3D, projecting 3D, viewing 3D…  As one of my friends quipped “The presentations were amazing. What’s more I took off my glasses and the 3D experience continued around me!”

I confess to being a little torn on 3D (and Twitter, but that’s another post). I’ve seen some really amazing footage, and some that simply tries too hard to be 3D. I also worry how we’ll adapt to sudden jumps in perspective as the 3D camera cuts to a different shot. I noticed a little of this when viewing an excerpt from the U2 3D concert film. There are natural analogs to cutting in 2D: in effect we build our view of the world from many quick closeups, so cutting in film and TV parallels that.

I can’t think of an analog for the sudden jumps in position in 3D space and perspective that would help our brains adapt. Maybe we’ll just adapt and I’m jumping at shadows? Who knows. I don’t plan on 3D soon.

Nor do I expect to see Flash supported on a TV in my home for at least a couple of years. That’s the problem Adobe faces in getting support for Flash on TVs and set-top boxes. For a start it will require a lot more horsepower than those boxes currently have, but Moore’s law will take care of that without a blink. A bigger problem is the slow turnover cycle of televisions. Say it’s 6 months before the first sets come out (and none are yet announced). It’s probably ten years before any particular provider can rely on the majority of sets being Flash-enabled. Assuming it catches on.
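That ten-year figure falls out of a simple replacement model. Assuming (my numbers, purely for illustration) households replace a TV every 10 to 12 years, the installed base turns over at roughly 8 to 10 percent a year, and even if every new set shipped were Flash-enabled from day one:

```python
import math

def years_to_penetration(target, annual_replacement_rate):
    """Years until `target` fraction of the installed base has been
    replaced, assuming a fraction `annual_replacement_rate` of the
    remaining old sets is replaced each year."""
    return math.log(1 - target) / math.log(1 - annual_replacement_rate)

# Time for half the installed base to turn over, at assumed rates:
for rate in (0.08, 0.10):
    print(f"{rate:.0%}/year replacement: "
          f"{years_to_penetration(0.5, rate):.1f} years to reach 50% of sets")
```

At those rates it takes 6.5 to 8.5 years just to reach half of the sets in homes, so "ten years before a provider can rely on the majority" is, if anything, optimistic.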

So I rather see that as a non-announcement. Remember, the cable industry already has its own Tru2way technology for interactivity on set-top boxes.

I am much more interested in Adobe’s new Strobe framework, even though it could take some business away from my own OpenTVNetwork.

For the geeky, my favorite new piece of technology for the show would have to be Blackmagic Design’s Ultrascope – an HD scope package, just add PC and monitor to the $695 hardware and software bundle for a true HD scope at an affordable price.

I’ve already given my opinions on the Blackmagic Design announcements, AJA announcements and Panasonic announcements during the show.

Two more trends this year: cheaper and better storage and voice and facial recognition technologies are becoming more widespread.

I am amazed at the way hardware-RAID protected systems have fallen in cost. Not only the drives themselves but the enclosures are getting to the point where it’s no longer cost-effective to build your own, certainly not if you want RAID 5 or 6.

Five years ago the only company demonstrating facial and speech recognition was Virage, which I didn’t see this year. But there are an increasing number of companies with speech recognition that seems, overall, about the same quality as that bundled with Adobe’s Premiere Pro and Soundbooth CS4: it can get reasonably high accuracy with well-paced, clean audio and no accent. Good enough for locating footage.

Facial recognition seems to be everywhere, from Google’s Picasa to news transcription services. Not only do these systems recognize cuts, they also recognize the people in the shots, prompting when a new face is detected.

How long before the metadata that powers First Cuts no longer has to be input by a person? That’s what really excited me about NAB 2009.

Why Final Cut Pro specific cameras?

At the MacWorld Final Cut Pro User Group SuperMeet on Wednesday night (Jan 7th), JVC announced two new ProHD camcorders: the compact (prosumer form factor) GY-HM100, with 1/4″ progressive 3-CCD sensors, and the shoulder-mounted GY-HM700, with 1/3″ sensors.

There are a couple of things that make these two cameras interesting. They record in a QuickTime-native format ready for native editing in FCP: no import, no conversion, just copy and edit (or edit straight off the card). They also use inexpensive, widely available SDHC memory cards instead of proprietary formats. Both cameras have two card slots and, with two 32 GB cards, can record up to 6 hours continuously for just a couple of hundred dollars.
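The six-hour figure checks out against the announced bitrates. A quick sketch (decimal gigabytes, video bitrate only; audio and container overhead are ignored, so real-world times will be slightly shorter):

```python
def record_hours(card_gb, mbit_per_sec):
    """Approximate continuous recording time on `card_gb` decimal
    gigabytes of storage at a video bitrate of `mbit_per_sec`.
    Ignores audio and container overhead."""
    return card_gb * 8000 / mbit_per_sec / 3600

# Two 32 GB SDHC cards at the announced ProHD bitrates:
for rate in (19, 25, 35):
    print(f"{rate} Mbit/sec: about {record_hours(64, rate):.1f} hours")
```

At 25 Mbit/sec that works out to a little under six hours on 64 GB, which squares with JVC’s “up to 6 hours” claim; at the top 35 Mbit/sec rate it’s closer to four.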

You can read up on the rest of the specs but there are three points I’d want to make.

As far as I know these are the first camcorders made specifically for Final Cut Pro. While there have been some earlier attempts to make Avid-friendly camcorders, they didn’t take off in the marketplace. Clearly JVC see Final Cut Pro as a big enough market, with at least 1 million unique registered users (and probably twice that if unauthorized copies are counted), to justify building a specific camcorder.

Secondly, increased use of non-proprietary memory cards. As well as these two new SDHC-based cameras, Panasonic’s AVCHD/AVCCAM HMC150 records to SDHC, and the Red One camera uses CompactFlash. These are multi-vendor, non-proprietary formats that are readily available up to 32 GB. Take that, P2 and SxS media! Of course, all these sources use compressed video of some format.

The third point is the most interesting one. JVC acknowledge that the cameras do 720P at 19 or 35 Mbit/sec, 1080i at 25 Mbit/sec (aka HDV) and 1080P at 35 Mbit/sec, using an “Enhanced MPEG2 Long GOP Encoder”. Traditionally ProHD has worked within the HDV specifications, but there is no 35 Mbit/sec spec for HDV, and certainly not one already supported by Final Cut Pro. It appears that JVC are using Sony’s XDCAM EX format, or something very like it, for these two new cameras.

This is not the first time JVC has worked with the Sony format. Back in September 2008 JVC announced support for XDCAM EX media, creating a 720P version licensed from Sony, which itself only supports 1080 in XDCAM EX.

Increasingly, Sony’s XDCAM EX format – at 35 Mbit/sec – is the grown up version of HDV.