Categories: 3D, Item of Interest

New technology allows users to “feel” 3D images.

New technology allows users to literally “feel” 3D images http://bit.ly/dvgjSd

Quite possibly the early days of the Star Trek ‘holodeck’:

The key piece of technology is a special touch sensor that sends feedback to the user’s hand, manipulating the sensation so that the hand feels as though it is touching the actual object being displayed. The 3D images themselves, though, can be projected on something as ordinary as a Samsung 3D TV.

It’s still a long way from visible light beings that have solid form, but I imagine the sensation is more real – even with goggles – than regular 3D.

Still, I can’t help thinking that the most likely initial use of this technology will be for baser pursuits.

Categories: Metadata, Video Technology

How Adobe ‘gets’ metadata workflows

Thanks to an upcoming piece of software we’re working on, I’ve been spending a lot of time in the CS5 workflow environment, particularly looking at metadata and the Story workflow, and I’ve come to the conclusion that we’ve been so blinded by the Mercury Engine’s performance that we might not have seen where Adobe is heading. And I like what I see.

Most readers will likely be aware of the Transcription ability introduced with CS4 and updated in CS5. Either in Soundbooth, or in batches via Adobe Media Encoder (AME) from Premiere Pro, the technology Adobe builds on from Autonomy will transcribe the spoken word into text. Our initial testing wasn’t that promising, but we realized we weren’t sending it any sort of fair test. With good quality audio the results are pretty good: not perfect but close, depending on the source, of course.

We first explored this early in the year when we built and released Transcriptize, to port that transcription metadata from the Adobe world across to Apple’s. That’s what set us down our current path to the unreleased software, but more on that in a later post.

Now that we’re back in that world, it’s a pretty amazing “story”. There are three ways they “get it”, as I see it:

  1. Good metadata tracking at the file level
  2. Flexible metadata handling
  3. Metadata-based workflows built into the CS applications (and beyond).

Balancing that is the serious miss of not showing source metadata from non-tape media that doesn’t fit into a pre-defined schema. At least that seems to be the case: I can’t find a Metadata Panel that displays the Source Metadata from P2, AVCCAM/AVCHD, or RED. Some of the source metadata is displayed in existing fields, but only in the fields that Adobe has built into Premiere Pro, which miss a lot of information from the source. For example, none of the exposure metadata from RED footage is displayed, nor the Latitude and Longitude from P2 and AVCCAM footage.

That’s the downside. To be fair, Final Cut Pro doesn’t display any of the Source Metadata either (although you can access it via the XML file). Media Composer can show all the Source Metadata if desired.

Good Metadata Tracking at the File Level

Apple added QuickTime metadata to Final Cut Pro 5.1.2, where they retain and track any Source Metadata from files imported via Log and Transfer. This is a flexible schema but definitely under-supported. Adobe’s alternative is XMP metadata. (Both XMP and QuickTime metadata can co-exist in most media file formats.)

XMP metadata is file-based, meaning it is stored in, and read from, the file. There are seven built-in categories, plus Speech Analysis, which is XMP metadata stored in the file (for most formats) but treated as a separate category in the Premiere Pro CS5 interface. I believe that the Source metadata should show in the XMP category because it is file-based, even if it’s not XMP.

On the plus side, XMP metadata is very flexible. You don’t need third-party applications to write to the XMP metadata: inside Premiere Pro CS5 you simply set up the schema you want and the data is written to the file transparently. If the data is in a file when it’s added to a project, it’s read into the project and immediately accessible.

This metadata travels with the file to any and all projects, which provides a great way of sending custom metadata between applications. Speech Analysis metadata is also carried in the file, so it can be read by any Adobe application (and an upcoming one from us, see the intro paragraph) direct from the file.
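
Because the metadata lives in the media file itself, you don’t even need an Adobe application to get at it. Here is a minimal sketch, assuming the free exiftool utility is installed and using “clip.mov” as a placeholder file name, that writes a keyword into a standard XMP field and then reads the file’s XMP back out; Adobe’s applications read and write the same packet natively.

```python
# Minimal sketch: round-trip XMP metadata in a media file via exiftool.
# Assumes exiftool is installed; "clip.mov" is a placeholder file name.
import json
import subprocess

CLIP = "clip.mov"  # hypothetical media file

# Write a keyword into the standard Dublin Core Subject bag (XMP-dc:Subject).
subprocess.run(
    ["exiftool", "-overwrite_original", "-XMP-dc:Subject+=interview", CLIP],
    check=True)

# Read the file's XMP fields back out as JSON and print them.
result = subprocess.run(
    ["exiftool", "-j", "-XMP:all", CLIP],
    capture_output=True, text=True, check=True)
for tag, value in json.loads(result.stdout)[0].items():
    print(f"{tag}: {value}")
```

The point isn’t the tool: it’s that the metadata survives in the file, so anything that understands XMP can read it.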

Flexible Metadata Handling

Not only is the XMP file-based metadata incredibly flexible, but you can also apply any metadata scheme to a clip within a Premiere Pro project, right into Clip metadata. For an example of how this is useful, let’s consider what we had to do in Final Cut Pro for First Cuts. Since Final Cut Pro doesn’t have a flexible metadata format, we had to co-opt Master Comments 1-4 and Comment A to carry our metadata. In Premiere Pro CS5 we could simply create new Clip-based fields for Story Keywords, Name, Place, Event or Theme and B-roll search keywords.

(Unfortunately this level of customization in Premiere Pro CS5 does not extend to Final Cut Pro XML import or export.)

An infinitely flexible metadata scheme for clips and for media files (independently) is exactly what I’d want an application to provide.
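
To make that concrete, here is a sketch of what such a custom clip-level schema might look like rendered as part of an XMP/RDF packet. The namespace URI and property names are invented for illustration; they are not Adobe’s actual schema definitions.

```python
# Sketch only: an invented custom clip-level schema for fields like those
# mentioned above. The namespace URI and property names are hypothetical,
# not Adobe's actual schema definitions.
CUSTOM_NS = "http://example.com/ns/storyclip/1.0/"   # invented namespace
PREFIX = "sc"                                         # invented prefix

clip_fields = {
    "StoryKeywords": "growing up; Austria; skiing",
    "Name": "Tim Draxl",
    "Place": "Sydney",
    "EventOrTheme": "Growing Up",
    "BRollKeywords": "mountains; snow",
}

def to_rdf_description(fields: dict) -> str:
    """Render the fields as an rdf:Description block for an XMP packet."""
    props = "\n".join(
        f"      <{PREFIX}:{name}>{value}</{PREFIX}:{name}>"
        for name, value in fields.items()
    )
    return (
        f'    <rdf:Description rdf:about="" xmlns:{PREFIX}="{CUSTOM_NS}">\n'
        f"{props}\n"
        f"    </rdf:Description>"
    )

print(to_rdf_description(clip_fields))
```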

Metadata-based Workflows in the CS5 Applications

To my chagrin, I only recently discovered how deeply metadata-based workflows have become embedded in the Adobe toolset. (Thanks to Jacob Rosenberg’s demonstration at the June Editor’s Lounge for turning me on to this.) Adobe have crafted a great workflow for scripted productions that goes like this:

  1. Collaboratively write your script in Adobe Story, or import a script from most formats, including Final Draft. (Story is a web application.)
    • Adobe Story parses the script into Scenes and Shots automatically.
  2. Export from Adobe Story to a desktop file that is imported into OnLocation during shooting.
    • In OnLocation you have access to all the clips generated out of the Adobe Story file. Clips can be duplicated for multiple takes.
    • Clips are named after Scene/Shot/Take.
  3. During shooting you do not need to have a connection to the camera because some genius at Adobe realized that metadata could solve that problem. All that needs to be done during shooting of any given shot/take is for a time stamp to be marked against the Clip:
    • i.e. this clip was being taken “now”.
    • Marking a time stamp is a simple button press with the clip selected.
  4. After footage has been shot, the OnLocation project is “pointed” at the media, where it automatically matches each shot with the appropriate media file by comparing the time stamp metadata in the media file with the time mark in the OnLocation Clip (see the sketch after this list).
    • The media file is renamed to match the clip. Ready for import to Premiere Pro CS5.
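
To illustrate just the matching idea, here is a toy sketch (invented data, not Adobe’s implementation): pair each time mark with the media file that was actually recording at that moment, using the start time stamp and duration read from the file’s own metadata.

```python
# Toy sketch of time-stamp matching: pair each logged shot/take time mark
# with the media file that was recording at that moment. This is NOT
# Adobe's code; all names and times below are invented for illustration.
from datetime import datetime, timedelta

# Time marks pressed in OnLocation during the shoot (clip name -> when).
clip_marks = {
    "Sc01_Sh02_Tk01": datetime(2010, 7, 14, 10, 32, 5),
    "Sc01_Sh02_Tk02": datetime(2010, 7, 14, 10, 36, 40),
}

# Media files with the start time stamp and duration read from each file.
media_files = [
    ("A001C003.mov", datetime(2010, 7, 14, 10, 31, 50), timedelta(seconds=70)),
    ("A001C004.mov", datetime(2010, 7, 14, 10, 36, 30), timedelta(seconds=55)),
]

def matching_file(mark: datetime) -> str | None:
    """Return the media file that was recording when the mark was pressed."""
    for name, start, duration in media_files:
        if start <= mark <= start + duration:
            return name
    return None

for clip_name, mark in clip_marks.items():
    # The matched file would then be renamed after the clip, e.g. Sc01_Sh02_Tk01.mov
    print(f"{clip_name} -> {matching_file(mark)}")
```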

Now here’s the genius part, in my opinion (other than using the time stamp to link clips): the script from Adobe Story has been embedded in those OnLocation clips and travels with them. Once Speech Analysis is complete for each clip, the script is laid up against the analyzed media file so that each word is time stamped. The advantage of this workflow over importing a guide script directly is that the original formatting is retained when the script comes via Story.

All that needs to be done is to build the sequence based on the script, with the advantage that every clip is now searchable by word. Close to, but not quite, Avid’s ScriptSync, which is based on an entirely different technology (Nexidia).
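
As a rough illustration of why per-word time stamps matter (the data below is invented; the real Speech Analysis results are carried in the clips’ XMP), finding where a word was spoken becomes a simple index lookup:

```python
# Rough illustration: once every word carries a time stamp, finding where a
# word was said across clips is a simple index lookup. Data is invented.
from collections import defaultdict

# clip -> list of (word, seconds-from-start) pairs, as Speech Analysis might yield
speech_analysis = {
    "Sc01_Sh02_Tk01": [("growing", 1.2), ("up", 1.5), ("in", 1.7), ("austria", 2.0)],
    "Sc03_Sh01_Tk02": [("skiing", 0.8), ("with", 1.1), ("my", 1.3), ("father", 1.6)],
}

# Build a word -> [(clip, time)] index.
index = defaultdict(list)
for clip, words in speech_analysis.items():
    for word, seconds in words:
        index[word].append((clip, seconds))

# "Search every clip by word": jump straight to where it was spoken.
for clip, seconds in index["austria"]:
    print(f"'austria' spoken in {clip} at {seconds:.1f}s")
```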

It’s a great use of script and Speech Analysis, and a great use of time-stamp metadata to reduce the work of clip naming, linking and script embedding. A hint of the future of metadata-based workflows.

All we need now, Adobe, is access to all the Source Metadata.

Categories: Assisted Editing, Item of Interest

Letting the Machines Decide

Letting the Machines Decide http://bit.ly/aQwrUW (If you get stymied by the WSJ paywall, simply go to news.google.com, search for “Letting the Machines Decide” and follow the link to the WSJ to get around the paywall.)

I know, not my usual stuff, but I have an abiding interest in all forms of artificial intelligence because, ultimately, I believe we’ll be able to apply a lot of the techniques and technologies developed for AI to automating Postproduction. Heresy, I know, but bear with me.

Anything that can be analyzed and systematized can be automated. When we were developing First Cuts – our tool for taking long-form documentary log notes and converting them to very fast first cuts – the most challenging part of the exercise wasn’t teaching the computer to do something, it was analyzing what I did as an editor to make a “good” edit. Just imagine how complex the rules for placing b-roll are!

So, with that background, I believe that a lot of editing is not overtly creative (not you, of course, dear reader – your work is supremely creative – but those other folk, not so much!). It can be somewhat repetitive, with a lot of similarities.

Just like the complexities of stock trading: knowing when to buy, when to hold and when to sell.

The programs are effective, advocates say, because they can crunch huge amounts of data in short periods, “learn” what works, and adjust their strategies on the fly. In contrast, the typical quantitative approach may employ a single strategy or even a combination of strategies at once, but may not move between them or modify them based on what the program determines works best.

What I think is really interesting is that the software tools started to act contrary to what an experienced trader would do, but:

In early 2009, Star started to buy beaten-down stocks such as banks and insurers, which would benefit from a recovery. “He just loaded up on value stocks,” said Mr. Fleiss, referring to the AI program. The fund gained 41% in 2009, more than doubling the Dow’s 19% gain.

The firm’s current portfolio is largely defensive. One of its biggest positions is in gold stocks, according to people familiar with the fund.

The defensive move at first worried Mr. Fleiss, who had grown bullish. But it has proven a smart move so far. “I’ve learned not to question the AI,” he said.

And that’s what we discovered. One night – after a couple of glasses of red wine – we decided to throw a “stupid” combination of story keywords at First Cuts to see what it would do. Well, would you believe that in the six minute edit that eventuated, I only wanted to move one clip, and its associated b-roll, one shot earlier (swapping the two), and as far as I was concerned the edit was done.

Categories: Apple, Metadata, Video Technology

How serious is Apple about metadata?

During a recent thread here where I “infamously” suggested Apple should drop Log and Capture for the next version of FCP, one of the topics that came up was the use of metadata. Most commenters (all?) appeared – to my interpretation – to feel that reel name and TC were the “essence” of metadata.

And yet, if we look at the most recent work of Randy Ubillos, the Chief Video Architect (apparently for both pro and consumer applications), we see that Location metadata is a requirement for the application. According to Apple’s FAQ for iMovie for iPhone, this is what happens if you don’t allow iMovie for iPhone to access your location metadata:

Because photos and videos recorded on iPhone 4 include location information, you must tap OK to enable iMovie to access photos and videos in the Media Library.

If you do not allow iMovie to use your location data, then the app is unable to access photos and videos in the Media Browser.

You can still record media directly from the camera to the timeline but, without the Location metadata, you’re pretty much locked out of iMovie for iPhone for all practical purposes.

There is no location metadata from tape capture! There’s not much from non-tape media right now either, although some high-end Panasonic cameras have an optional GPS board. However, P2 media (both DVCPRO HD and AVC-I) as well as AVCCAM all have metadata slots for latitude and longitude.
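
For what it’s worth, where a camera has filled in those slots the coordinates are trivially readable straight from the file. A quick sketch, assuming exiftool is installed and using “clip.mxf” as a placeholder name; whether the tags exist at all depends on the camera:

```python
# Sketch: read latitude/longitude metadata from a clip, if the camera wrote it.
# Assumes exiftool is installed; "clip.mxf" is a placeholder file name.
import json
import subprocess

result = subprocess.run(
    ["exiftool", "-j", "-n", "-GPSLatitude", "-GPSLongitude", "clip.mxf"],
    capture_output=True, text=True, check=True)
tags = json.loads(result.stdout)[0]

lat = tags.get("GPSLatitude")
lon = tags.get("GPSLongitude")
if lat is not None and lon is not None:
    # An app like iMovie for iPhone could turn this into a lower third via
    # reverse geocoding; here we just print the raw coordinates.
    print(f"Shot at {lat}, {lon}")
else:
    print("No location metadata in this clip")
```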

Now, I’m NOT saying that Apple should force people to use metadata – particularly if it’s non-existent – and this type of restriction in a Pro app would be unconscionable. I merely point out that this shows the type of thinking within Apple. In iMovie for iPhone they can create a better user (consumer) experience because they use Location metadata for automatic lower-third locations in the themes.

Where I think it’s a little relevant is as a counterpoint to some of my commenters: building an app that relies on metadata is a very different app from one that relies on simple reel name and TC numbers.

Categories: Distribution, HTML5, Item of Interest

YouTube: HTML5 Video Is No Match for Flash (Yet)

YouTube: HTML5 Video Is No Match for Flash (Yet) http://bit.ly/d48tWW

Although YouTube has been encoding to H.264 since early 2007, most distribution is via their Flash player, though they do have an HTML5 player as well. The advantages of Flash for YouTube at the moment are:

  • Live Streaming (although almost nothing on YouTube is live streaming in that sense – it’s all progressive download). What Google means is control over buffering and dynamic quality of the files it serves up.
  • Content protection for the “Premium Content” demanded by the content owners, despite all forms of DRM being pointless (they don’t work) and annoying to the legitimate user.
  • Encapsulation and Embedding. Flash is definitely easier for that and has better security.
  • Fullscreen Video. Tick. HTML5 players (mostly MP4 players) do not do Fullscreen video. Not that I use it often, but it’s an important feature to have.
  • Access to Camera and Microphone for interactive experiences, something not yet possible in HTML5.

On the other hand, Hulu Plus kicks Hulu’s dependence on Flash for its iPad/iPhone application. (In fairness, you can do pretty much all of the above when you move from plug-in or native browser support to a custom application.)

Categories: Apple Pro Apps

Why Apple should drop Log and Capture from FCP

My friend Terry Curren and I get together for lunch periodically. Last time he was trying to convince me, among other things, that Apple will drop Log and Capture from the next version of Final Cut Pro. I resisted the idea until I realized that not only was he right, but that Apple should drop Log and Capture. Here’s why.

Tape is deadish now, will be more so in 2012.

After revising the HD Survival Handbook last year, I realized that HDV – and tape in general – was dead. HDV was the last tape format for acquisition, and it too is now (according to me) officially “dead”. (Not that it’s out of use, but that it’s unwise to invest further in the format.)

So, given that I have considered tape to be “dead” for a year, how dead will it be in another 18-24 months? Very dead.

Sure, there will be people who need to capture from tape and output to tape. Output is already handled by Blackmagic Design and AJA with utilities that ship with their hardware. Blackmagic Design’s version includes capture.

Rewriting Log and Capture will waste engineering resources that should go into an improved Log and Transfer.

If tape capture and output is a third-party opportunity (and both the Blackmagic Design and AJA utilities are better at accurate insert editing than FCP itself), then the engineering resources could go into improving Log and Transfer: speed and metadata support could be beefed up.

Dropping old technology and moving to new is in Apple’s DNA

Apple have dropped the floppy disk, ADB and a host of other technologies. In the iDevices, Apple have frequently used the latest and greatest technology, so it’s much less likely they’d invest the resources that would be necessary to rebuild Log and Capture.

So, I’m convinced: Log and Capture must go. Even though they have Cocoa code in the HDV version of Log and Capture, I can’t see the benefit when the vast majority of FCP users in 2012 won’t be capturing from tape, so it has to go. Leave an opportunity for third parties and move FCP into a newer, more modern future.

Updated: Matt has a point in the comments that I should have addressed: tape will be with us for quite a while and I made almost all the same arguments to Terry before becoming convinced I was wrong.

Besides, tape is dead according to this image from Chris Roberts of a Copenhagen shop window:

Tape could well be dead.

Categories: HTML5, Item of Interest

IE9 supports Canvas and hardware acceleration!

IE9 supports Canvas… hardware accelerated! http://bit.ly/cG20eG

Like I’ve said before, HTML5 is really a combination of the <video> tag, the <canvas> tag, JavaScript and CSS transforms. That IE9 will not only support the canvas tag but do it through hardware acceleration is great news for HTML5.

Like all of the graphics in IE9, canvas is hardware accelerated through Windows and the GPU. Hardware accelerated canvas support in IE9 illustrates the power of native HTML5 in a browser. We’ve rebuilt the browser to use the power of your whole PC to browse the web. These extensive changes to IE9 mean websites can now take advantage of all the hardware innovation in the PC industry.

Categories: Assisted Editing, Item of Interest, Metadata, The Technology of Production

I’ve just uploaded some computer-edited videos to YouTube

As well as showing the software in action, this series of videos shows the results it produces. Each “edit” is based on a set of story keywords (logged with the clips) and a duration. Lower thirds are automatic; story arc is automatic; b-roll is automatic; audio from b-roll is faded in and out and dropped in volume. All automatically, and in seconds.
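
The selection logic behind these edits is far more involved than this, but as a toy sketch of the basic idea only (this is not the actual First Cuts algorithm, and every name and number below is invented): pick clips whose logged story keywords match, in story order, until the duration budget is used up.

```python
# Toy sketch of keyword-plus-duration selection. This is NOT the First Cuts
# algorithm, just the basic idea: keep clips whose logged story keywords
# match, in story order, until the requested duration is used up.
clips = [
    {"name": "Interview_01", "keywords": {"growing up", "austria"}, "secs": 45},
    {"name": "Interview_02", "keywords": {"sound of music"},        "secs": 62},
    {"name": "Interview_03", "keywords": {"growing up", "skiing"},  "secs": 38},
    {"name": "Interview_04", "keywords": {"record deal"},           "secs": 51},
]

def select(story_keywords: set[str], limit_secs: float | None) -> list[dict]:
    chosen, total = [], 0.0
    for clip in clips:  # clips are assumed to be in story order already
        if clip["keywords"] & story_keywords:
            if limit_secs is not None and total + clip["secs"] > limit_secs:
                continue  # too long to fit; a real system would trim instead
            chosen.append(clip)
            total += clip["secs"]
    return chosen

# A four-minute "Growing Up" edit from the selects:
for clip in select({"growing up"}, limit_secs=240):
    print(clip["name"], clip["secs"], "s")
```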

The project is about a young triple threat – singer, dancer, actor – Tim Draxl, discovered in Sydney when he was just short of his 18th birthday.

He played Rolf in a professional touring production of The Sound of Music in Australia in 2000, and his career has blossomed from there, along with the three-CD deal he has with Sony Universal: the first when he was 18!

Remember, these edits were done in seconds, from selects using Assisted Editing’s First Cuts software. And yes, this is my baby (along with Dr Greg Clarke).

The Sound of Music Edit

Without limits – about 13 minutes of material.

Four minute limit set. Edit is tighter and only the best material makes it to the edit.


Growing Up

Tim grew up partly in Australia and partly in Austria as his father worked as a ski instructor. This is the unlimited version of the “Growing Up” edit.

[Update: I forgot YouTube’s 10 minute limit, so one of the movies was too long, and YouTube can’t distinguish between a 6 minute cut and a 4 minute cut, thinking they’re the same. Fortunately the videos are also available on our site. The Growing Up unlimited and six minute versions are available at http://assistedediting.com/FirstCuts/results.html]

And finally with a 4 minute limit.

Categories: Apple Pro Apps, Item of Interest

Some cool tools for Final Cut Pro from Edit Mule

Auto-Collapse for FCP http://bit.ly/dtJQ1h Tidy up those timelines by collapsing redundant layers, removing unused parts of clips, etc. Looks powerful and useful.

We often create or are presented with messy, confusing timelines… This is the perfect way to simplify unwieldy timelines… It’s ideal for efficient use of drive space before media managing and re-conforming; and also for consolidating sequences to one track for exporting old style CMX EDL’s.

Filter Removal for FCP. Unlike Final Cut Pro’s Filter Removal tool, this one allows selective removal.

So often we find ourselves with sequences with tons of filters of all varieties. For example, say you have a whole sequence that’s been de-interlaced, colour corrected, with some maybe blur and film effect filters peppered around too, and you want to remove just the interlace filter… its impossible without deleting all the others. The workaround for this little problem is as tedious as it gets, you’ve got to pick through each and every clip and manually delete each of those pesky de-interlace filters… Now with EM Filter Remover you simply select the sequence whose filters you want to edit, and it does it all for you!

And one more that I didn’t tweet about is Auto Scratch. Automatically set the Scratch Disk to follow the project. Yah!

Particularly useful for machines and facilities with many operators and projects… EM Auto Scratch remembers where each projects render files and media destinations are meant to be, even when you hop between projects. No more excuses for colleagues who’ve accidentally deleted all the media for the project you’ve been working on for months!

Until today I wasn’t even aware of Edit Mule – out of the UK and creating some nice tools.

Categories: 3D, Item of Interest

Consumers Put 3D TV to the Test

I’m somewhat of a 3D skeptic, particularly when it comes to 3D Television in the home.  I’m fairly comfortable with the cinema experience (although the darker images, awareness of the glasses framing the screen and directors throwing things at me all the time are negatives in my opinion), but I just don’t see how the 3D experience will work with the way we view Television.

In the current issue of DV Magazine, editor David Williams asks why anyone wouldn’t want 3D in the home. My response is that, with glasses, it fundamentally does not fit with the way we watch TV. We don’t just watch TV silently; if I wanted that, I’d watch on my laptop while working on email or Twitter or curating my photo library or something else.

In both cases, glasses would get in the way. It can take us three hours to watch a one hour (44 min) TV show because we’ll stop and discuss something that the show triggered or that we remembered. Glasses off, back to TV, glasses on. Or Greg will be cooking while watching TV. Again, glasses are incompatible.

The same limitations do not apply to color, stereo sound or HD.

It basically comes down to this: TV watching is not a monotasking activity. It’s not sufficiently compelling for that, so we multi-task with conversations, or while working on another screen. And I see 3D being incredibly difficult in that situation, even without glasses.

OTOH, I see it working for dedicated sports folk who don’t interact much with other people while watching sports (there goes the Super Bowl party). 3D gaming: definitely killer. 3D cinema: bound to be good when done well. But 3D TV in the home? I remain skeptical.

Similarly, the folk who viewed 3D for the Technologizer article seem interested, but not enough to want to spend money:

Glasses, in fact, were the biggest obstacle. “You’re going to ask friends and family to spend $150-$200 on a pair of glasses?” Tom asked. The cost would be prohibitive if he wanted to invite friends over for a football game or a movie. The group was simply incredulous when I explained that glasses from one manufacturer wouldn’t work with TVs from another. Third-party universal models are coming out, however; and Samsung has vaguely promised future interoperable models.

Reactions were the same at the World Cup screening. While I was watching, a family came up to look at the TV. I offered the boy, about 4, my pair of glasses. He tried them for about a second, then pulled them off. His dad, probably in his 40s, was about as enthusiastic. “I like the TV, but will probably never buy the glasses,” he said, adding, “Only one percent of the programming is in 3D. And then you gotta buy the $500 player.” (Samsung’s BD-C6900 actually lists for $400.)

My new German friends felt the same. “We would pay a little more,” said Astrid, “because we’re technology freaks.” But she felt the premium was too much today. “We just bought our first flat screen in the fall,” she added, saying it would be a while before they got another TV.

Tom, from the first group, may have summed up everyone’s opinion when he said, “I could see buying this in maybe five years, when there’s more content and cheaper glasses.”
