The present and future of post production business and technology | Philip Hodgetts

Archive for July 2009

I’ve been thinking a lot over the last couple of months about news. In fact, somewhere within me is brewing a book on the way the Internet and technology have changed news, so when the Digital Production BuZZ asked me to comment on the subject this week, it forced me to put some of the thoughts into a coherent form. Hopefully last night’s interview (my segment starts 20 minutes in) was coherent, but I’d like to share those thoughts with you here as well.

I think most people are aware that the newspaper industry, in particular, is in trouble. The Internet and modern technology have changed the way we get and consume news. They’ve also changed the way the news itself is gathered.

There are several ways that the Internet and technology have changed news, and I’m sure my thoughts here only skim the surface. First, a little history. Back in the days PI (Pre-Internet) – really just 15 years ago – news was hard to come by. We didn’t get information internationally, or even nationally, without the newspaper and, to a lesser degree, radio and television – but mostly the newspaper. The entire contents of an hour-long evening news bulletin would not take up the space of the front page of most newspapers of record, so it was to newspapers we looked for local, national and international news.

I used to be a 3-paper-a-day man back in Australia. The local newspaper for local news; the State-Capital-based newspaper of record; and the national financial paper for, well, national financial news. (I was a Fellow of the Australian Institute of Company Directors in those days, and had a keen interest in such things.)

I haven’t read a newspaper on a regular basis in 10 or more years! These days I get my news via RSS into an aggregator. My general (local, national, and international) news comes from eight major sources: AP, LA Times, Wall Street Journal, Washington Post, NY Times, CNET, Sydney Morning Herald and Yahoo Technology News across two countries. But I’m only interested in a fraction of what they report.

But these are just eight of the nearly 300 RSS feeds that feed me the news I’m really interested in. No newspaper would ever be likely to give me that personalized look at the world as it evolves. Plus, I don’t have to wait 24 hours to get “aged news” (as Jason Jones put it on The Daily Show).
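For the curious, the mechanics of an aggregator are simple enough to sketch: every RSS 2.0 feed is just an XML document listing items with titles, dates and links, and the aggregator merges items from all your subscriptions and sorts them newest-first. A minimal Python sketch (the sample feed and its URLs are invented for illustration):

```python
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

def parse_rss_items(rss_xml):
    """Extract title/date/link dicts from an RSS 2.0 document."""
    root = ET.fromstring(rss_xml)
    return [
        {
            "title": item.findtext("title", ""),
            "date": item.findtext("pubDate", ""),
            "link": item.findtext("link", ""),
        }
        for item in root.iter("item")
    ]

# A tiny sample feed standing in for one of ~300 subscriptions.
SAMPLE = """<rss version="2.0"><channel>
  <title>Example Feed</title>
  <item><title>Story one</title>
    <pubDate>Wed, 29 Jul 2009 08:00:00 GMT</pubDate>
    <link>http://example.com/1</link></item>
  <item><title>Story two</title>
    <pubDate>Wed, 29 Jul 2009 09:00:00 GMT</pubDate>
    <link>http://example.com/2</link></item>
</channel></rss>"""

# A real aggregator would fetch and parse one document per feed URL,
# then merge everything into a single newest-first list.
all_items = parse_rss_items(SAMPLE)
all_items.sort(key=lambda i: parsedate_to_datetime(i["date"]), reverse=True)
print(all_items[0]["title"])
```

Multiply that by a few hundred feeds, refresh periodically, and you have the essence of every RSS reader.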

Now, back PI we needed the same AP article reproduced in the local paper in each market because that’s how we got the news. These days we only need the source – the original source which is rarely a newspaper or AP – and a link. It annoys me that the same story appears 20 times or more in one set of news feeds, duplicated from the same AP article and rarely with any editorial influence or rewriting.

In fact, I think you’ll find a good portion of most papers are simple rewrites of press releases or AP stories, with very little real reporting being done at all.

Blog aggregators like the Huffington Post – and, to an increasing degree, AOL, which has more than doubled its number of reporters in the last year by hiring those discarded by mainstream media – are creating their own reporting and commentary networks. News is coming directly from the source. We don’t need an AP or NYT outpost in Iran during an uprising. We get news from Iran, from The Tehran Bureau or Global Voices Online (a blog aggregator that knows which bloggers to trust).

As an indication of how much the news industry has changed, The Tehran Bureau, published by volunteers out of a small suburban house in Massachusetts, has had very accurate and detailed information about what is going on in Iran while the mainstream media have been sidelined by officials in the country and unable to report. Its information was being quoted and “reported” by mainstream media who can’t get coverage from their traditional channels.

None of this could happen without the Internet infrastructure and specific technologies that sit on top of it, and sometimes link into other technologies like the cellular phone network’s SMS system.

It was a blogger who brought down Dan Rather by revealing that the papers purporting to reveal irregularities with President George W. Bush’s service in the Air National Guard were fake. There are dozens of such incidents where bloggers, with time and the Internet at their disposal, have broken stories with more accuracy and greater detail than the mainstream media. (Frankly, the accuracy rate of mainstream media is pretty appalling.)

It was a cell phone recording that affected the balance of power in the Senate in the 2006 mid-term elections, when a Democrat staffer recorded George Allen’s infamous “Macaca” comment that, arguably, lost him his almost certain return to the Senate.

It was the cell phone video of “Neda” being shot in the civil disobedience after the Iranian election that helped inspire more people to come out in opposition to the country’s government.

With millions and millions of cell phones in consumers’ hands, it’s now more likely than not that a camera will be at the scene of a major incident. The first picture of Flight 1549 in the Hudson was from Janis Krums’ iPhone on the ferry that was first on the scene to pick up the passengers. Naturally, he shared the photo via Twitter. (It was 34 minutes later that MSNBC interviewed him.)

Twitter was first to break the news, again. People have sent tweets from within the midst of the news, including instances where people have tweeted their involvement in a disaster like Mike Wilson, a passenger on board Continental’s Flight 1404, which skidded off the runway at Denver airport and burst into flames. Mike tweeted right after he escaped out of the plane’s emergency chutes and posted a picture of the foam-covered aircraft long before any traditional media was even aware of the accident.

When a Turkish Airlines Boeing landed short and broke apart at Amsterdam’s Schiphol, the first word to the public was a tweet, sent out by a fellow who lives near the airport. (FlightGlobal.com)

Twitter has become a major news source, such that there are now sites, like BreakingTweets.com, dedicated to breaking news on Twitter, in addition to Twitter’s own Breaking News page. If you want up-to-the-minute news, you follow Twitter, it seems.

Even if newspapers and the Associated Press ultimately fail, as seems most likely, I still see a bright future for journalism – just not in the traditional places.

There is one more aspect to “news and the Internet” and that’s the social one. Many of the sources I subscribe to in my RSS reader are bloggers who write in the space. I may miss an article or resource, but Scott Simmons (on his own site or at ProVideoCoalition.com), Oliver Peters, Larry Jordan, Shane Ross, John Chapell or Norm Hollyn are there to find the things I miss and bring them to my attention. (Of course, usually with some insightful writing in between.)

I don’t have to read everything or be everywhere because the social networks I participate in create a new network far more valuable to me than the best efforts of the Associated Press!

July 30, 2009

Why I was wrong about ProRes

Since the launch of ProRes 422 in Final Cut Pro 6, I’ve been vocally disappointed that Apple didn’t choose to license Avid’s DNxHD family of codecs instead of developing their own competing (and almost identical) ProRes 422. Right down to the data rates used, the two codec families are very similar.

Except DNxHD is truly cross-platform, including *creating* files on Windows. Apple might not want to acknowledge it, but the best high-end compositing apps (Nuke et al.) run on Windows and many of the highest-end 3D apps are Windows only. That’s the world Apple’s customers live in, and not having a way to create ProRes 422 on Windows is still a failing of that codec family. DNxHD codecs can carry alpha channels in all versions. Plus, if Apple had adopted DNxHD then media would be (sort of) compatible between Final Cut Pro and Media Composer. (A rewrap from MXF to QuickTime would be required, or the use of RayTools to read MXF natively into Final Cut Pro.)

But with the release of Final Cut Pro 7 and the new ProRes 4444 codec, I can see why Apple wanted to “do their own thing” as it were. They get to control the future of the codec family without involving third parties.

There’s no DNxHD 444 codec. There is now a ProRes 4444 codec and there is now alpha channel support for those who want it. (Admittedly I’d like to see alpha channel support in the 220 and 140 Mbit versions as well, but perhaps I shouldn’t be greedy.)

While ProRes only duplicated DNxHD functionality (and the new 45 Mbit Offline codec really does duplicate the similar DNxHD 36 codec), it didn’t make sense to have competing offerings.

Giving us more than DNxHD does makes up for the duplication.

But please, Apple, give us a way to create those files on Windows. I’m no Windows fan but this is the real world and Windows exists in media creation.

I was prepared for a “small” release this time round, as I assumed that the Pro Apps Team would be working hard to convert to Cocoa and would have to release a smaller “interim” release, but Final Cut Pro 7 is definitely more than I was expecting.

Having iChat Theater built-in means no more workaround with remote collaboration using two Macs! It also suggests the Pro Apps folk “get” that remote collaboration is booming and they know they need to adapt to that world.

Likewise, the new publishing palette is going to be great for a lot of editors who need to routinely provide progress updates and deliver them on the web. That it runs in the background while you continue working is even better. You could have saved a reference movie, sent it to Compressor and added an upload action to the preset, but this is just so much simpler, and gives direct access to the most popular sharing sites, and MobileMe! MobileMe might be the best choice for many editors – files can be private, and certainly not as public as YouTube!

My all-out favorite feature, while a small one, is that Markers in a Sequence now move with the Sequence as clips are inserted or deleted. Colored Markers are great and I’ll use them a lot to identify types of markers. For example, one color could mean “more work needed here”, another could be a locator so you can jump quickly to part of the Sequence, and so on.

The technologist in me is very impressed with the new ProRes codecs. Those that work at the high end will love the ProRes 4444 codecs (and those that want an alpha channel will use it anyway). The Proxy version at 36 Mbit/sec parallels Avid’s own DNxHD offline codec and Apple needed something similar for HD offline. The most interesting codec is, however, the 100 Mbit LT version.

Clearly aimed at acquisition, I expect we’ll see camcorders and other devices, like maybe the Ki Pro, supporting this data rate, which is coincidentally the same as AVC-I at its highest setting. AVC-I up against ProRes 422 LT would be very, very similar in quality – both 4:2:2 and 10 bit, using similar compression strategies. It would be a perfect data rate for the Ki Pro if AJA want to support it. (I can’t help but wonder if the last-minute delay of the Ki Pro wasn’t to wait for this announcement, but I’m just guessing.)

The Pro Apps team have thrown a “sop” at those who want Blu-ray authoring with the ability to create a Blu-ray compatible H.264/AVC file in Compressor that can be burnt to Blu-ray or standard DVD media. Nothing that Toast 10 hasn’t been able to do for some time now but nice to have it included in the lower-cost Final Cut Studio.

Many have interpreted the inclusion of this feature as an indication that Apple are going to get “more serious” about Blu-ray, but I’m not sure. I think it indicates the opposite. If there were going to be a big Blu-ray push then these features would be added to DVD SP, which received almost no update in this version. I think we’ve got Apple’s “solution” for Blu-ray in Final Cut Studio. Who knows – only the future (and probably a Product Manager at Apple) will tell. (The PM won’t ever tell, that’s for sure!)

As to the loss of LiveType: it was probably inevitable, as it was increasingly obvious that Motion was taking on many of the roles previously done by LiveType. With the LiveType glyph animation features added to Motion (adopted directly from LiveType), most of the functionality is now in Motion. My only concern is whether Motion now recolors LiveFonts correctly (i.e. the way LiveType did). I’ll test as soon as I have a copy in hand.

Finally, the price. Who can complain about Final Cut Studio being the same price now as Final Cut Pro was alone for the first couple of generations?

Certainly, on the surface, it’s a good release.

On the timing – I notice that all the Pro Apps products – Studio, Server and Logic (Pro Music) – came out together for the first time. Does it mean anything? It’s Apple; who knows, and I’d rather not drive myself crazy trying to second-guess them!

July 22, 2009

An update and two new pieces of software

We’ve been all about logging and metadata over the last few weeks!

First Cuts has just had a substantial update: we’ve added a new module to it that makes it easy to use Microsoft Excel to do your log notes. The new module is called exceLogger and it came about because of a suggestion from a First Cuts user. The advantage is that, even if you’ve already captured your clips, exceLogger will read your log notes out of the Excel spreadsheet and add them to the logging fields in your Final Cut Pro project.
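To give a sense of what that kind of merge involves, here’s an illustrative sketch – emphatically not exceLogger’s actual code. It assumes the spreadsheet has been exported as CSV with invented column names, and uses a radically simplified stand-in for a Final Cut Pro XML project; the real thing reads Excel directly and handles full XMEML:

```python
import csv
import io
import xml.etree.ElementTree as ET

def merge_log_notes(project_xml, notes_csv):
    """Copy a 'Log Note' column from a spreadsheet export into each
    matching clip's logging info, keyed on clip name."""
    notes = {row["Name"]: row["Log Note"]
             for row in csv.DictReader(io.StringIO(notes_csv))}
    root = ET.fromstring(project_xml)
    for clip in root.iter("clip"):
        name = clip.findtext("name")
        if name in notes:
            # Create the logging container if the clip doesn't have one yet.
            info = clip.find("logginginfo")
            if info is None:
                info = ET.SubElement(clip, "logginginfo")
            ET.SubElement(info, "lognote").text = notes[name]
    return ET.tostring(root, encoding="unicode")

# Toy project and spreadsheet export, matched on the clip name.
PROJECT = "<xmeml><clip><name>Interview_01</name></clip></xmeml>"
NOTES = "Name,Log Note\nInterview_01,Great answer at 00:01:20\n"

merged = merge_log_notes(PROJECT, NOTES)
print(merged)
```

The key idea is simply a dictionary lookup from clip name to note, so it works whether or not the clips were already captured.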

The update is free for First Cuts owners.

We liked the idea of exceLogger so much that we created a stand-alone application called exceLogger for FCP – you can read more about it here.

The second new piece of software is something completely different. Final Cut Pro back at version 5.1.2 introduced support for QuickTime metadata, and more cameras and formats have been adding metadata to their media files. (Philip wrote about this metadata at his blog.) The problem is that you can’t see this QuickTime metadata in Final Cut Pro’s browser view – it’s hidden.

That’s why we created mini Metadata Explorer (miniME for short): export your clips from Final Cut Pro as an XML file, and open it in miniME. The spreadsheet view fills in with your clip names and columns of QuickTime metadata.

The free version of miniME allows you to save this metadata out to an Excel spreadsheet. But if you buy a serial number you also get the option to add this hidden metadata into the Final Cut Pro logging fields of your choice. There’s more information here.
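For the technically curious, the core of what miniME does – turning per-clip metadata key/value pairs into spreadsheet columns – can be sketched in a few lines. This is not miniME’s code: the XML below is a simplified stand-in for a real Final Cut Pro XML export, and the metadata keys are illustrative:

```python
import csv
import sys
import xml.etree.ElementTree as ET

def metadata_table(xml_text):
    """Build a (header, rows) pair of clip names plus metadata
    key/value columns from a simplified FCP-style XML export."""
    root = ET.fromstring(xml_text)
    rows, keys = [], []
    for clip in root.iter("clip"):
        row = {"Name": clip.findtext("name", "")}
        for md in clip.iter("metadata"):
            key = md.findtext("key", "")
            row[key] = md.findtext("value", "")
            if key not in keys:
                keys.append(key)  # each new key becomes a column
        rows.append(row)
    return ["Name"] + keys, rows

# Toy export: one clip with two hidden metadata entries.
XML = """<xmeml>
  <clip><name>Clip_A</name>
    <metadata><key>com.example.make</key><value>Canon</value></metadata>
    <metadata><key>com.example.model</key><value>HV40</value></metadata>
  </clip>
</xmeml>"""

header, rows = metadata_table(XML)
writer = csv.DictWriter(sys.stdout, fieldnames=header)
writer.writeheader()
writer.writerows(rows)
```

Saving that CSV is essentially the free version’s feature; writing the values back into chosen logging fields is the paid one.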

I have never been an HDV hater. I always thought it was a great format that allowed a lot of HD production to be affordable, while needing to be treated carefully for maximum quality.

From the first JVC HDV camcorder – lousy camera but showing promise – HDV was an affordable, accessible HD format that continued to improve in quality from generation to generation as the encoders improved. (MPEG-2, like DV, is constructed so that there can be considerable innovation and improvement on the encoder side, as long as a reference, or standard, decoder can decode it.) MPEG-2 is now more than four times more efficient than it was when the specifications were finalized 15 years ago.

The reason for the codec history lesson is that HDV is based on MPEG-2. (As are XDCAM HD and XDCAM EX.) Encoders improve over time, so inevitably older models fall behind the latest releases. For that reason I had to drop from consideration – for a new camera – Canon’s XL-H1, A1 and G1; Sony’s diminutive HVR-A1U; and JVC’s GY-HD110U. These were all released in 2006 or earlier and, while Canon claimed the “best” encode quality at the time, that is no longer even remotely true. JVC themselves claim that the MPEG-2 encoders in the HD200 and HD250 cameras are “100% better than the year before” (the year the 110U was released)!

While these would be excellent purchases on the second-hand market, if you’re buying new you should be buying state-of-the-art, not three-year-old technology. That’s two whole encoder quality iterations!

Another reason why HDV didn’t make the cut this year is that most of the pro-focused camcorders are more expensive than more versatile and up-to-date options. For example, the nearly two-year-old GY-HD250 currently has a street price of $8,950 – that’s the highest street price of any camcorder on the list and more than Panasonic’s HPX300 or Sony’s EX-3.

I’d certainly still consider a Canon HV40 as a personal camera or a crash camera – at only $850 it’s hard to go wrong. The main reasons it would still stay in play as a personal camcorder are price and native – or at least well-proven – workflows in all NLEs. But even here the upcoming Canon Vixia HF S11 and HF 21 AVCHD models will likely give better quality – unless you want 24P, which is an HV20/30/40 exclusive in the price range.

This year we have a plethora of great choices for camcorders: none of them HDV, in my opinion. If you’re not editing with Final Cut Pro – where the JVC HM100 and HM700 are less attractive – then you might consider a Sony V1U (released 2007, so only one generation of technology old), but for the million and a quarter Final Cut Studio users the native QuickTime workflow with the quality of the 35 Mbit/sec XDCAM HD codec makes a lot more sense at the same price (V1U vs HM100).

This year’s great choices are all non-tape cameras: the HPX300, EX-1, EX-3, HPX170, HM700, HM100 and HMC150 write to proprietary solid state media (P2, SxS) or to inexpensive and ubiquitous SDHC cards. That’s solid state media at tape-like pricing that you can simply record and keep, as well as keeping a digital backup. (Now that’s appealing.)

So, it seems that HDV was the last new tape-based format, ever. And I think we’re over it. As we’ve started to work out the issues of long-term storage of non-tape media, the advantages of much faster ingest – instant in some cases – and enhanced metadata support have become obvious. To different groups at different times, for sure, but we are facing a non-tape future.

And I think I’m OK with that.

The format that has really surprised me is Panasonic’s AVCCAM. I have to say my initial response to the HMC150 was “why on earth are they muddying the waters by rebranding AVCHD as AVCCAM?” I’m still not convinced that two names for the same format make sense, but the higher data rates available on the HMC150 (and upcoming HMC40), and the AVC (a.k.a. H.264) codec at the base of the format, mean that AVCCAM delivers much higher image quality: well, images that suffer less from compression-related degradation.

The disadvantage: only Premiere Pro CS4 and Sony Vegas really deal with it natively and Premiere Pro CS4 still has some issues with some variants of the format. Avid and Apple’s software re-encodes the files to the much-larger ProRes 422 or DNxHD codecs. (Typically 5-6x the storage requirements of AVCCAM/AVCHD.) But it’s a decent camera at a decent price with higher-than-HDV image quality, just with a workflow hiccup. (See comments on HV40 above.)

The HMC150 records to SDHC cards, as do the other two hot picks of the year: JVC’s HM100 and HM700. Whatever format you choose (HPX300, EX-3 or HM700) if you want a shoulder mount you’ll pay a premium. Typically, however, you get interchangeable lens capability in those same cameras, so it’s not all bad.

Finally, a word about the HPX-300. Because of its AVC-Intra support – 50 or 100 Mbit/sec bit rates and 4:2:2 10 bit recording – the HPX-300 has the highest record quality (compressed image quality) of all. There’s no real arguing that this is the quality king this year.

Except for the Ki Pro factor. AJA’s almost-released Ki Pro is a hard drive or Compact Flash recorder that records native QuickTime files in ProRes 422 – near-uncompressed 10 bit, 4:2:2 recording, quality equal to the AVC-I support in the HPX-300. Every one of the recommended cameras this year can record its uncompressed analog or digital output to the Ki Pro. If you’re not working with Final Cut Pro, though, it’s a wash, like the JVC HM100 and HM700.

It must mean something when there are so many cameras targeting a specific postproduction NLE. The only other time I recall that happening was with a (from memory) Hitachi camera that recorded native Avid media, but I forget the details and it never reached any sort of momentum.

HDV 2004-2009 R.I.P.

There was a time when the only way to get a book published was to interest a publisher and sign away your copyright to that publisher. There were definitely benefits to that arrangement, mostly starting with a nice up-front advance on sales!

However, most authors never see anything more than that advance and usually end up owing money (in theory) to the publisher as a consequence of insufficient sales to cover the advance. The per-book return to an author is so low that most authors make more money off the Amazon affiliate link than they do from the book sale!

When you self publish you get a much larger return-per-sale than from a publisher, because you’re taking on more of the work and risk yourself. With print-on-demand technologies and online sellers like Amazon open to all, it’s certainly practical to self-publish, but should you?

Based on my experience with The HD Survival Handbook; the Pro Apps Tips collections; Awesome Titling; Simple Encoding Recipes (just rewritten last week for 2009); and, most recently, The New Now, here’s what self-publishing involves. This exercise started with simple downloadable PDFs and has led to a paperback now on Amazon.

That you will have to write your content and provide most of the illustrations is expected, and pretty much the same whether you self-publish or have a publisher. One intangible advantage of a publisher is that they are going to keep on your back about the book once they’ve paid you the advance, whereas when you self-publish you’re responsible for your own scheduling.

You have to provide your own editor/proofreader

Everyone (whether they like to think it or not) needs a proofreader and someone who reads their material to ensure content accuracy and grammatical clarity. Believe me, my work is much better thanks to Greg Clarke’s careful read throughs and constructive criticism. Even better, he works to improve my work, not take my “voice” out of it.

My experience with publishers (two companies) is that they try to achieve a soulless, bland style that could be anyone’s. I have, as you’ve probably noticed, a personal style and voice in my writing. I like that, and it seems my readers like the style. By self-publishing I get to keep my voice in the work – to keep the writing in my style, not something generic and dull.

If you must edit your own work, or simply can’t find someone to fill that role, then read it out loud. Reading aloud takes a different path within the brain and you’ll recognize mistakes or lack of clarity much more easily if you read out loud.

You’re responsible for design and layout

Personally I like playing with illustration, layout and design. My font choices are probably boring and The New Now is probably a point-size too large (although my contemporaries like the slightly larger print for aging eyes). I totally enjoyed laying out and creating the illustrations for The HD Survival Handbook so this isn’t daunting for me. But if you’re not comfortable doing design, you’ll need to (probably) pay someone to lay out the book, whether you’re distributing a PDF or going to print.

Likewise cover design and cover copy. It’s all going to come back to you without a publisher, so be prepared to put in even more time, or pay someone to do it. For covers, Amazon’s CreateSpace has templates you can draw on.

You’re responsible for the printing

One of the two primary reasons for having a publisher was to fund the expense of printing (typically) 5,000 books. (Not surprisingly, the advance when I was working with publishers was equivalent to the return from 5,000 copies. Few authors see any additional return.)

These days, with on-demand printing already very reasonable for B&W books and getting more so for color, printing is not an issue any more. (As an aside there’s a new generation of the print-on-demand technologies just announced that are twice as fast and half the cost of the current machines. This will reduce the cost of on-demand printing even further.)

I chose Amazon’s CreateSpace simply because the relationship with Amazon makes it a very simple choice. It solves three problems in one: printing, an ISBN, and access to retail distribution. The process is simple enough even for a first-time user. I had only one issue, which appears to have been more a problem with UPS than with CreateSpace.

You can use CreateSpace as a channel to Amazon, or simply to print copies of the book to sell after presentations or from your own website. (We sell the PDF version from our site, all print copies that are not in-person sales are handled by Amazon.)

Now, when an order is received at Amazon, it’s printed at CreateSpace and shipped without any additional effort on my part.

It is a little more complicated to be listed on Amazon if you use Lulu or another on-demand publisher.

You need to provide an ISBN

While not necessary if you plan to sell direct only, an ISBN is essential if the book is to go into any distribution channel or to a retail bookshop. Some places want to charge up to $250 for an ISBN to be allocated to your book, but CreateSpace include one for no additional charge. You simply leave a blank space on the cover design where the ISBN will be imposed and printed.

Booksellers worldwide can order your book by ISBN.

You need to get access to distribution channels

Unless you plan to sell only in person and through a website, you once needed a publisher to get access to the retail book channel. CreateSpace automatically offers listing on Amazon via a simple checkbox and price setting. (You set the price for Amazon, although they will tell you the minimum price at which you can sell and still get a return!)

Although there are other booksellers – who can order the book via the ISBN – I didn’t think there was value in seeking to be listed at Barnes &amp; Noble or any other bookseller. My book can be found on Amazon or ordered by any bookseller, and that’s enough. I also figure anyone in our industry (loosely defined as digital production, post production or distribution) will likely buy online rather than attempt to find any given book in a walk-in bookshop. Most likely they will go to Amazon, where the book is listed.

Open, unmediated access to the Amazon retail site is one of the most significant changes that made self-publishing practical.

You need to do your own publicity and promotion

In theory, your publisher is going to promote and publicize your book. In theory. In practice what mostly happens is that the book is listed among all upcoming books in your category in a publication circulated to bookshops (so they will advance order copies). They’ll send out an email to selected, somewhat appropriate media and bloggers, and that’s about it.

You might get a 30 minute presentation spot on a publisher’s booth during a trade show but by-and-large that’s the publisher’s contribution to promoting your book. Most authors will expend effort to promote the book themselves anyway.

Which is another reason to consider self-publishing. If you’re going to need to promote your book yourself anyway, why not just promote your book yourself and leave the publisher out?

I wrote an article for the 2009 Supermeet Magazine (available shortly for download – check LAFCPUG.org) on growing a market for your independent production. That information would be equally valuable for building a market for a book, using modern PR techniques and (don’t hate me) “social media”. (There is more in The New Now on using the same techniques to build a business – naturally with a lot more depth.)

From my perspective self-publishing has been a positive experience. I get to keep my unique style and voice; I get to control how the book looks (not important to everyone but it is to me) and most importantly, I get to keep a larger portion of the return from my hard work. To date, we have done significantly better on the books than I would have had I gone down the more traditional path. Given the sorts of advances now being offered by publishers (trending toward half what they were five years ago and not enough to cover the time to write a book) I have done very, very much better from fewer sales than I would have had I published via the traditional route.

And here’s a final benefit. Author copies from CreateSpace are at cost. They are much cheaper, particularly for B&amp;W/grayscale books, than you would think – so much so that my New Now book is often my new ‘calling card’. It’s an inexpensive way to keep people thinking of you and recognizing the value you can add with consulting and other services.

As long as the six “You have to” issues listed here aren’t deal-breakers for you then I recommend you give it a go!  Got questions? That’s what the comments are for.

July 9, 2009

How to get paid faster!

This is summarized from a small section of the “Work Smarter” chapter in The New Now: How to grow your production or postproduction business in a changed and changing world.

Never feel reluctant to invoice customers. You have done great work for them, helping make them more money (or you don’t have a future) so there’s no reason to feel the slightest bit embarrassed about wanting to be paid, and paid promptly.

Pay Fast

Let’s consider the other side of the equation: if you want to be paid fast, pay fast yourself. Unless you never want to do business with a supplier or subcontractor again, keep the relationship good by paying when due, or ahead of when due. Doing so will keep the business relationship alive and improve your reputation. You’ll become a valued client of theirs and get preferential treatment if it ever comes down to a choice of doing business with you, or with someone else.

You are not a bank – reset your status with customers

If you find yourself getting strung out for payment then you need to reset the relationship. If necessary, go to those customers who are slowing payment and point out that you’re a production company, not a bank, and since you won’t be getting a big bail-out you rely on customers to pay promptly.

Make it clear you’ll jump through hoops for the customer, but that you expect them to pay in a timely manner.

Tip for the Paranoid: Watermark all preview tapes, DVDs or files before payment has been received.

Talk about payment terms up front

You’re going to have to talk about it sometime and before you’re committed to the work is the best time. Unless payment terms are part of the agreement, customers can always claim they “didn’t know” or “we can’t do that”. If there is a genuine problem that their company cannot pay on your terms, this is the time to negotiate it, not when the job is complete.

Know your profitable customers

Do you know how much each client’s payment cycle costs you?

In a post at Howard Mann’s Business Brickyard, Colleen Barrett – former President of Southwest Airlines – has this story to tell about examining the profitability of each customer based in part on payment history.

Send your invoice promptly

In a small production or postproduction business, it’s the video work that interests most of us. The fact that we have to run a business as well is kind of a drag. One consequence is that invoices don’t get sent out promptly. Realistically you can’t expect anyone to pay until they have the invoice, so get the invoice out the door as quickly after the client accepts the work as possible.

Important: I’m talking about tried and true customers here. If you’re working with a totally new customer stick to the traditional policy of 30% payment of budget when work commences; 30% at the completion of production (or suitable postproduction milestone, like first draft assembly) and the balance of 40% due on completion. For new customers it’s very unwise to let the master out to the customer until payment has been made (and cleared the bank).

Avoid the mail

Once you let the postal system have control over your invoice you don’t know when it was received unless you take out receipt confirmation, which requires a trip to the post office for each invoice. Not fun. Instead, email the invoice and ask the client to confirm receipt. If you don’t hear back within a day or so, follow up with another email or phone call to confirm that the invoice has been received and that there are no problems with it.

Know the client’s payment process

The larger the company, the more complicated getting paid is. Know their system so you can work with it to your advantage, or at least not make payment unnecessarily slow because of a mistake at your end.

Offer a discount

Offer a small discount for early payment. Depending on your jurisdiction, many utilities and government bodies are legally obliged to take advantage of any discount offered.
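To see why payment departments take these discounts, here’s a hypothetical worked example using the classic “2/10 net 30” terms (2% off for payment within 10 days, otherwise the full amount is due in 30 days) — the rate and invoice amount are illustrative only:

```python
# Hypothetical example: "2/10 net 30" early-payment terms.
invoice_total = 10_000.00
discount_rate = 0.02

discounted = invoice_total * (1 - discount_rate)

# By skipping the discount the customer keeps the money an
# extra 20 days (day 10 vs day 30) at a cost of 2%; annualised
# that's roughly 2% * 365/20, or about 36.5% per year.
annualised_cost = discount_rate * 365 / 20

print(f"Pay within 10 days: ${discounted:,.2f}")
print(f"Annualised cost of skipping the discount: {annualised_cost:.1%}")
```

Seen that way, a 2% discount is cheap money for the payer — which is exactly why a discount can pull your invoice to the top of the pile.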

Make your invoices pleasant

Invoices are a fact of life, but make them appealing. Get the person who designed your logo and stationery to design the invoice. Make it look attractive and look like your company.

If you have a relatively small number of customers, as we typically do in production and postproduction, then take the time to include a personal message with the invoice (in the email or an accompanying letter). It could be a funny quote, a poignant statement about the industry, or something relevant to the customer’s business. By including something pleasant you can eventually make people look forward to your invoice. It can be the same message on every invoice, although change it at least every month.

Systematically follow up every invoice

Simply sending out a ‘Past Due’ notice and subsequent monthly statements isn’t going to put any stress or pressure on a company that is deliberately delaying payment. If that’s all we do, we’re enabling their poor behavior!

Never let copyright pass until you’re paid in full

One of the things I learnt painfully was to put a clause in the agreement (never a “contract”, that scares people) such that copyright in the work does not pass to the client until payment is received in full.

It’s a blunt instrument, and only really useful when the relationship has broken down and future work is unlikely, but it ups the ante because willful breach of copyright has penalties attached that inadvertent breach does not.

3 Jul/09

What about the hidden metadata in Final Cut Pro?

We’ve been working with a few people previewing, and getting feedback on, a new addition to our First Cuts assisted editing tool – basically checking some areas of Final Cut Pro that I haven’t explored for years – and in the process I had a most interesting conversation with Jerry Hofman.

Before I get to that though, let me ask (beg) for feedback on any of our software products. We want to keep making them better and love feedback, feature requests and especially problems. We respond quickly – this particular feature request was received on Friday 26th, discussed briefly during a Hollywood Bowl concert on Saturday night and was a preliminary feature by Wednesday!

Anyhow, in discussing this particular tool with Jerry (you’ll find out what it is soon enough!) I asked how much metadata from RED is imported to Final Cut Pro via Log and Transfer. Jerry, who uses RED a whole lot more than me (i.e. he uses it!) said “not very much”, which pretty much matched my understanding: I’ve worked with a whole bunch of RED clips with Sync-N-Link and never seen any of the color temperature, date or other information that’s in the RED metadata.

In sharing this conversation with my smart partner, and our main code writer, Greg Clarke, he commented “Oh, I do think Mr Hofman is mistaken!” (or words to that effect). Turns out Greg has been scrolling past this metadata for most of the last year. The difference is that Greg works with FCP XML exports, while Jerry and I were looking through the Final Cut Pro interface.

OMG! What a treasure-trove of metadata there is. And why didn’t we know of this? Surely someone from all the conversations we’ve had with people developing Sync-N-Link must know about this? (You’ll all come out of the woodwork into the comments and let me know you’ve known about it for years!)

So this morning Greg’s built me a tool for exploring this hidden (I prefer “secret” because it makes it seem more mysterious) metadata, turning it into an Excel spreadsheet. I already had XDCAM EX media and P2 media along with RED clips, and I was able to download some AVCCAM media shot with Panasonic’s HMC-150 camera.

There’s an enormous amount of Source metadata there. A lot of fields seem to be unused, even in the camera. Clearly, the current version of Final Cut Pro doesn’t have the flexibility to display items like ‘whiteBalanceTint’ or ‘digitalGainBlue’ settings in the original file. I guess this type of metadata is going to be challenging for Apple and Avid to deal with, as they don’t (currently) have displays in their applications for the enormous amount of metadata that is generated by tapeless cameras. I’m just very thankful that it’s being retained, and that it’s available via XML (and associated with a Final Cut Pro clip ID).
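If you want to poke at the XML yourself, here’s a rough sketch in Python of the kind of thing Greg’s tool does (this is my illustration, not his actual code): walk a Final Cut Pro XML interchange (xmeml) export and dump every leaf field found under each clip. I’m writing CSV rather than Excel to keep it self-contained, and I’m deliberately not assuming specific field names, since they vary by camera format.

```python
import csv
import xml.etree.ElementTree as ET

def leaf_fields(element, prefix=""):
    """Yield (dotted-path, text) pairs for every leaf element with text."""
    children = list(element)
    if not children and element.text and element.text.strip():
        yield (prefix + element.tag, element.text.strip())
    for child in children:
        yield from leaf_fields(child, prefix + element.tag + ".")

def dump_clip_metadata(xml_path, csv_path):
    """Collect all leaf fields under each <clip> in an xmeml export
    and write them to a CSV, one row per clip."""
    tree = ET.parse(xml_path)
    rows = [dict(leaf_fields(clip)) for clip in tree.iter("clip")]
    # The union of all field names across clips becomes the header,
    # so fields only some camera formats carry still get a column.
    fieldnames = sorted({key for row in rows for key in row})
    with open(csv_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)
```

Because it collects whatever leaf elements are present rather than a fixed field list, the same script surfaces RED, P2, XDCAM EX and AVCCAM source metadata alike — whatever the export actually carries.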

There’s definitely metadata already being produced that we can use to improve First Cuts – at least for non-tape media sources. But it’s also interesting to explore fields that are available but not being used.

Show all columns and you'll be surprised at what's available, or going to become available.

BTW, you can explore for yourself using Log and Transfer. Open any type of media that Log and Transfer supports and then right-click on the column header (like you would in Final Cut Pro) and select “Show all Columns”. The columns displayed will change according to the type of media selected.

So far, Sony’s XDCAM EX has the least amount of metadata and the least interesting metadata – barely more than the basic video requirements and information on the device: model and serial number.  RED footage has a lot of metadata, although most is focused on the technical aspects of the shot as you would expect for a digital cinema camera.

But take a peek at the source metadata from P2 media! All the goodness like the date of the shoot (which FCP otherwise does not export) and time (as does RED), but also fields for ‘Reporter Name’ (awesome for a First Cuts – News product) and Latitude and Longitude. While they’ve been blank in every instance, because I don’t think Panasonic are shipping any cameras with GPS built in yet, it does suggest that future Panasonic cameras are likely to contain GPS and store that data in with the media file. Anyone who’s a regular reader will know that means Derived Metadata! There are also fields for ‘Location Source’, ‘Location Name’, ‘Program Name’, ‘Reporter’, ‘Purpose’ and ‘Object’ (??).

AVCCAM carries all the fields of P2, more or less, with the addition of ‘memo’ and ‘memo creator’ fields.

It’s been fun exploring this ‘secret’ metadata. Now to find a way to make some use of it, or make it practical. Would anyone be interested in a tool that would not only read and explore this metadata, but allow some of it to be mapped to existing Final Cut Pro fields?
