The present and future of post production business and technology | Philip Hodgetts


 

If you look very closely at the restoration credits box at the bottom, you can see my credit as "Post Production Restoration Consultant".

 

I was at the Cinerama Dome to view the restored print of Windjammer and it occurred to me that there’s a lot of commonality between Cinerama (the three-camera/three-projector widescreen of the late ’50s and early ’60s) and 3D.

But first, a little back story. I have been working on the restoration of Windjammer as a technical consultant: making sure that the maximum quality we could get from the print was available for the restoration.

I also advised on tools for the job: The Foundry’s Furnace Core featured prominently, as did Adobe After Effects and Final Cut Pro. I also helped set the workflow and kept everything running smoothly.

Unfortunately, the negatives for the three panels of Windjammer are incomplete. In fact, the only place the entire movie survives is a badly faded composite 35mm anamorphic print.

You can see the trailer remastering process and how we telecined it (oh look, it’s me in the telecine bay) online, but today was the only time it’s likely to be shown in a Cinerama theater.

David Strohmaier and Greg Kimble did a great job on the restoration – all on Macs with Final Cut Pro and After Effects.

Now this wasn’t a full reconstruction, so we worked in HD – 1080p24 – but used the full frame height during telecine and correction so we didn’t waste any signal area on black. For the DVD, due in early 2011, the aspect ratio is corrected and a “smile box” (TM) treatment is applied to simulate the surround nature of Cinerama.

Because we were working in HD, I was pleasantly surprised by how great it looked at Cinerama size on the Arclight Theater’s Dome Cinema in Hollywood. (Trivia point: although the Dome was built for Cinerama, it never showed Cinerama until this decade.)

Another point of interest was that the whole show ran off an AJA KiPro, as it did in Bradford earlier in the year and in Europe last month. Each act of the 140+ minute show was contained on one drive pack. I can’t recommend the KiPro highly enough.

So, there we were enjoying the story (and restoration work) and it occurred to me that there were strong similarities in cinematic style between “made for 3D” 3D and Cinerama.

 

Before restoration, this composite image was washed out, lacking in saturation and very shifted toward red/magenta.

Before restoration, this shot was desaturated, shifted to red and blown out. (From the screening Sep 05, 2010.)

 

Cinerama seams together the images from three projectors into a very wide screen view that was the precursor of modern widescreen. The very wide lens angles favor big, panoramic shots that are held, rather than rapid cutting. Within this frame the viewer’s eyes are free to wander across multiple areas of interest.

Similarly, my experience of “made for 3D” 3D movies is that they are most successful when shots are held a little longer, because each time a 3D movie makes a cut it takes the audience out of the action for a moment while we re-orient ourselves in space. (Unfortunately there’s nothing in the Human Visual System analogous to that, whereas traditional 2D cutting mimics the Human Visual System – eyes and brain working together.)

Both Cinerama and 3D work best (in my humble opinion) when the action is allowed to unfold within the frame, rather than relying on the more fluid camera work of less grand 2D formats.

Since 3D had its last heyday around the same time as Cinerama, maybe everything old is new again? Digital Cinerama anyone? (How will we sync three KiPros?)

And one little vanity shot since today was the first (and likely last) time I’ve had my credit up on the big screen in a real cinema:

 

My first (and likely last) big screen credit moment. 9/5/10


 

 

In my experience few productions – be they film or television – are well planned from a workflow perspective. It seems that people do what’s apparently cheapest, or what they have done in the past. This is both dangerous – because the production workflow hasn’t been tested – and inefficient.

In a perfect world (oh, *that* again!) the workflow would be thoroughly tested: shoot with the proposed camera; test the digital lab, if one is involved; and test the edit all the way through to the actual output of the project. Once the proposed workflow is tested, it can be checked for improved efficiency at every step. Perhaps there are software solutions for automating parts of the process that would be extremely valuable for only small changes to the process. Perhaps there are alternatives that would save a lot of time and money, if only they were known about.

Instead of tested and efficient workflows, people tend to do “what they’ve done before”. When there are large amounts of money at stake on a film or TV series it’s understandable that people opt for the tried and true, even if it’s not particularly efficient because “it will work”.

Part of the problem is that people simply do not test their workflows. I’ve been involved with “film projects” (both direct to DVD and back out to cinematic release) where the workflow for post was not set until shooting had started. In one example the shoot format wasn’t known until less than a week before shooting started.

Maybe there was a time when you could simply rely on “what went before” for a workflow, but with the proliferation of formats and distribution outputs, there are more choices than ever to be made.

Which brings me to the other part of the problem. Most people making workflow decisions are producers, with input from their chosen editor. Chances are, unfortunately, that neither group truly understands the technology that underpins the workflow – or even why the workflow “works”. They know enough to get by, but my experience has been that most working producers and editors do not actively invest time in learning the technology and improving their own value.

And when they’re not working, they’re working on getting more work. Again, not surprising.

But somewhere along the way we need producers to research, and to listen to advisors (like myself) who do understand workflow and have a working knowledge of changing technology that can make a particular project much more efficient to produce. I just have no idea how to connect those producers with the people who can help.

We’ve seen, in a little under two years, how technology can improve workflows, even with our relatively minor contributions:

Rent a couple of LockIt boxes (or equivalent) on set and save days and days synchronizing audio and video from dual system shoots;

Log your documentary material in a specific way, and take weeks off post production finding the stories in the material (Producers can even do a pre-edit);

Understand how to build a spreadsheet of your titles, make a Motion Template, and automate the production of titles and changes to them (see the sketch after this list);

If you know you can recut a self-contained file into its scene components, how does that change color correction for your project?

Import music with full metadata.
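As an illustration of the titles-from-a-spreadsheet idea above, here’s a minimal sketch. Everything in it is hypothetical (a made-up titles.csv with name, role and timecode columns, and an equally made-up XML layout that a Motion template or any other titling tool could be populated from); shipping tools do considerably more than this.

```python
import csv
import xml.etree.ElementTree as ET

def titles_from_spreadsheet(csv_path, xml_path):
    """Turn a spreadsheet of lower-third titles into a simple XML file.

    Assumes a hypothetical CSV with 'name', 'role' and 'timecode' columns;
    the XML layout is equally hypothetical, just to show the principle.
    """
    root = ET.Element("titles")
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            title = ET.SubElement(root, "title")
            ET.SubElement(title, "name").text = row["name"]
            ET.SubElement(title, "role").text = row["role"]
            ET.SubElement(title, "timecode").text = row.get("timecode", "")
    ET.ElementTree(root).write(xml_path, encoding="utf-8", xml_declaration=True)

if __name__ == "__main__":
    titles_from_spreadsheet("titles.csv", "titles.xml")
```

Change one cell in the spreadsheet, re-run the script, and every affected title regenerates; that’s the whole point of treating titles as data rather than hand-built graphics.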

The items in the list above are all examples of currently-available software tools, from my company and others, that are working to make post production more efficient. I wrote more about this in my “Filling in the NLE Gaps” article for DV Magazine.

My question, though, is how do we encourage producers to “look around and see what’s available” and open up their workflows to a little modern technology? To this end, Intelligent Assistance is looking to work closely with a limited group of producers in 2010 to find ways to streamline, automate and make-more-robust postproduction workflows. So, if you’re a producer and want to save time and money in post, email me or post in the comments.

And if you’ve got ideas on how to encourage producers to move toward more metadata-based workflows, how do we get the message out?

So apparently some author comes up with a figure that online unauthorized distribution is costing the book publishing business $3 billion a year. (Once again repeating the totally bogus argument that each download is a lost sale but that’s for another post.) One has to question the independence of the study when the writer works for a company presenting a “solution” to the problem they identify, but let’s leave it for the moment.

This is only the tip of the iceberg. There’s another industry that costs the book publishing business $100 billion a year in lost sales: libraries. Using the same methodology as the study in the Publishers Weekly article cited above, this blogger calculates that libraries have cost publishers $1 trillion over the last decade.

So, if we’re going to solve the book “piracy” problem in a way that really helps publishers, we’ll have to close all the libraries. After all, they’re costing publishers more than 30 times what any unauthorized distribution does: even when that unauthorized distribution is calculated with totally bogus methodologies.

In fact, photocopying also costs the print publishing industry billions of dollars a year, so we should regulate photocopiers too. And if the RIAA/MPAA want a “three strikes” rule, then it should be applied to everything.

A three-strikes rule (as introduced in France) would mean that if an unsubstantiated assertion from a record-company-appointed “watchdog” is made against an IP address, the account would be cancelled and the user taken off the Internet. (Note: this is without judicial process, without any proof that the account holder did the download, and via a system that has already accused dead people of “piracy”; in short, without any of the legal process we normally hold as being important before issuing punishment. At the very least there should be a trial!)

So, if this is a good idea for music or movies (like they’re some “special” category) then it obviously should be carried through to protect print publishers as well.  According to “Freedom to Tinker” it would work like this:

The government sets up a registry of accused infringers. Anybody can send a complaint to the registry, asserting that someone is infringing their copyright in the print medium. If the government registry receives three complaints about a person, that person is banned for a year from using print.
As in the Internet case, the ban applies to both reading and writing, and to all uses of print, including informal ones. In short, a banned person may not write or read anything for a year.
A few naysayers may argue that print bans might be hard to enforce, and that banning communication based on mere accusations of wrongdoing raises some minor issues of due process and free speech. But if those issues don’t trouble us in the Internet setting, why should they trouble us here?
Yes, if banned from using print, some students will be unable to do their school work, some adults will face minor inconvenience in their daily lives, and a few troublemakers will not be allowed to participate in — or even listen to — political debate. Maybe they’ll think more carefully the next time, before allowing themselves to be accused of copyright infringement.
In short, a three-strikes system is just as good an idea for print as it is for the Internet. Which country will be the first to adopt it?

After all, if it’s fair to have people cut off the Internet (and their life) based on three unsupported, unproven assertions from anyone, it should apply to everything. Right? It should apply to the children of record company executives (who apparently only got a “talking to” from their father; I wish I could find a link to that story).

This is, of course, after the RIAA and MPAA have totally failed to establish that they have suffered any loss from piracy. (The biggest-grossing movies were mostly pirated before release, from within the studios.) Study after study (sorry, AdAge login required) after study shows that those who download music are the biggest buyers of music, but facts have never gotten in the way of idiot assertions from these organizations.

So, either we apply “three strikes” under some reasonable regime that requires the record company or movie studio to actually do what the law requires (identify the person at the account and prove that they uploaded a file, since “making available” is not established legal precedent in any jurisdiction), or we allow a regime where anyone can be accused of “piracy” by any other person, without proof or the need to follow established law.

Which are you going to support?

One way or another I’ve been thinking about what a “new media studio” would be like: how people would be paid, what would drive consumer demand, and all the rest that goes with a theoretical construct of a “replacement” for what we have now. Practically speaking, it’s more likely to evolve with many ideas in parallel than to arrive in one sudden upheaval that creates a new greenfield.

Although, as an aside from my main theme, I look ahead two years to when the actors’, writers’ and directors’ contracts come up for renewal. My feeling is that they’ll either have negotiated a settlement before the contract runs out, or we’re in for an apocalypse.

Remember that this is a purely theoretical construct, so I’m forgiving myself for not having every detail covered. What set me thinking, horrible though it is, was Demand Media. Wired’s article The Answer Factory: Demand Media and the Fast, Disposable, and Profitable as Hell Media Model is really a nasty kick in the mouth for production skills: essentially “quality” has no place in this (highly profitable) production line, where costs have been driven down by competitive pressure. It is probably the dystopian future we were warned against when the industry became “democratized”.

Fortunately I don’t think it’s feasible for television-like content. (I’ll just call it Television, but I mean the sort of content that people watch on networks, cable channels, off a satellite or even via Hulu.) For a thousand reasons I’d bet that, at a minimum, Television requires a more complex production process and higher demands on writing skill. Even relatively successful Internet shows often have underdone production values through a lack of quality writing, lighting or sound. (And some are excellent in all three because they have been made by “old school” folk.)

But let’s step back and apply some of the principles and see what might come of it.

Based on audience demand

Instead of basing program ideas on some ‘gut feeling’ of a producer or executive, we can take a lesson from the Demand Media case and design shows tailored to specific audience demands. Demand Media have algorithms that watch search terms and derive future “shows” as answers to questions people are asking ‘now’.

I’m sure there are ways of tackling similar challenges for TV shows: monitor social media interactions for the types of comments being made about shows, then use that data to derive algorithms to direct existing shows and to find ideas for new shows that will have an audience; the business model for that audience would also be known. (See “Funding it all” below.)
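Purely as a toy illustration of that idea (this is not Demand Media’s algorithm, and the comments and keyword lists below are invented), the core of such a system is little more than counting which topics an audience keeps raising:

```python
from collections import Counter

def rank_show_ideas(comments, topic_keywords):
    """Tally how often candidate topics come up in a pile of social-media
    comments, as a crude proxy for audience demand."""
    counts = Counter()
    for comment in comments:
        text = comment.lower()
        for topic, keywords in topic_keywords.items():
            if any(word in text for word in keywords):
                counts[topic] += 1
    return counts.most_common()

comments = [
    "I wish someone would make a decent show about home recording studios",
    "another cooking competition? really?",
    "more behind-the-scenes post production content please",
    "home studio build videos are the only thing I watch now",
]
topics = {
    "home recording": ["home recording", "home studio"],
    "cooking competition": ["cooking"],
    "post production": ["post production", "editing"],
}
print(rank_show_ideas(comments, topics))
```

A real system would obviously weight sentiment, audience size and monetization potential, but the shape is the same: demand data in, ranked show ideas out.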

Production Line

Everything becomes a production line. It’s heading that way now, but the whole process should not be recreated anew for each show. In a greenfield model, employment is constant, with people moving from show to show as they come and go; moving from one creative grouping to another.

Everything is standardized: production gear, cameras, record formats, etc. Standard workflows, controlled by the studio.

Talent

Talent would be mostly staff – writers, production crew, actors, editors, audio post – paid decent salaries with good benefits. Everyone would get a decent salary under a flat salary structure (instead of the enormous salaries for some) but would also share in the studio profits. Everyone is motivated to make it work.

Talent (across the board) is nurtured in their craft, with advancement based on merit. (Implicit in advancement is the idea that people will leave, unless the studio keeps growing.)

Production

Put production in inexpensive facilities, either purpose built (long term) in inexpensive locales (low cost counties) or in excess facilities from a declining (declined) old industry.

I see a lot of standing sets and green screen, and frankly a lot of synthetic sets.

Again with standardized production gear, all matching grip and common set modules for set construction. Work on the model of Southwest, JetBlue and Virgin America: one standard service, in standardized aircraft with much simplified maintenance and costs for spares.

Standardizing on common equipment, workflows, formats and outputs would save production and post huge amounts of money. Equipping with modern gear that has great quality at affordable prices takes advantage of all the cost reduction of the last decade.

Production will require talent. We need it to “look and sound like Television” because that’s where a large market is at (if we’re in a greenfield remember). It will still need to be lit well; recorded well and finished to a high standard, but I would argue that the most profitable approach would be to go to the least expensive “good enough” solution. And by “good enough” I do mean that it has to be good, but maybe for this type of content, shooting with a Viper might be “more quality than we need to pay for”. But AVC-I or direct ProRes acquisition with a KiPro makes for high quality and efficient pipelines that maintain “good enough” quality.

Apply that concept across the range of production departments: good enough, but not luxury.

Promotion and Audience Building

I think there are a lot of lessons from the independent film producers who have learned how to build audiences, and it’s something I’ve presented on before. It will be more about building and nurturing fan bases and involving them in the process as much as possible.

Funding it all

Ah yes, the million dollar question. Or multi-billion dollar question if we’re talking an alternative to the current Television industry. Of course, I don’t have any definitive answer because, well frankly, there won’t be one. As was obvious at Distribution U, there are many avenues to funding a program:

  • some audiences will want to pay directly, and that’s a viable business model as I’ve demonstrated before, for even quite small audience sizes;
  • less expensive productions make it easier for one advertiser (a.k.a. brand in recent discussion here) to sponsor the whole show (Mark Pesce’s Hyperdistribution model)
  • use the show to promote merchandise, live performances, or other scarce good.

In one part of my mind I think a model like this could actually work. In fact, I’m sure some variation on it is part of what Jim Louderback is attempting with Revision3 and what Kip McClanahan is attempting with On Networks. I suspect that no one is going as radical as Demand Media, and I hope no one ever does.

Kip McClanahan
CEO, On Networks

I have to say I was horrified to read that Ryan Seacrest was getting $15 million a year to host American Idol. To host, not produce, not to book a studio, not to actually produce anything but to host. To read a teleprompter and walk without falling over.

I’ve never met Mr Seacrest and I have no personal animosity, but $15 million a year to host a talent show seems just wrong; way out of balance with anything real. It’s a 3x increase over the $5 million a year he has been getting for the same job.

That same amount of money would produce six episodes of Mad Men, including paying all the far more talented cast (hey, they can act), paying the crew, locations, editors, facilities and presumably a profit for the producers. All instead of paying one person to turn up.

I cannot believe that any one person brings that much value to a show. It just seems way out of balance with anything reasonable and human, and really it tells me why the whole industry needs to be made over anew.

Equally stomach-churning are the sums paid to the CEOs of the major media companies, even when the results they turn in are “disappointing” to say the least. Disney CEO Robert Iger earned $30.6 million last year while presiding over a 26% drop in profit at Disney. Where is the shareholder revolt? Why are they not demanding an $8 million cut in his salary package?

It’s not just Iger; the rest of the crowd of value-losing media company heads are all paid outrageous sums of money for the value they (don’t) bring to the companies they head.

Here’s my solution: set a limit on the maximum ratio between what the highest and lowest paid employees of a company can earn. You want to increase the CEO’s salary? Then everyone’s salary goes up to share in the (obviously great) results. Set the ratio at 100:1 if you like (if the lowest paid employee earns $30,000, the CEO tops out at $3 million), but set a ratio that cannot be broken.

Until there’s some sanity I’ll be putting my efforts into demolishing that industry so we can start afresh.

Over at Techdirt, Mike Masnick wrote an interesting article suggesting that copyright on “art or music” may be unconstitutional. Now, I don’t expect the Supreme Court to rule that way any time soon – there’s not even a case before them – but it did make me wonder what would be different if copyright didn’t exist on film, television, music, architecture and other creative arts.

I thoroughly recommend reading Mike’s article, but the gist of the argument is that the Constitution provides for a “Limited Period” (originally 14 years, not 50 years past the death of the author) for “authors” only (not descendants or corporate owners) “To promote the Progress of Science and useful Arts”. “Useful Arts” apparently meant the business of invention in the language of the day. There is no mention of almost anything in our current copyright system.

We wouldn’t have the RIAA suing its best customers. The RIAA, MPAA and their kind around the world would have to work out how to compete, which is simple: provide a good product at a fair price and provide it conveniently. Without the crutch of copyright to protect a dying business model (and a highly profitable one, so it’s understandable they don’t want to adjust to the new reality) they would have to compete.

After all, television has been giving its content away pretty much since day one. Of course others (advertisers) pay for the privilege of interrupting the program with something irrelevant, which is why I’d rather pay a fair amount for my ad free copies, thanks.

If there was no copyright, then digital copies would abound, and content creators would either have to add value to their official (paid) version; or bundle advertising so closely with the show that it doesn’t appear like advertising. (In fact I believe the future of advertising is branded media, but that’s a post for another day.)

Of course, it can be done. iTunes and Amazon’s music store sell music that is fairly readily available via various P2P mechanisms. Every one of the 4 Billion songs Apple has sold has been available free.

Perhaps content could be free after a period of time, and people will pay for immediacy. This is the strategy DirecTV hoped would bring them more customers: showing Friday Night Lights on DirecTV before its outing on NBC. (See my earlier article on how the numbers stack up for new media, on how that program is being funded and what a fair price would be for a viewer.)

People will pay for convenience and simplicity – both reasons why iTunes has been such a successful model, despite charging way too much for television and movie content.

There are dozens of ways that television, and new media production, could fund itself if there was the necessity and they couldn’t fall back on copyright. In fact in my “Making a living from new media” seminar, I outline 13 different ways that free media can lead to a decent middle class income.

If “Hollywood” wasn’t covered by copyright, how different would it be?

Last night I was in a conversation about a new type of touch-screen display that mounts on regular glass (I don’t know any more about it than that; I hope to get more information shortly and share it).

During the discussion I was reminded that in the earliest days of using NLEs (a Media 100 for me at that time) I had fantasies about being able to edit in a 3D display environment, where the clips would float in space or be grouped together in some logical order (these days I’d say based on metadata groupings) and the editor could simply move clips around, stack them and build the story along a virtual timeline, even compositing by stacking clips.

I never really developed the idea beyond that trip of the imagination, but it does make me wonder about some sort of surface like the one being proposed for regular glass, or maybe even a 30″ Cinema Display-sized screen, that was a full touch-screen surface supporting gestures and so on. Microsoft’s Surface would be close to the sort of experience I’m visualizing.

In thinking about it further I realized that the sort of work we’ve been doing with metadata would tie in nicely. The metadata would be used to group and regroup clips organizationally, but also to suggest story arcs or generally assist the editor.
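Here’s a rough sketch of the grouping half of that idea, using hypothetical clip records and field names: the same pile of clips regroups instantly depending on which metadata field the imagined spatial UI is asked to cluster on.

```python
from collections import defaultdict

def group_clips(clips, key):
    """Group clips by one metadata field, so a (hypothetical) spatial UI
    could lay each group out as its own cluster of thumbnails."""
    groups = defaultdict(list)
    for clip in clips:
        groups[clip.get(key, "ungrouped")].append(clip["name"])
    return dict(groups)

clips = [
    {"name": "A001_C003", "scene": "harbour", "person": "captain"},
    {"name": "A001_C007", "scene": "harbour", "person": "crew"},
    {"name": "A002_C001", "scene": "storm",   "person": "captain"},
]
print(group_clips(clips, "scene"))    # cluster by scene
print(group_clips(clips, "person"))   # the same clips, re-clustered by person
```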

It’s probably time for a new editing paradigm.

If not for a future version of FCP or Media Composer, perhaps, for iMovie?

June 2009

I think there’s a sixth type of metadata

When Dan Green interviewed me earlier in the week for Workflow Junkies, in part about the different types of metadata we’ve identified, Dan commented that he thought we’d get to “seven or eight” (from memory). I politely agreed but didn’t think there were going to be that many. I should have known better.

The “iPhoto disaster of May 09” is actually turning out to be good for my thinking! In earlier versions, iPhoto created a copy of the image whenever any adjustments were made, and the original was stored as well, which explains why my iPhoto folder was almost twice the size of the library as reported in iPhoto. iPhoto ’09 (and maybe ’08; I skipped a version) does things a little differently.

When I changed images while the processor was under load, the image came up in its original form and then – a second or so later – all the corrections I’d made would be applied. It was obvious that the original image was never changed. All my color balance, brightness, contrast and even touch up settings were being stored as metadata, not “real changes”.

The original image (or “essence” in the AAF/MXF world) is untouched but there is metadata as to how it should be displayed. Including, as I said, metadata on correcting every image blemish. (The touch up tool must be a CoreImage filter as well, who knew?)

So I’m thinking this is a different type of metadata from the five types previously identified. My first instinct was to call it Presentation Metadata: information on how to present the raw image. Greg (my partner) argued strongly that it should be Aesthetic Metadata, because it records decisions on how to present an image, clip or scene, but I was uncomfortable with the term. I was uncomfortable because there are instances of this type of metadata that are compulsory, rather than aesthetic.

Specifically, I was thinking about Raw images (like those from most digital cameras, including RED). Raw images really need a Color Lookup Table (CLUT) before they’re viewable at all. A raw Raw file is very unappealing to view. Since not all of this type of metadata is aesthetic I didn’t feel the title was a good fit.

Ultimately, after some discussion – yes, we really spend our evenings discussing metadata while the TV program we were nominally watching was in pause – we thought that Transform Metadata was the right name.

Specifically not “Transformative” Metadata, which would appear to be more grammatically correct, because Transformative has, to me, a connotation of the transform being completed, as when a color look is “baked” into the files, say after processing in Apple’s Color or out of Avid Symphony. Transform Metadata does not change the essence or create new essence media: the original is untouched and transformed on presentation.

Right now we’re a long way from being able to do all color correction, reframing and digital processing in real time as metadata on moving images as iPhoto does for still images, but in a very real sense an editing Project file is really Transform Metadata to be applied to the source media (a.k.a essence).
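A drastically simplified sketch of the principle (toy pixel values and made-up adjustment names; nothing like iPhoto’s real implementation): the essence is never written to, the corrections live in a metadata dictionary, and they are only applied when the image is presented.

```python
# Toy model of Transform Metadata: the stored "essence" is never modified;
# corrections live in a metadata dict and are applied only at display time.
essence = [(200, 180, 160), (90, 80, 70)]            # raw pixel values

transform_metadata = {"brightness": 1.1, "red_gain": 0.9}

def present(pixels, meta):
    """Return the corrected view of the pixels without touching the originals."""
    shown = []
    for r, g, b in pixels:
        r *= meta.get("brightness", 1.0) * meta.get("red_gain", 1.0)
        g *= meta.get("brightness", 1.0)
        b *= meta.get("brightness", 1.0)
        shown.append(tuple(min(255, round(c)) for c in (r, g, b)))
    return shown

print(present(essence, transform_metadata))   # the corrected presentation
print(essence)                                # the essence is untouched
```

Throw away the dictionary and you’re back to the pristine original; change a value and the “look” changes everywhere it’s applied, with no new media created.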

The same is very much true of Apple’s Motion. A Motion project is simply an XML file with the metadata as to how the images should be processed. But there’s something “magic” going on because, if you take that project file and change the suffix to .mov, it will open and play in any application that plays QuickTime movies. (This is how the Project file gets used in FCP as a Clip.) The QuickTime engine does its best to interpret the project file and render it on playback. A Motion Project file is Transform Metadata. (FWIW there is a Motion QuickTime Component installed that does the work of interpreting the Motion Project as a movie. Likewise a LiveType QuickTime Component does the same for that application’s Transform Metadata, a.k.a. project file!)

I think Dan might be right – there could well be seven or eight distinct types of metadata. It will be interesting to discover what they are.

June 2009

Why don’t I care if newspapers die?

I was once an avid reader of newspapers, a three-paper-a-day man: the local paper for local news, the capital city daily for national and international news, and the national financial daily for business news. I now read none, and think the whole industry has the stench of death about it – not financially (although it certainly does) but in the quality of the work, which is what sent me away.

Newspapers (and television news) are notoriously inaccurate. There are exceptions: occasionally a paper will do a great job of investigative reporting and team it with great writing, but this is not the norm. Most newspaper content consists of slightly rewritten press releases, information easily found elsewhere (movie start times, tides, weather, TV program guides, etc., all copied from the real source into the newspaper) and hastily written articles about events that are full of inaccuracies because the reporter hasn’t a clue about the content.

Do you think I’m judging too harshly? Consider this. Have you ever watched the TV news report, or read a newspaper article, of an event you were part of or participated in? Has that report been 100% accurate? I can honestly say that, of the dozen or so appearances I’ve made in newspaper or TV media, or those associated with other family business where I’ve been privy to the facts, not one report was 100% accurate. Not a single one.

So I have to assume that every article is written with the same sloppy adherence to the facts of the story.

The average newspaper adds very little value. Most of the content is not original reporting – between the previously-mentioned press releases and Associated Press and/or Reuters and fact-based content sourced from elsewhere there’s not much original, true news gathering.

The little there is is easily reproduced elsewhere. For example, local news site Pasadena News outsources the writing to Indian writers. If you’re only rewriting a press release, or reporting the outcome of local council meetings, which are placed online anyway, then the desk could be in Pasadena or Mumbai. Fact checking (if anyone actually does that) is an email or phone call away wherever you are in the world (as long as you’re prepared to deal with time zone issues).

Newspapers, in their current dying form, are not adding a whole lot of value. Instead it’s nostalgia that’s keeping them going – the nostalgia of lazy Sunday mornings with paper, family and coffee, not the delivery of well-researched original reporting.

If we have Associated Press – who have a very useful RSS feed to deliver relevant content directly to me – why do I need the LA Times to print it for me? If they added a local angle, maybe.

Journalism won’t die with newspapers. In fact, contrary to the opinion of some journalists, the blogosphere – the sheer number of people fact checking – has led to some real stories breaking. Remember the Dan Rather/George W Bush faked-papers scandal? Or the citizen reporter who videotaped (and shared) George Allen’s “macaca” moment, which lost him re-election in 2006? In many, many recent cases, citizen journalists have out-performed (in aggregate) the established media in uncovering stories.

So, I’ve gone from a three-a-day habit to a zero-newspaper life and am better informed about news than ever. I keep track of Australian news and am better informed than my Australian-resident mother. I scored very highly on the Pew Research “Test Your News IQ” quiz, with a better score than my newspaper-reading, TV-news-watching friends and associates.

I won’t be dancing on the graves of newspapers, but their failure to adapt and their high minded refusal to see the log in their own eye makes me indifferent to the failure of the whole industry. Let it be replaced with new forms of news-gathering where some accuracy might slip in.

See also: We need a Fifth Estate and Will “amateurs” save democracy from the “professionals”?

June 2009

What is the fifth type of metadata?

Right now I’m in the middle of updating and adding to my digital photo library by scanning in old photos, negatives and (eventually) slides. Of course, the photos aren’t in albums (too heavy to ship from Australia to the US) and there are no extensive notes on any of them, because “I’ll always remember these people and places!” Except I don’t remember a lot of the people, and getting particular events in order is tricky when they’re more than “a few” years old, or when they’re from before my time; a lot have been scanned in for my mother’s blog/journal.

Last time I wrote about the different types of metadata we had identified four types of metadata:

  • Source Metadata is stored in the file from the outset by the camera or capture software, such as in EXIF format. It is usually immutable.
  • Added Metadata is beyond the scope of the camera or capture software and has to come from a human. This is generally what we think about when we add log notes – people, place, etc.
  • Derived Metadata is calculated using a non-human external information source and includes location from GPS, facial recognition, or automatic transcription.
  • Inferred Metadata is metadata that can be assumed from other metadata without an external information source. It may be used to help obtain Added metadata.
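To make the distinctions concrete, here’s a toy sketch (my own hypothetical field names and values, not any real tool’s schema) of a single photo’s metadata with each field tagged by the type it belongs to:

```python
# Hypothetical example: each metadata field is tagged with the type it
# belongs to, so tools could treat immutable Source fields differently from
# human-entered Added fields or computed Derived/Inferred ones.
photo_metadata = {
    "camera_model": ("Source",   "Canon PowerShot"),
    "people":       ("Added",    ["Mum", "Uncle Bob"]),
    "location":     ("Derived",  "Sydney, AU (from GPS)"),
    "likely_event": ("Inferred", "Christmas 1984 (from date + people present)"),
}

for field, (kind, value) in photo_metadata.items():
    print(f"{kind:>8}: {field} = {value}")
```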

See the original post for a clearer distinction between the four types of metadata. Last night I realized there is at least one additional form of metadata, which I’ll call Analytical Metadata. (The other choice was Visually Obvious Invisible Metadata, but I thought that was confusing!)

Analytical metadata is encoded information in the picture about the picture, probably mostly related to people, places and context. The most obvious example is a series of photos without any event information. By analyzing who was wearing what clothes and correlating between shots, the images related to an event can be grouped together even without an overall group shot. Or there is only one shot that clearly identifies location but can be cross-correlated to the other pictures in the group by clothing.

Similarly a painting, picture, decoration or architectural element that appears in more than one shot can be used to identify the location for all the shots at that event. I’ve even used hair styles as a general time-period indicator, but that’s not a very fine-grained tool!  Heck, even the presence or absence of someone in a picture can identify a time period: that partner is in the picture so it must be between 1982 and 1987.
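Here’s a crude sketch of that cross-correlation, assuming the visual cues (the red jumper, the floral wallpaper) have already been noted by hand for each scan; photos that share any cue get grouped into the same event, even though no event was ever recorded.

```python
def group_by_shared_cues(photos):
    """Group photos into events by shared visual cues (a toy illustration of
    'Analytical Metadata'; a first-match pass, so it's deliberately crude)."""
    events = []   # each event is (set_of_cues, list_of_photo_names)
    for name, cues in photos.items():
        for event_cues, members in events:
            if event_cues & cues:          # any cue in common -> same event
                event_cues |= cues
                members.append(name)
                break
        else:
            events.append((set(cues), [name]))
    return [(sorted(c), m) for c, m in events]

photos = {
    "scan_012": {"red jumper", "beach"},
    "scan_013": {"red jumper", "blue car"},
    "scan_047": {"floral wallpaper"},
}
print(group_by_shared_cues(photos))
# [(['beach', 'blue car', 'red jumper'], ['scan_012', 'scan_013']),
#  (['floral wallpaper'], ['scan_047'])]
```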

I also discovered two more sources of metadata. One more source of Source Metadata is found on negatives, which are numbered, giving a clear indication of time sequence. (Of course digital cameras have this and more.) The other important source for this exercise has been a form of Added Metadata: notes on the back of the image! Fortunately Kodak Australia, for long periods, printed the month and year of processing on the back; rest assured that has been most helpful in trying to put my lifetime of photos into some sort of order. At the rate I’m going it will take me the last third of my life to organize the images from the first two thirds.

Another discovery: facial recognition in iPhoto ’09 is nowhere near as good as it seems in the demonstrations. That’s not surprising, because most facial recognition technology is still in its infancy. I also think it prefers the sharpness of digital images to scans of prints, but even with digital source material it seems to attempt a guess at one in five faces, and to be accurate about 30% of the time. It will get better, and it’s worth naming the identified faces and adding the ones that were missed to gain the ability to sort by person. It’s also worthwhile going through and deleting the false positives – faces recognized in the dots of newspapers or the patterns in wallpaper, etc. – so they don’t show up when it’s attempting to match faces.

Added June 2: Apparently we won’t be getting this type of metadata from computers any time soon!
