Categories
Business & Marketing Distribution Item of Interest

Why do I have two inconsistent positions about copyright?

Just lately I’ve been dealing with a content aggregation site (or two) that had articles from this blog listed in their articles directory. Worse still, the site is designed to distribute articles to other sites. I don’t mind the idea: if a writer wants wider distribution, then it probably makes sense to syndicate an article there rather than have it sit in obscurity.

I had to fight fairly hard to get my articles out of their system because I had not put them in that system and didn’t want the articles syndicated wildly. Now I do have some syndication organized (if you’re reading this on Toolfarm, thanks) but I don’t want this content distributed anywhere I haven’t directly authorized.

The articles were removed, but only after I re-served the DMCA takedown notice on the owner of the domain name, as the normal site admins were not acting in accordance with the provisions of a DMCA takedown notice. (I thought I’d have trouble when I realized, from the domain registration, that the company was actually in Israel, which isn’t covered by US copyright law! Fortunately they did the right thing.)

We were talking about this over dinner and I realized I had a double standard going on. Not necessarily a bad thing, but any internal inconsistency is always worth examining.

I was remarking that I am fairly certain there’s at least one school or college that’s using my HD Survival Handbook as a class text, which is not exactly in accordance with the single-user license of the normal purchase. (BTW, we’re always happy to do very attractive bulk pricing for anyone who wants to reuse the work in a school or commercial organization, as we did recently.) But the thing is, I wasn’t particularly upset by it. Sure, I would prefer that they made an arrangement with us for official distribution, but I didn’t have any proof that they were doing something wrong. There may be a way that just the teacher uses the work as a reference.

If I had actual proof put in my face – such as a student saying that the HD Survival Handbook was actually on a student-accessible server at her college – I would have to act. (In that case I sent a nice email to the original purchaser at that college stating what the student had said and he immediately made it right.) When I say “have to act” I actually mean it. Should an author not act on flagrant breach of the licensing conditions, there are circumstances where the author can lose the copyright exclusivity.

So I was struck by my apparent double standard. I am less worried about meticulously keeping the commercial writings only to those who purchased them than I am about these thoughts being spread widely. Partly that was because the instance with the aggregation site did not have link-backs to this site – the uploader had substituted links to their site, and the content was misused, with wrong tags and confusing descriptions. My name even appeared on an article I didn’t write! But it’s also because a lot of what I write here are the beginnings of my thinking about something, or they’re going to be (or have come from) commercial writings.

Mostly, I think, it’s because the commercial products were written to be distributed widely. Plus, if there are a whole class or two using my work as their textbook, I’m still being compensated with reputation building. I’m not unhappy at the thought that a whole generation of students will grow up thinking that I provide accurate, understandable and useful information. I figure that will lead to some compensation some day. The portion that does pay for the downloads – and I like to think that’s the majority – makes the project well and truly worthwhile, and frankly, I don’t think those students would have paid anyway! Whatever money a student has should be kept for the truly important things… 😉

Here, though, I’m writing as much to clear my thinking, or to have a record of something I’m fired up about, as anything. I don’t have advertising on the site and don’t expect a commercial return from it. I do hope that it’s reputation building, and when you reproduce this work without authorization, you’re taking my reputation and using it for your own purposes. And I don’t like that.

PS

What I consider highly appropriate is to make reference to a post, summarize the main points – perhaps quote a paragraph or two – and then link to the permalink for the article here. (Click on the article headline and the URL will be the permanent link.) That type of use is a compliment.

Categories
Interesting Technology

Why is QuickTime X like OS X?

During the discussion with Larry Jordan and Michael Horton, I posit that QuickTime X, like OS X before it, is a complex transition that necessarily takes many iterations to complete.

OS X 10.1 was missing even the most basic OS 9 features, but progressively we got all that was missing, and much, much more. QuickTime X is like that: we’ve got the basics of linear playback now and more will come over time as they rebuild/rewrite and refactor media creation and playback on OS X.

The interview’s only six minutes.

Philip Hodgetts on QuickTime X.

Categories
Apple Pro Apps Item of Interest Video Technology

Why did Blackmagic Design buy daVinci?

Of course, I don’t have any direct link into the mind of Grant Petty, founder of Blackmagic Design, and don’t know more about the purchase of daVinci than what Grant posted, but it’s such an interesting purchase that I can’t help but comment and guess.

Like so many of the industry’s giants of old, daVinci was losing money in the face of lower-priced competition (Apple Color) and a reliance on mostly-obsolete 2K-limited hardware. On the other hand, Resolve is software-only and resolution independent, running on a cluster of Linux machines connected with an InfiniBand high-speed data interconnect. daVinci also have Revival, although I don’t know anything about what advantages it brings.

Clearly, Grant thinks that the company has not been making the most of its opportunities, and that more focus on marketing and product development will once again bring the daVinci brand to prominence. (Assuming it ever lost it.)

However, I don’t expect we’ll see Blackmagic Design suddenly want to start competing with Apple Color. I don’t think that’s the market and Grant himself seems to rule out that direction:

DaVinci Resolve is unique because it uses multiple linux computers linked together with InfiniBand connections and multiple GPU cards so you get the real time performance advantage it has. I donʼt think that can be lowered in price much, however over the next few years as technology advances this might happen a little. However, DaVinci is different to a DeckLink card because itʼs a high performance computing based tool. Our focus will really be on adding more features. Thatʼs what we want, and I guess others would too.

Possibly, some time in the future, a network of multiple Linux machines might be replaced by optimized code on some future 8+ core Mac with an awesome graphics card and an application written with Grand Central Dispatch and OpenCL in mind. But don’t hold your breath! Combined CPU+GPU power has to increase a lot to replace multiple machines, and the market is not that big.
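For a sense of how per-frame work would fan out across cores on such a machine, here’s a minimal Python sketch. It stands in for what Grand Central Dispatch would do natively; the “grade” applied to each frame is a made-up placeholder, not real color science:

```python
from concurrent.futures import ThreadPoolExecutor

def grade_frame(frame):
    # Placeholder "color grade": lift every pixel value, clipped at white.
    return [min(255, p + 10) for p in frame]

def grade_clip(frames, workers=8):
    # Fan the frames out to a pool of workers, the way GCD queues
    # would spread independent work items across an 8+ core machine.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(grade_frame, frames))
```

A real implementation would push the per-pixel math to the GPU via OpenCL; the pool here only illustrates the dispatch pattern of many independent frames processed concurrently.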

I think the move will allow daVinci to continue developing their modern products and to reposition the company (to be operated independently of BMD) for the mid-size post house: those that have become dissatisfied with Apple Color but who would not have purchased a full daVinci hardware/software package. If the price could be, say, $60K instead of $300K (or more), then that has a really good chance of reviving the brand and – in that inevitable trend – making higher quality available at a lower price. That has always been Grant Petty’s goal, so this seems consistent.

Categories
The Business of Production Video Technology

Why might large post houses be heading for the elephant graveyard?

My friend James Gardiner wrote an interesting post, “Are large Post Houses a sunset industry?”, and it set me thinking. Now, James is writing from an Australian perspective, and “large post house” and “boutique post house” imply quite different sizes there than they do in the US. (For example, Alpha Dogs in Burbank bill themselves as a “boutique” post house, but in Sydney or Melbourne they’d be one of the larger post houses.)

In general principle he’s right. The economics of the large post facilities (really factories) of the size of IVC, FotoKem and Ascent Media’s various facilities are changing. They probably always have been. And certainly there are signs that the very large post-focused facilities in New York and Los Angeles are threatened. Long-term Burbank post factory Matchframe sold a majority stake for just $300,000 (mostly because of long-term debt, it is presumed). The costs of maintaining the “heavy iron” of a big post facility can be millions a year.

In general principle I agree with James: these large facilities are probably a sunset industry. But he identified one point that I wanted to expand on.

What a big post house bring to the table is more then just services, they bring know how and knowledge.  You KNOW it is going to work.

That alone is the reason that there will (almost certainly) be facilities like these big post factories, at least in LA and NY. These facilities are large enough to be able to experiment and invest in discovering the best workflows (as, indeed, do the people at Alpha Dogs et al.) and technologies.

But knowledge gets shared. This is one of the absolutely best things about the current Internet Era: knowledge is freely shared in ways it never could be before.

Look at RED workflows. The RED Digital Cinema camera is a big step forward in performance-for-price and a new class of digital cinema camera. When it was first released, the tools and workflow were completely unexplored. None of the major NLE companies had native support for the new wavelet codec, and working between NLE and color correction caused nightmares.

Two years on, and there are established “best practice” workflows across Final Cut Pro, Media Composer and Premiere Pro. Pretty much anyone who does a little research can find a workflow that’s tested. Where did the posts you find when you do that Internet search come from? People who have solved a problem, sharing the solution with others who have the same problem.

Frankly, this information sharing is what made my reputation. As a very early adopter of NLE (specifically a very early adopter of Media 100) I ran into problems earlier than those who purchased later. I also discovered email groups in early 1997 and benefited from the shared experience of the Media 100 Email List: fellow travelers dealing with NLE in the mid-1990s. (All digital for more than a decade now.)

I don’t know what form the future post-house/factory will take, but what will survive are the “centers of knowledge”, because ultimately that’s more important than expensive but infrequent access to high-priced technology. The latter will continually get cheaper, and people will find smarter, faster ways to do things that ultimately become best practice and the “norm” again.

I’d be remiss if I didn’t point out two of our own tools that can give an FCP facility an edge. Sync-N-Link synchronizes dual-system video and audio in minutes rather than hours, or, if you’re working with an edited Sequence, replaces camera audio with multi-track audio in hours instead of weeks. Sync-N-Link is already being used across a lot of Network and Cable series.

Producers have been printing out EDLs and trying to match them to a spreadsheet to report clip usage or music usage: a tedious task for sure, but one that can be automated with Sequence Clip Reporter, which just takes the pain out of creating a video, audio or combined report, including a reel-by-reel report if that’s what you need.

Categories
Apple Interesting Technology Video Technology

What is QuickTime X?

With the release of Snow Leopard (OS X 10.6) this week, we finally get to see QuickTime X.

Simply put, QuickTime X is, as predicted, a simplified media player and simplified architecture optimized for playback of linear video streams. Most of what made QuickTime interesting to interactive authors a few years back is not present in QuickTime X.

We gain some new features: 2.2 gamma, screen capture and easy publishing to major online video sharing sites. Screen capture is a nice addition, and easy sharing probably would have been predictable if we’d seen Final Cut Pro 7 earlier.

The 2.2 gamma will no doubt take some time to get full adoption, but at least it provides a way for us to add or change a color profile. Files with color profiles automatically adjust display to look correct on all screens. (At least, that’s the theory.) Within Final Cut Studio it seems that correct gamma will be maintained *if* conversions are done with Compressor and not QuickTime 7’s Pro Player.
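The gamma arithmetic itself is simple. Here’s a sketch of what a profile-aware conversion does when moving a level stored for the old Mac 1.8 gamma to the new 2.2 standard (the function is illustrative; real color management also handles primaries and transfer-curve details):

```python
def reencode_gamma(level, src_gamma, dst_gamma):
    # Decode the stored level (0.0-1.0) to linear light, then
    # re-encode it for a display with a different gamma.
    linear = level ** src_gamma
    return linear ** (1.0 / dst_gamma)

# A 50% midtone stored for a 1.8-gamma display re-encodes to about
# 0.567 for a 2.2-gamma display. Skip the conversion and the same
# stored value displays darker than intended.
mid = reencode_gamma(0.5, 1.8, 2.2)
```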

Chapter display has changed from a pop-up text list to thumbnail images. Better for consumer focused movies; less good for professionals.

Fortunately, it’s not an either/or. You can choose to install QuickTime 7.6 in addition to QuickTime X. If you try to access a movie that requires QT 7 features, you’ll be prompted to install QT 7 (aka “the real QuickTime!”). If you want to make sure it’s installed, Apple have instructions on installing it.

So that’s the story of QuickTime X – a simple, consumer-focused player with a modern-looking interface, just as I predicted a little over a year ago.

Added 8/31 Just got this off a QT Apple email list. It’s not an official word from Apple but I think it sums it up well:

Quicktime X at this time isn’t a replacement to Quicktime 7, just allows faster multi-threaded playback of some of the older codecs.

Added 9/1 Ars Technica has a deep article on the difference between QT X and QT 7, and how QTKit negotiates between them, that confirms I got my “educated guesses” right and provides more depth on how Apple achieves this.

Categories
Business & Marketing Random Thought

Where is the value in $15 million a year for a spokesmodel?

I have to say I was horrified to read that Ryan Seacrest was getting $15 million a year to host American Idol. To host, not produce, not to book a studio, not to actually produce anything but to host. To read a teleprompter and walk without falling over.

I’ve never met Mr Seacrest and I have no personal animosity, but $15 million a year to host a talent show seems just wrong. Way out of balance with anything real. This is a 3x increase in salary over what he’s been getting – $5 million a year – for the same job.

That same amount of money would produce six episodes of Mad Men including paying all the far more talented cast (hey, they can act); paying the crew; locations; editors; facilities and presumably profit for the producers. All instead of paying one person to turn up.

I cannot believe that any one person brings that much value to a show. It just seems way out of balance with anything reasonable and human and, really, tells me why the whole industry needs to be made over anew.

Equally stomach-churning are the sums paid to the CEOs of the major media companies, even when the results they turn in are “disappointing”, to say the least. Disney CEO Robert Iger earned $30.6 million last year while presiding over a 26% drop in profit at Disney. Where is the shareholder revolt? Why are they not demanding an $8 million drop in his salary package?

It’s not just Iger; the rest of the crowd of value-losing media company heads are all paid outrageous sums of money for the value they (don’t) bring to the companies they head.

Here’s my solution: set a limit on the maximum ratio between what the highest- and lowest-paid employees of a company can earn. If you want to increase the CEO’s salary, then everyone’s salary goes up to share in the (obviously great) results. Set the ratio at 100:1 if you like, but set a ratio that cannot be broken.
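To make the rule concrete, here’s a tiny sketch of how a hard cap would bind. The 100:1 figure is the one suggested above; the dollar amounts in the comments are made up for illustration:

```python
def max_top_pay(lowest_salary, ratio=100):
    # The most the top earner can make under the cap.
    return lowest_salary * ratio

def required_floor(top_pay, ratio=100):
    # The minimum everyone must earn for a given top salary.
    return top_pay / ratio

# If the lowest-paid employee earns $30,000, the CEO tops out at $3M.
# Conversely, a $30.6M package would force the pay floor up to
# $306,000 for every employee in the company.
```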

Until there’s some sanity I’ll be putting my efforts into demolishing that industry to start over afresh.

Categories
Interesting Technology Metadata

What is new from Intelligent Assistance?

Sorry about the little hiatus in posts. It’s certainly not because I’ve got nothing I want to talk about! (Ryan Seacrest’s $15 million deal for American Idol, and why Robert Iger’s outrageous salary doesn’t go down when Disney’s profit drops 26% – but those will wait for later today or tomorrow.)

The pause has been caused by a couple of things, number one of which is that this week (and for the next two) I’m looking after myself. Partner Greg is in Australia for a visa renewal, and I’m once again realizing how much he does to make our lives easier. (Mine, particularly.)

Also, we’ve been releasing new software, updating older products and revising earlier books. In fact we’ve been doing so much that I can’t announce stuff in press releases yet!

About a month back I finished completely revising Simple Encoding Recipes for the Web, 2009 edition. Anyone who purchased in 2009 should have received a download link. Announcements to everyone else are coming, or you can buy the update for $4.95. (It’s a complete rewrite.)

Last week the revision of The HD Survival Handbook 2009–2010 was finished and, again, those who purchased in 2009 will have received an email with an update link. All other previous purchasers will have received a $4.75 upgrade offer. It’s been about 30% rewritten, with almost an additional 20 pages, so the upgrade price represents the value that’s gone into it. The “upgrade” is the full new version, not changed pages. Also new this year is Avid support – codecs, hardware and workflow. Given that it’s now a 233-page US Letter book, it’s a huge project to revise. So much has changed in a year.

In between, Greg’s been working hard to release an update of First Cuts to First Cuts Studio by adding in the functionality of one of our new applications, exceLogger. Have I mentioned we love customer feedback? It’s made Sync-N-Link a much stronger product. Naturally we want the same feedback from customers of our other products. Good, bad or feature request, all feedback is welcomed. (Begged for!) exceLogger was a feature request for First Cuts for Final Cut Pro, and is available as a stand-alone application for those who just want to log in Excel but merge with captured media in Final Cut Pro.

BTW, this now makes First Cuts Studio great value: at $295 it includes Finisher (US$149) and exceLogger (US$69) – so the auto-edit functionality of First Cuts is just $77!

Greg also developed two additional applications that fit perfectly with our metadata focus. miniME (Metadata Explorer) came about when we discovered (just four years after Apple told us!) that the QuickTime metadata from IT-based digital video sources (non-tape) is preserved in FCP but only visible in exported XML. So, Greg wrote me a simple tool to view the hidden metadata and export it to an Excel spreadsheet. (That functionality is free in the demo version.) The paid version lets you remap that metadata into visible fields in Final Cut Pro.
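“Viewing the hidden metadata” amounts to walking the exported XML for key/value pairs. Here’s a minimal Python sketch of that idea; the fragment below is a simplified, hypothetical stand-in for the real exported-XML structure (the element names and metadata keys are illustrative, not the actual xmeml schema):

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment of the kind of exported XML in which
# source metadata travels. Not the exact FCP schema.
XMEML = """
<clip id="clip-1">
  <metadata>
    <item><key>com.apple.quicktime.make</key><value>Panasonic</value></item>
    <item><key>com.apple.quicktime.model</key><value>AG-HVX200</value></item>
  </metadata>
</clip>
"""

def hidden_metadata(xml_text):
    # Collect every key/value metadata item into a dict,
    # ready to dump to a spreadsheet row.
    root = ET.fromstring(xml_text)
    return {item.findtext("key"): item.findtext("value")
            for item in root.iter("item")}
```

From a dict like this, writing an Excel-readable CSV row per clip is a one-liner, which is essentially the free half of what a viewer tool needs to do.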

Finally, the night we demonstrated miniME and exceLogger, a friend of mine again suggested an idea for software that would report the clips used in a Sequence – video or audio – as he has to provide reports to his clients; it’s equally useful for music reports. Greg worked on it for a while, and this week we released Sequence Clip Reporter. (Yeah, we tried to find a better name, but that one’s descriptive and stuck.)

Now, there’s a lot of work that goes into writing software. There’s the work on the actual functions of the software, but then there are questions about the interface and how functions should work. Then there’s software-update support to be added, serial number support to be added, and feedback mechanisms to be added. All beyond the actual functionality.

Me, I get to design a new logo for each piece of software, write website and postcard copy, and write a press release and send it out. Plus, Help files need to be written so people can actually use the software. So, around any new software there’s a lot of work that doesn’t actually involve much software writing!

And that’s why posting has been sparse.

Categories
Business & Marketing Distribution Random Thought

What if there was no copyright on “music and the arts”?

Over at Techdirt, Mike Masnick wrote an interesting article suggesting that copyright on “art or music” may be unconstitutional. Now, I don’t expect the Supreme Court to rule that way any time soon – there’s not even a case before them – but it did make me wonder what would be different if copyright didn’t exist on film, television, music, architecture and other creative arts.

I thoroughly recommend reading Mike’s article, but the gist of the argument is that the Constitution provides for a “Limited Period” (originally 14 years, not 50 years past the death of the author) for “authors” (only; no descendants or corporate owners) “To promote the Progress of Science and useful Arts” – “useful Arts” apparently being the business of invention in the language of the day. Almost none of our current copyright system gets a mention.

We wouldn’t have the RIAA suing its best customers. The RIAA, MPAA and their kind around the world would have to work out how to compete, which is simple: provide a good product at a fair price and provide it conveniently. Without the crutch of copyright to protect a dying business model (and a highly profitable one, so it’s understandable they don’t want to adjust to the new reality) they would have to compete.

After all, television has been giving its content away pretty much since day one. Of course others (advertisers) pay for the privilege of interrupting the program with something irrelevant, which is why I’d rather pay a fair amount for my ad free copies, thanks.

If there was no copyright, then digital copies would abound, and content creators would either have to add value to their official (paid) version; or bundle advertising so closely with the show that it doesn’t appear like advertising. (In fact I believe the future of advertising is branded media, but that’s a post for another day.)

Of course, it can be done. iTunes and Amazon’s music store sell music that is fairly readily available via various P2P mechanisms. Every one of the 4 billion songs Apple has sold has been available free.

Perhaps content could be free after a period of time, and people will pay for immediacy. This is the strategy DirecTV hoped would win them more customers, by showing Friday Night Lights on DirecTV before its outing on NBC. (See my earlier article on how the numbers stack up for new media, on how that program is being funded and what a fair price would be for a viewer.)

People will pay for convenience and simplicity – both reasons why iTunes has been such a successful model, despite charging way too much for television and movie content.

There are dozens of ways that television, and new media production, could fund itself if there was the necessity and they couldn’t fall back on copyright. In fact in my “Making a living from new media” seminar, I outline 13 different ways that free media can lead to a decent middle class income.

If “Hollywood” wasn’t covered by copyright, how different would it be?

Categories
Apple Pro Apps

How is Shake not Motion?

With the apparent demise of Shake – really nothing real now, as it seems to have been withdrawn from sale – Apple appears to be redirecting enquiries to the Final Cut Studio 3 pages, suggesting that they consider Motion a substitute for Shake. I hope there’s something else coming (“Phenomenal”, anyone?) because Motion is not a substitute.

Now, before I get into the reasons why they’re not interchangeable, let me say that it might make perfect business sense to drop Shake and not replace it. Shake, when Apple purchased it, had about 200 customers. That number has obviously grown dramatically but I’d be surprised if there were 10,000 true users: people who use Shake as the high-end compositing tool it was designed to be. It was also obvious that Shake, as it was, wasn’t going to be able to move forward in any serious way: no way to hook into GPU power or other such lush goodness.

Creating a replacement from scratch – all new, modern code – is an expensive operation. For a company like Apple, probably in the tens of millions of dollars to not only create the application but to test it internally (the Motion team, at time of launch, had as many QA people as software engineers), put the marketing plan into practice, run launch events, seminars, create training resources, etc. At Apple’s level, software is an expensive business.

The market for high end compositing software is small, and in the time Shake hasn’t been developed competitors have been significantly upgraded and taken market share from Shake. Maybe the decision was made to simply take what they could from Shake and roll it into Motion.

But Motion is not now, nor ever will be, a replacement for Shake. Motion is a great motion graphics tool with compositing capability. Shake is a compositing tool with some motion graphics capability. You see the problem.

Motion is an excellent motion graphics tool for video editors. It is designed to make it relatively easy for non-experts to create some fabulous-looking motion graphics. Shake, OTOH, was for those individuals who were trying to track a head shot against green screen onto a body, while putting the whole body into a scene generated in 3D and adding other 3D characters.

This would be a nightmare to composite in Motion, because it’s not what Motion was designed for.

So, while it makes perfect sense to kill Shake – it was old and needed updating, and maybe updating doesn’t make economic or marketing sense – it doesn’t make sense to pretend that Motion is a suitable replacement.

I suspect that the original purchase of Shake was more for the marketing benefit of being associated with Tentpole movies rather than the income from software sales. Apple doesn’t need that so much anymore (and that’s a good thing).

I’d still like to see what the Nothing Real team would do recreating the application from the ground up with modern technologies, but I suspect Shake will never be anything real in the future.

Categories
Business & Marketing Media Consumption

How has technology changed news reporting?

I’ve been thinking a lot over the last couple of months about news. In fact, somewhere within me is brewing a book on the way the Internet and technology have changed news, so when the Digital Production BuZZ asked me to comment on the subject this week, it forced me to put some of the thoughts into a coherent form. Hopefully last night’s interview (my segment starts 20 minutes in) was coherent, but I’d like to share those thoughts with you here.

I think most people are aware that the newspaper industry, in particular, is in trouble. The Internet and modern technology have changed the way we get and consume news. They’ve also changed the way the news itself is gathered.

There are several ways that the Internet and technology have changed news, and I’m sure my thoughts here will only skim the surface. First, a little history. Back in the days PI (Pre-Internet) – really just 15 years ago – news was hard to come by. We didn’t get information internationally, or even nationally, without the newspaper and, to a lesser degree, radio and television – but mostly the newspaper. The entire contents of an hour-long evening news bulletin would not take up the space of the front page of most newspapers of record, so it was to newspapers we looked for local, national and international news.

I used to be a 3-paper-a-day man back in Australia: the local newspaper for local news; the state-capital-based newspaper of record; and the national financial news for, well, national financial news. (I was a Fellow of the Australian Institute of Company Directors in those days, and had a keen interest in such things.)

I haven’t read a newspaper on a regular basis in 10 or more years! These days I get my news via RSS into an aggregator. My general (local, national, and international) news comes from eight major sources: AP, LA Times, Wall Street Journal, Washington Post, NY Times, CNET, Sydney Morning Herald and Yahoo Technology News across two countries. But I’m only interested in a fraction of what they report.

But these are just eight of the nearly 300 RSS feeds that feed me the news I’m really interested in. No newspaper would ever be likely to give me that personalized look at the world as it evolves. Plus, I don’t have to wait 24 hours to get “aged news” (as Jason Jones put it on The Daily Show).
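The mechanics of that personalized filtering are simple enough to sketch. Assuming a standard RSS 2.0 feed (the feed below is invented for illustration), an aggregator parses the items and keeps only the headlines that match your interests:

```python
import xml.etree.ElementTree as ET

# An invented RSS 2.0 feed standing in for one of ~300 subscriptions.
RSS = """
<rss version="2.0"><channel>
  <title>Example feed</title>
  <item><title>QuickTime X ships</title><link>http://example.com/qtx</link></item>
  <item><title>Local bake sale</title><link>http://example.com/bake</link></item>
</channel></rss>
"""

def interesting(feed_xml, keywords):
    # Return (title, link) pairs whose headlines match any keyword --
    # the per-feed filtering that makes a personalized news view possible.
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")
            if any(k.lower() in item.findtext("title").lower()
                   for k in keywords)]
```

Run across every subscribed feed on a schedule, that small filter is the difference between a newspaper’s one-size-fits-all front page and a reading list built around exactly what interests you.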

Now, back PI, we needed the same AP article reproduced in the local paper in each market because that’s how we got the news. These days we only need the source – the original source, which is rarely a newspaper or AP – and a link. It annoys me that the same story appears 20 times or more in one set of news feeds, duplicated from the same AP article and rarely with any editorial influence or rewriting.

In fact, I think you’ll find a good portion of most papers are simple rewrites of press releases or AP stories, with very little real reporting being done at all.

Blog aggregators like the Huffington Post – and, to an increasing degree, AOL, which has more than doubled its number of reporters in the last year by hiring those discarded by mainstream media – are creating their own reporting and commentary networks. News is coming directly from the source. We don’t need an AP or NYT outpost in Iran during an uprising. We get news from Iran from The Tehran Bureau or Global Voices Online (a blog aggregator who knows which bloggers to trust).

As an indication of how much the news industry has changed, The Tehran Bureau, published by volunteers out of a small suburban house in Massachusetts, has had very accurate and detailed information about what is going on in Iran while the mainstream media have been sidelined by the officials in the country and unable to report. Their information was being quoted and “reported” by mainstream media who can’t get coverage from their traditional channels.

None of this could happen without the Internet infrastructure and specific technologies that sit on top of it, and sometimes link into other technologies like the cellular phone network’s SMS system.

It was a blogger who brought down Dan Rather, by revealing that the papers purporting to reveal irregularities with President George W. Bush’s service in the Air National Guard were fake. There are dozens of such incidents where bloggers, with time and the Internet at their disposal, have broken stories with more accuracy and greater detail than the mainstream media. (Frankly, the accuracy rate of mainstream media is pretty appalling.)

It was a cell phone recording that affected the balance of power in the Senate in the 2006 mid-term elections, when a Democratic staffer recorded George Allen’s infamous “macaca” comment that, arguably, lost him his almost certain return to the Senate.

It was the cell phone video of “Neda” being shot in the civil disobedience after the Iranian election that helped inspire more people to come out in opposition to the government of the country.

With millions and millions of cell phones in consumers’ hands, it’s now more likely than not that a camera will be at the scene of a major incident. The first picture of Flight 1549 in the Hudson was from Janis Krums’ iPhone, on the ferry that was first on the scene to pick up the passengers. Naturally he shared the photo via Twitter. (It was 34 minutes later that MSNBC interviewed him.)

Twitter was first to break the news, again. People have sent tweets from the midst of the news, including instances where people have tweeted their involvement in a disaster – like Mike Wilson, a passenger on board Continental Flight 1404, which skidded off the runway at Denver airport and burst into flames. Mike tweeted right after he escaped via the plane’s emergency chutes and posted a picture of the foam-covered aircraft long before any traditional media outlet was even aware of the accident.

When a Turkish Airlines Boeing landed short and broke apart at Amsterdam’s Schiphol, the first word to the public was a tweet, sent out by a fellow who lives near the airport. (FlightGlobal.com)

Twitter has become a major news source, such that there are now sites, like BreakingTweets.com, dedicated to breaking news on Twitter, in addition to Twitter’s own Breaking News page. If you want the up-to-the-minute news, you follow Twitter, it seems.

Even if newspapers and the Associated Press ultimately fail, as they are most likely to, I still see a bright future for journalism, just not in the traditional places.

There is one more aspect to “news and the Internet” and that’s the social one. Many of the sources I subscribe to in my RSS reader are bloggers who write in the space. I may miss an article or resource, but Scott Simmons (on his own site or at ProVideoCoalition.com), Oliver Peters, Larry Jordan, Shane Ross, John Chapell or Norm Hollyn are there to find the things I miss and bring them to my attention. (Of course, usually with some insightful writing in between.)

I don’t have to read everything or be everywhere because the social networks I participate in create a new network far more valuable to me than the best efforts of the Associated Press!