EditorUX


Note: this is still a work in progress

The singular goal of Novacut is to help artists become profitable... without giving up control, ownership, or compromising their creative vision.

Final Cut Pro is the current standard. So our design goal is for artists to be measurably more profitable using Novacut than when using Final Cut Pro.

This UX design focuses on three profitability metrics:

  1. Person hours - if artists can produce the same end result with equal quality but in fewer total person hours, they are more profitable

  2. Risk - if we can eliminate avenues where likely user error will cause loss of work or simply waste time, artists are more profitable

  3. Cost of collaboration - if great artists can collaborate with lower overhead, or collaborate when they couldn't otherwise, they are more profitable

Why is it important for artists to be profitable? Because profit gives artists the freedom to tell the story they want to tell.

This UX design is dedicated to the storytellers

novacut-avatar-192.png

Novacut logo design by IZO

Intro

Surely we can't do better than Apple, can we? Well, the proof will be in the pudding, and we're not there yet. But there's one place we're already doing better than Apple: listening to what artists need.

If Apple had been listening to artists, Final Cut Pro X would be a collaborative editor that would allow artists to easily work in geographically distributed teams.

We've been working on Novacut for almost a year now. We took on the editor itself because there was so clearly a need for a distributed video editor (our term for what Arin Crumley calls a collaborative editor). Because the architecture of such an editor has to be carefully designed for distributed editing from the start, retrofitting it onto an existing editor looked very difficult, which gave us an advantage.

However, by Apple's own account, Final Cut Pro X was a total rewrite from scratch... a perfect opportunity to re-architect for distributed editing. And yet they didn't.

Over the last year we've spent a lot of time talking to artists and tuning our priorities. This UX design makes that conversation formal and publicly documented. But the conversation is just starting... we need feedback from many more artists, about many more details... always. Truly great creative tools cannot be designed in secret. They must be designed in open dialog with creators.

Special thanks

Special thanks to the many artists and designers who have given us help, feedback, and inspiration:

We'd also like to thank the entire Ubuntu community, and our friends at Canonical, especially the Ubuntu One team and the Canonical design team.

And an extra special thanks to Ian 'IZO' Cylkowski for the stunning brand and identity design he did for Novacut, and to Harno Ranaivo for the excellent icon designs he did for Novacut.

We have a long way to go still, but we couldn't have made it this far without all of you! Thank you!

A narrow focus

The Novacut vision is bold and a bit wild, certainly a lot of work. But we're also rather realistic because we have a laser focus on helping those artists who right now are leveraging HDSLR cameras to tell wonderful stories, with impeccable production quality, and are doing so on shoestring budgets you wouldn't believe.

These are artists we can help. Because there is no film cost and HDSLR cameras are inexpensive, the biggest production cost is truly people's time, and we're quite confident Novacut will allow these artists to tell the same story, with the same impeccable production quality, in less time. We're building the user experience from scratch specifically for these artists, specifically for these cameras.

We're okay if at first Novacut isn't a good fit for all artists, because the more focused we are, the quicker we can make it truly fantastic for at least some artists. And we are narrowly focused on:

  • Artists who, on a lean budget, are currently and successfully producing serials, shorts, or movies, who are distributing online, who have built a loyal fan base, and who are generating revenue directly from this fan base (crowdfunding, etc). These artists are living, breathing examples of what's needed to win right now. So the features and workflows they use are what we care about. Other features are off the table because apparently they aren't needed to win, or might even hurt your chances of winning.
  • Productions shot on Canon HDSLR cameras, nothing else. Fortunately, this doesn't exclude many of the above artists, because Canon HDSLR cameras are a big reason there are so many such artists in the first place. This will get broader over time, but we're following the artists, and Canon HDSLRs are statistically what they're using right now. We'll follow artists to their next cameras too.
  • Productions that are completely live action and don't require compositing or special effects. We know this does exclude some artists for the time being, for which we apologize. We'll get to these features as soon as we can, but they need the foundations to be complete first anyway. For special effects (and perhaps compositing), integrating with Blender seems quite an attractive proposition.

Equipment recipe

novacut-equipment.jpg

For now, the Novacut editor only officially supports productions using this specific equipment recipe. Here are the ingredients:

  • You need one or more Canon HDSLR cameras. Choose among the 5D Mark II, 7D, 1D Mark IV, 60D, T2i, T3i, and 5D Mark III.

  • You'll record in-camera audio, but only use it for syncing purposes. Auto-leveling will work fine, but of course you can manually set the levels on cameras for which this is possible.
  • For more reliable audio sync, you might use a hot-shoe mic like the Sennheiser MKE 400 or Rode VideoMic. We'll do our very best to sync reliably even if you use the humble in-camera mic.

  • For your production audio, you'll use a high-quality portable digital recorder like a Zoom H4n or similar. The essential criterion is that it records WAV files onto a removable SD or CF card.

  • Ideally you'll use shotgun mics like the Sennheiser ME66 + K6 or similar. Ideally you'll have a boom operator for each speaking actor in a scene, and will record each actor on their own audio channel. Note that there are a few automation features (aimed at quick on-set edits) that will only work if you record each actor on their own channel.

As we're using GStreamer as our multimedia backend, Novacut may work perfectly fine with other cameras. But we can't guarantee it, nor spare development effort on other targets until Novacut is measurably superior by the metrics we care about... for at least the narrow equipment recipe above.

File management

Novacut is built for collaboration, even if you're collaborating with someone across the world. And you can't do collaborative editing unless you have a sane way to move media files around between different computers, removable drives, local clusters, and the cloud.

Our solution to this problem is the Distributed Media Library (aka dmedia). We've spent the last year developing dmedia, and it's nearing what we consider production ready.

A key goal with dmedia was to automate file management tasks that are time consuming and error prone, yet accomplish nothing creative. Some examples of tasks that are completely automated (a rough sketch follows the list):

  • Backups (specifically, ensuring at least, say, 3 known good copies exist)
  • Verifying file integrity (to detect drive failures or other problems)
  • Retrieving needed files from other local computers or cloud
  • Swapping files with other locations to make room for your active project files
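
For a concrete flavor of what "known good copies" and integrity checking mean in practice, here is a minimal Python sketch. The store names, record layout, and helper functions are hypothetical illustrations, not dmedia's actual API:

```python
import hashlib
from pathlib import Path

REQUIRED_COPIES = 3  # e.g. "at least 3 known good copies"

def file_hash(path, chunk=8 * 1024 * 1024):
    """Hash a file in chunks so even huge video files use little memory."""
    h = hashlib.sha1()
    with open(path, 'rb') as fp:
        while True:
            data = fp.read(chunk)
            if not data:
                break
            h.update(data)
    return h.hexdigest()

def verify(path, expected):
    """Detect silent corruption (drive failures, bad copies, bit rot)."""
    return Path(path).is_file() and file_hash(path) == expected

def plan_backups(record, all_stores):
    """Return which stores still need a copy so that at least
    REQUIRED_COPIES verified copies of this file exist.
    `record` is a hypothetical dict like:
      {'hash': '...', 'stored': {'laptop': '/path', 'usb-drive-A': '/path'}}
    """
    good = [store for store, path in record['stored'].items()
            if verify(path, record['hash'])]
    missing = max(REQUIRED_COPIES - len(good), 0)
    candidates = [s for s in all_stores if s not in good]
    return candidates[:missing]
```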

Note that dmedia is not yet recommended for anyone other than adventurous developers.

Import workflow

Metrics: Person hours, Risk

The import process (the rough equivalent of Log and Transfer in FCP) is handled by dmedia rather than by the Novacut editor. You can do imports without the editor being open, and if you're editing, imports don't interrupt your editing.

The dmedia import UX was carefully designed for high-volume pro use cases where importing is extremely repetitive, yet the user's intent is always exactly the same. Any options presented to the user become very likely places for the user to make an error. This is also a horrible place to have the user pick specific files to import, because if they miss one, that file can easily be lost forever.

As such, imports start automatically simply by inserting cards into card readers. No user action is required, nor even possible. All files are always imported (dmedia detects duplicates, so it won't make a mess), leaving no chance of mistakenly skipping a file and losing it forever.

Check out this video of the import workflow in action.

Our importer extracts EXIF metadata from the thumbnail file corresponding to each HDSLR video. This means we know the initial camera settings (aperture, shutter, ISO), as well as useful things like the camera serial number, camera model, lens, etc. We also store the serial number of the SD/CF card a file was imported from.
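
For illustration, here is a rough sketch of that extraction using the third-party exifread package. The .THM convention matches the JPEG thumbnail Canon HDSLRs write next to each .MOV, but the specific tag names below are examples that vary by camera model; this is not dmedia's actual importer code:

```python
import exifread
from pathlib import Path

def thm_for(mov_path):
    """MVI_1234.MOV -> MVI_1234.THM (the JPEG thumbnail carrying EXIF)."""
    return Path(mov_path).with_suffix('.THM')

def extract_metadata(mov_path):
    """Pull a few useful EXIF fields from the clip's thumbnail file."""
    with open(thm_for(mov_path), 'rb') as fp:
        tags = exifread.process_file(fp, details=False)
    wanted = {
        'aperture': 'EXIF FNumber',
        'shutter': 'EXIF ExposureTime',
        'iso': 'EXIF ISOSpeedRatings',
        'camera_model': 'Image Model',
    }
    # Only keep the tags this camera actually wrote.
    return {key: str(tags[name]) for key, name in wanted.items()
            if name in tags}
```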

Distributed storage

Metrics: Person hours, Risk, Cost of collaboration

dmedia tracks each location where a file is stored. For example, a file might be on the hard drive in your laptop, on a removable drive, and on a cloud storage service. dmedia also tracks when files are used, so it can estimate which files you are likely to need.

dmedia will keep your computer's hard drive(s) as full of media files as possible, so that there is a high chance of having the files you actually need already there. As space is needed, dmedia will swap files between other locations so that your computer contains the files you're currently working on. And dmedia always ensures multiple known good copies of your files exist.

HDSLR storytelling will gobble up hard drive space. Soon you will have more media than will fit on even a burly workstation, so it must be distributed across other computers, NAS, the cloud, etc. Yet dmedia locally stores the thumbnails and metadata for your entire library, so you can browse through files as if they were local. And if you need a particular file, dmedia will automatically retrieve it from wherever it can.
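
To make the bookkeeping concrete, here is a simplified, hypothetical per-file record and a couple of helpers. dmedia's real schema differs; the point is that knowing every location (and when each copy was last verified) is what makes safe swapping possible:

```python
# A simplified, hypothetical record of where one file lives.
# Timestamps are Unix time; the id stands in for a content hash.
file_record = {
    'id': 'a3f9c2e7',
    'bytes': 4294967296,
    'stored': {                       # every location holding a copy
        'laptop-internal': {'copies': 1, 'verified': 1308345600},
        'usb-drive-A':     {'copies': 1, 'verified': 1308172800},
        'cloud-bucket':    {'copies': 1, 'verified': 1308000000},
    },
    'atime': 1308350000,              # last used, for "likely needed" ranking
}

def total_copies(record):
    """Known good copies across all locations."""
    return sum(loc['copies'] for loc in record['stored'].values())

def can_reclaim_local(record, local='laptop-internal', min_copies=3):
    """The local copy may be swapped out to free space only if enough
    copies would still exist elsewhere."""
    local_copies = record['stored'].get(local, {}).get('copies', 0)
    return total_copies(record) - local_copies >= min_copies
```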

Lazy delete

Metrics: Risk

The original Wastebasket was designed to give users the ability to undo a file delete, and it certainly provides this functionality. However, in UX terms the Trash does not effectively provide an undo, and it's easy to understand why. Most of the time the user did mean to delete the file, so most of the time emptying the trash is the correct action. People respond to patterns, so it's completely natural and expected that they chunk the two steps (delete, then empty the trash) into a single gesture.

So in UX terms, the Trash does not provide delete with an undo option. Instead, it provides a one-step permanent delete with a false sense of security. Very dangerous for fast paced professional storytelling.

The solution is to remove the option to empty the trash. Instead, dmedia will provide a window of time during which the delete can be undone, say a week (this will be user configurable). After the window has expired, dmedia can permanently delete these files, but will only do so once the space is actually needed.

The dmedia Trash delivers the right UX: a one-step delete with a guaranteed undo window.
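
A minimal sketch of that policy, with a hypothetical record layout and a hard-coded one-week window standing in for the user-configurable setting:

```python
import time

UNDO_WINDOW = 7 * 24 * 60 * 60  # say, one week (would be user configurable)

def delete(record):
    """One-step delete: just record when the user asked for it."""
    record['deleted_at'] = time.time()

def undo_delete(record):
    """Undo is always available during the window."""
    record.pop('deleted_at', None)

def may_purge(record, need_space):
    """Permanently remove only after the undo window has expired,
    and even then only when the space is actually needed."""
    expired = ('deleted_at' in record and
               time.time() - record['deleted_at'] > UNDO_WINDOW)
    return expired and need_space
```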

Editing automation

As we are designing only for storytelling with specific equipment, there is some low-hanging fruit in terms of automating some mundane editing steps.

One thing to our advantage is that dmedia tracks each batch import, which is useful because if you insert, say, 3 cards at once, those cards probably contain several takes, with the cards representing camera A, camera B, and an audio recorder.

Audio sync

Metrics: Person hours

In our equipment recipe, you have in-camera audio for automatic sync. And you'll record your high-quality audio on a separate audio recorder like a Zoom H4n.

As you will likely import both the card from the camera and the card from the recorder in the same batch, this narrows things down nicely for automatic audio sync, done in the background after the import, even if the editor isn't open.
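
We haven't specified the exact sync algorithm here, but one common approach is to cross-correlate the in-camera scratch audio against the recorder's track. A minimal numpy sketch of that idea (a real implementation would use FFT-based correlation on downsampled envelopes for speed):

```python
import numpy as np

def find_offset(camera_audio, recorder_audio, rate=48000):
    """Return the offset (in seconds) at which camera_audio best aligns
    within recorder_audio. Both are 1-D float arrays at the same sample
    rate, with recorder_audio at least as long as camera_audio."""
    cam = camera_audio - camera_audio.mean()
    rec = recorder_audio - recorder_audio.mean()
    corr = np.correlate(rec, cam, mode='valid')  # slide cam across rec
    return int(np.argmax(corr)) / rate
```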

Multicam

Metrics: Person hours

Terminology note: HDSLR video is pretty much exclusively shot in what is stylistically a single-camera setup. This is the traditional cinematic way of shooting, and how most modern TV shows are shot. However, rather than using one camera and shooting each angle separately, HDSLR productions will often shoot multiple angles simultaneously when possible. As HDSLR cameras are cheap (and have no film cost), this tends to help profitability because you can get more out of the cast and crew's time. Of course, this isn't always possible: sometimes the lighting doesn't work out, or you might want to shoot from a position that's visible in another angle (say, where a person's head is).

Even when multicam isn't possible for a particular shot, you can still shoot footage for, say, a making-of documentary, or to give fans a view into your day-to-day work while you're still in production.

This is much like the audio sync above, except you need to align multiple cameras and possibly multiple audio recorders all together in time. Novacut will let you manually assemble multicam shots (or fix sync if auto-sync didn't work), but the goal is to make this mundane step reliable and automatic whenever possible.
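
Once each clip has an offset against a chosen reference track (found, say, by the audio sync sketch above), lining everything up on a common timeline is simple bookkeeping. A tiny sketch with hypothetical clip names and offsets:

```python
def build_multicam_timeline(clips):
    """`clips` maps a clip name to its offset (seconds) relative to the
    reference track. Shift everything so the earliest clip starts at 0."""
    earliest = min(clips.values())
    return {name: offset - earliest for name, offset in clips.items()}

# Example: camera B started ~1.8s before camera A; the recorder started later.
timeline = build_multicam_timeline({
    'camera-A': 0.0,      # the reference
    'camera-B': -1.84,
    'zoom-h4n': 3.21,
})
```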

Quick on-set edits

Metrics: Person hours

Even a small production can easily have a cast and crew of 20 people. This means as much value as possible must be extracted from production days. If the results aren't usable, that's 20 people's time wasted.

If you have rapid feedback from quick on-set edits, then when needed you can do additional takes right then and there, while the scene is still set up and the cast is still in makeup and wardrobe. There are countless issues (like focus being slightly off) that you won't catch till you can see the footage. If problems aren't caught till too late, there is a high cost to getting the cast and crew together another day to re-shoot.

But more than just reviewing individual clips for technical issues, if rough edits can be delivered quickly, then everyone gets clear feedback on whether they are successfully delivering the story they envisioned. And if needed, they can make adjustments while the cost is low.

And there is one place where a rough on-set edit can be fully automated: the two-camera over-the-shoulder dialog shot (the now-canceled Fox show Better Off Ted has some great examples of this). With a bit of scene setup to associate each camera with the correct actor's audio channel, a rough edit can be fully automated by detecting when the speaking actor changes from one to the other.

The above wouldn't be a good edit, and our goal isn't to replace human editors with some sort of freaky robotic editor. But for quickly providing feedback on whether a take is usable, it might prove quite useful.
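
To make the idea concrete, here is a purely illustrative sketch: window by window, pick whichever actor's channel is louder and cut to the camera associated with that actor. The window size, channel layout, and RMS heuristic are all assumptions, not a committed design:

```python
import numpy as np

def rough_dialog_cut(actor_a, actor_b, rate=48000, window=0.5):
    """actor_a / actor_b: 1-D float arrays, one audio channel per actor.
    Returns a list of (start_sec, end_sec, 'A' or 'B') cuts."""
    step = int(rate * window)
    n = min(len(actor_a), len(actor_b))
    cuts = []
    for i in range(0, n - step + 1, step):
        a = np.sqrt(np.mean(actor_a[i:i + step] ** 2))  # RMS level, actor A
        b = np.sqrt(np.mean(actor_b[i:i + step] ** 2))  # RMS level, actor B
        who = 'A' if a >= b else 'B'
        start, end = i / rate, (i + step) / rate
        if cuts and cuts[-1][2] == who:
            cuts[-1] = (cuts[-1][0], end, who)   # extend the current cut
        else:
            cuts.append((start, end, who))        # cut to the other camera
    return cuts
```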

Just an idea... we'd love feedback on this.

Color correction

Metrics: Person hours

All in all, color correction is not a place where we anticipate being able to do much better than the industry standard (although we aren't far into research here). So UX wise, we'll take a mix of the best workflow ideas currently used.

But there is one interesting automation opportunity: as dmedia extracts EXIF metadata from HDSLR files, we know the lens being used and the white balance settings, which means we have the data needed to at least color-normalize across different SLR lenses.

This is useful because not all productions need color correction, but if you're using several different lenses (say a 50mm f/1.2L and a 100mm f/2.8L), you'll need to at least color-normalize across these lenses, as SLR lenses are not color matched. And even if you are color correcting, starting from a color-normalized point could speed up the work.
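
As a sketch of what "starting from a color-normalized point" might look like: apply simple per-channel gains keyed off the EXIF lens model. The gain values and lens strings below are made-up placeholders; real values would come from profiling each lens against a reference chart:

```python
import numpy as np

# Hypothetical per-lens gains (R, G, B); real values would be measured.
LENS_GAINS = {
    'EF50mm f/1.2L USM': (1.00, 1.00, 1.00),   # reference lens
    'EF100mm f/2.8L':    (1.03, 1.00, 0.97),   # hypothetical warm cast
}

def normalize_frame(frame_rgb, lens_model):
    """frame_rgb: float array of shape (h, w, 3) with values in [0, 1].
    Returns the frame scaled toward the reference lens's color response."""
    gains = np.array(LENS_GAINS.get(lens_model, (1.0, 1.0, 1.0)))
    return np.clip(frame_rgb * gains, 0.0, 1.0)
```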

Just an idea... we'd love feedback on this.

Storytelling

Because it's all about storytelling.

In a recent blog post about FCPX, Josh Mellicker recounted a very interesting conversation with Randy Ubillos:

  • I remember one time, probably ten years ago, we were riding in the back seat of a car after a trade show and I told Randy that I envisioned Final Cut Pro moving towards more pre-production features, like scriptwriting and timeline storyboarding, where FCP would print out shotlists and a shooting script, and then after shooting, the actual takes would drop in and replace the storyboard placeholders.

Although we haven't talked about it much yet, the plan for Novacut has always been to build collaborative tools for pre-production, and to use the pre-production planning to drive production and post-production workflow. As our goal is to make artists as profitable as possible, we need to be looking at the entire pre-production to production to post-production pipeline. Of course, we have to take baby steps and will start with post-production (the editor), but we need to have a clear plan so that the editor can be designed to fit well in the total pipeline.

To be clear, Novacut won't be some overloaded hyper-application. These will be (in effect) separate applications. But the important thing is that all the applications save to the same database, so that one workflow can draw on the structure and information created by another workflow (when it makes sense). We feel there is a lot of low-hanging fruit here that, interestingly, often has nothing to do with video itself.

Another area where there is low hanging fruit is in categorizing and rating a large amount of less structured footage. Documentaries certainly fall in this category, plus there are decidedly modern forms of storytelling that have started to push the limit on this front. Four Eyed Monsters was a pioneer here, and Google's Life In A Day is a recent example that clearly shows the need for this categorization and rating process to be collaborative.

Some features in this storytelling section are at fairly early stages in terms of UX and UI work... deliberately so because we need to get feedback from more artists.

Script writing

We'll have a UX design specifically for the script writer, but for now the goal is to think about how it will fit in the overall pipeline and its integration points with the editor.

Story boarding

Logistics

Production day workflow

Keep/reject

Rating

Tagging

Metrics: Person hours, Cost of collaboration

We have a very flexible tagging system. Tags can apply to a media file, or to nodes in the editing graph. Tags themselves have easily extensible metadata. Initially we're going to support two types of extensions that allow you to make the tag apply to something more specific than the entire media file:

  • Ranges - you can apply a tag to a slice of a video or audio file (say, from 1:32 to 2:05 in a video)
  • Regions - you can apply a tag to a specific rectangular region in a video or photo (say to tag a specific person)

You can use ranges and regions together in a video. And this is just the beginning... we can extend this with more specificity options as we get feedback from artists and clarify use cases.
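
A few hypothetical tag records (not Novacut's actual schema) showing how a tag can target a whole file, a time range, or a range plus a region:

```python
# Tag applying to an entire media file.
whole_file_tag = {
    'tag': 'broll',
    'file': 'MVI_1234',
}

# Range: a slice of the file, in seconds (1:32 to 2:05).
range_tag = {
    'tag': 'best take',
    'file': 'MVI_1234',
    'range': {'start': 92.0, 'stop': 125.0},
}

# Range plus region: tag a specific person within a rectangular area,
# expressed as fractions of the frame. "Alice" is a hypothetical name.
region_and_range_tag = {
    'tag': 'actor: Alice',
    'file': 'MVI_1234',
    'range': {'start': 92.0, 'stop': 125.0},
    'region': {'x': 0.62, 'y': 0.10, 'w': 0.25, 'h': 0.40},
}
```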

The tagging system was designed based on input from Christie Strong in particular (thanks!). For the record this was fully designed before Apple previewed FCPX.

Slice and sequence

Metrics: Person hours

Relative positioning

Metrics: Person hours, Cost of collaboration

Chunking

Metrics: Person hours, Cost of collaboration

Doodles

Metrics: Person hours, Cost of collaboration

Annotations

Metrics: Person hours, Cost of collaboration

Snapshots

Metrics: Risk, Cost of collaboration

Distributed editing

Real time collaborative editing

Metrics: Person hours, Cost of collaboration

Branch and merge

Metrics: Person hours, Cost of collaboration

Rendering

Metrics: Person hours, Cost of collaboration

Note on free and freedom

The fact that Novacut is free (as in costs nothing) offers artists little benefit itself. Free can at most increase their profitability by the cost of a video editor, which isn't much. Free is certainly nice, but we can't use free as an excuse for not having something measurably superior by the metrics we care about.

However, the fact that Novacut is free software (as in open source) is profoundly important. Artists need the freedom to shape their creative tools however they want, directly and without permission, otherwise their software is limiting their creativity. Yet as important as this freedom is, we likewise can't use freedom as an excuse for not having something measurably superior by the metrics we care about.

Feedback

Feedback from you
