Given the technological nature of today’s social landscape, it should come as no surprise that an unprecedented amount of digital video is being captured and broadcast for the world to see. It seems everyone these days is either an aspiring filmmaker, YouTuber or video blogger. Thankfully, the instruments required to compile footage are more accessible than ever. But as more people look to today’s editing programs to refine and edit their work, a glaring lack of understanding of the technical fundamentals of editing has emerged in a manner that is disconcerting to pretentious connoisseurs like myself.
Having noticed this unfortunate trend among both casual editors and aspiring professionals alike, I wish to elucidate some of the more common misconceptions plaguing today’s user base. What follows is an introductory examination of key points I believe need to be addressed. These are three things every video editor should know.
Three tips every video editor should know
Look closely at your monitor. A little closer…closer… that’s it! Save for individuals wielding relatively small panels with exceedingly high native screen resolutions, most will notice that the image being displayed is not a singular composition but rather the culmination of thousands of individual lights referred to as pixels. To be clear, the term ‘lights’ is somewhat misleading, as in most cases a monitor is backlit by a single LED panel. Nevertheless, we will regard them as such in the spirit of simplicity.
Pixels are the essence of digital imagery, both as it relates to your monitor and the information it interprets. Each on-screen pixel is designated a color by your computer, which in turn takes its cue from information stored on your hard drive. What information, you say? As it pertains to video editing, the data in question will typically be raw video footage.
Depending on the camera you are using, each individual frame within your video could potentially be made up of millions of individual pixels. Each pixel represents a piece of information, and that information is subsequently stored on your computer’s hard drive. As the years progress, the number of pixels per image has dramatically increased. More pixels equal more information, hence larger files and less available hard drive space. So why would anyone prefer to use larger files? It comes down to this: more pixels produce more detail. Let’s check out the examples below.
As you have likely already noticed, there is a distinct difference in visual complexity when comparing example A against examples B and C. Example A is made up of 100 squares: ten on the vertical axis and ten on the horizontal. Communicated in pixels, example A would have a hypothetical resolution of 10×10. Compare that to example C, which by contrast encompasses a total of 900 pixels: a resolution of 30×30. With more pixels comes a greater capacity for detail, as shown by my three attempts to depict a tree within the parameters of each substructure. At the very least, these examples should give you a rudimentary understanding of how resolution works. Now let us consider some commonly used video resolutions.
Assuming a relatively new camera is at your disposal, you will more than likely be using one of the following resolutions:
- 1280 x 720 (720p)
- 1920 x 1080 (1080p/Full HD)
- 3840 x 2160 (2160p/4K/Ultra High Definition/UHD)
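To put these resolutions in perspective, here is a minimal sketch of the pixel arithmetic discussed above. The 8-bit RGB and 30fps figures are illustrative assumptions; real cameras compress footage heavily, so actual files are far smaller, but the relative scale holds.

```python
# Per-frame pixel counts and rough uncompressed data rates for the
# resolutions listed above, assuming 8-bit RGB (3 bytes per pixel)
# at 30 frames per second. Illustrative only: real codecs compress
# this data dramatically.
RESOLUTIONS = {
    "720p": (1280, 720),
    "1080p": (1920, 1080),
    "4K/UHD": (3840, 2160),
}

BYTES_PER_PIXEL = 3  # 8-bit RGB, no alpha
FPS = 30

for name, (width, height) in RESOLUTIONS.items():
    pixels = width * height
    mb_per_second = pixels * BYTES_PER_PIXEL * FPS / 1_000_000
    print(f"{name}: {pixels:,} pixels/frame, ~{mb_per_second:.0f} MB/s uncompressed")
```

Note that a 4K frame holds exactly four times the pixels of a 1080p frame, which is why the jump in storage and processing demands is so steep.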
It should go without saying that modern-day resolutions consist of significantly more pixels than our provisional examples. But what resolution is right for you? Contrary to what has been alluded to thus far, you would be wrong to assume the highest available resolution is the most desirable. In truth, far more must be taken into consideration.
Realistically, the resolution you decide to use should align with the capabilities of your computer hardware. Even in 2019, 4K footage is a difficult proposition given the limitations of today’s CPU, memory and GPU components. Even the costliest investments in high-end consumer-grade hardware can struggle to keep pace with the amount of information at play when working with 4K footage. Operating beyond the means of your hardware can prove cumbersome in any number of ways. For instance, if your computer lacks the memory needed to effectively manage high-definition video, you could be subject to frequent crashes as your project nears completion. Insufficient hardware can also manifest as pervasive slowdown. This can be somewhat mitigated by restricting the quality of the preview window within your software’s interface. However, this is not ideal, as accurately assessing the visual state of your project is crucial to the editing process.
Another way to circumvent a problematic editing experience is to shoot at a lower resolution than your camera’s maximum, or to manually downscale footage that has already been captured and is established to be well beyond your hardware’s capabilities. For instance, if your camera is capable of shooting 4K footage, consider shooting at 1080p instead by adjusting the camera’s internal settings. Likewise, if you have already shot in 4K and later determine the footage is unsuitable for editing, you can downscale the video from 4K to 1080p to accommodate the limitations of your computer hardware. Simply import the raw files into your editor and export them using more advisable video specifications. Given the abundance of available hardware, not to mention the seemingly endless variation among compatible components, there exists no comprehensive formula for measuring how effectively a given computer will handle high-resolution video. To accurately gauge the competency of your hardware, research the specifications listed for your computer’s model number and/or the parts within it.
Now that we have examined the core principles of screen and image resolution, we will next address the topic of aspect ratios. Thankfully, the two concepts work in conjunction with one another. Whereas resolution is the number of pixels used in a picture, aspect ratio indicates the width-to-height proportion at which those pixels are arranged. Put simply, aspect ratio is the shape of your image.
All common image resolutions have a corresponding aspect ratio. For instance, a standard 1920×1080 image has an aspect ratio of 16:9. This simply means that for every 16 pixels of horizontal resolution there exist 9 pixels of vertical resolution. That notion can easily be proven using simple arithmetic: if we divide our horizontal span of 1920 pixels by 16 and our vertical span of 1080 by 9, both divisions yield the same quotient of 120. The same is also true of 720p and 4K, both of which inherently constitute a 16:9 image. Below are a few examples of commonly used aspect ratios.
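The arithmetic above can be sketched in a few lines. Dividing both dimensions by their greatest common divisor recovers the aspect ratio; the `aspect_ratio` helper here is illustrative, not part of any editing software.

```python
from math import gcd

def aspect_ratio(width, height):
    """Reduce a pixel resolution to its simplest width:height ratio."""
    d = gcd(width, height)
    return width // d, height // d

# All three standard HD resolutions reduce to the same 16:9 shape.
print(aspect_ratio(1280, 720))    # (16, 9)
print(aspect_ratio(1920, 1080))   # (16, 9)
print(aspect_ratio(3840, 2160))   # (16, 9)

# And the quotients described above really do match:
print(1920 // 16, 1080 // 9)      # 120 120
```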
Unless deliberately specified prior to capture, most if not all consumer-grade cameras operate at a 16:9 aspect ratio. Despite that fact, a deeper familiarity with other existing aspect ratios can benefit you in several ways. Perhaps you wish to give your video a nostalgic look, in which case you would convert to the old-style 4:3 aspect ratio. If you are seeking something more closely akin to traditional cinema, aspect ratios nearing 21:9 are the way to go. Just be aware that any subsequent change in aspect ratio will come at the cost of image cropping.
But a cropped image isn’t necessarily a bad thing for those with highbrow aspirations. The culture of cinema has long embraced wider aspect ratios, so anyone working within that tradition should strongly consider doing the same. Furthermore, cropping your 16:9 picture conveniently grants you additional freedom in post-production. By default, the unobstructed portion of a cropped image is centered. For 16:9 images adopting wider aspect ratios, the trimmed periphery allows positional adjustments to be made along the vertical axis. Provided you don’t push the frame beyond the boundaries of the original image, feel free to reposition it to your liking.
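A quick sketch of that crop arithmetic, assuming a 1080p source narrowed to the 21:9 ratio mentioned above: the full width is kept, the height is trimmed, and whatever height is removed becomes the slack available for vertical repositioning.

```python
def crop_height(width, ratio_w, ratio_h):
    """Height (in pixels) of a full-width crop at the given aspect ratio."""
    return round(width * ratio_h / ratio_w)

src_w, src_h = 1920, 1080          # a standard 16:9 Full HD frame

new_h = crop_height(src_w, 21, 9)  # height of the 21:9 crop window
vertical_slack = src_h - new_h     # pixels available to slide the crop up/down

print(f"21:9 crop of 1920x1080 -> 1920x{new_h}")
print(f"{vertical_slack}px of vertical repositioning room")
```

As the numbers show, a 21:9 crop of a 1080p frame leaves a few hundred pixels of headroom, which is exactly the post-production freedom described above.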
Unlike still photography, which generates one picture at a time, video produces many images and displays them in quick succession in order to create the illusion of motion. Depending on the circumstances, the number of pictures captured and displayed each second can vary considerably. This is referred to as frame rate.
Frame rate, measured in frames per second (fps), indicates the number of pictures shown every second. Nearly all cell phone and enthusiast-level cameras will default to 29.97fps, often conversationally rounded to 30fps. 30fps has become the de facto standard for social media uploaders, YouTube professionals and casual videographers. But that wasn’t always the case.
In the infancy of motion pictures, well prior to the introduction of video, cameras settled on a steady 24fps. With the arrival of synchronized sound, 24fps was determined to be the minimum number of pictures required to effectively simulate motion, and it subsequently became the standard for all production-level and consumer-grade film cameras. Strangely enough, not much has changed since the adoption of digital technology. As a matter of fact, nearly all movies released to this day operate at 24fps, the result of either senseless attachment to tradition or preferential aesthetics.
Having established that 24fps is the quintessential frame rate for feature-length films, it’s highly recommended you set your camera to shoot at 23.976fps (24p) whenever attempting to produce a movie within the guidelines set forth by industry professionals. But not every camera has a 24p function. No need to be concerned, as most editing software provides features that enable you to easily convert your 29.97fps footage to 23.976fps. While this is not ideal, a frame rate of 23.976 is an absolute must for those seeking the approval of veteran filmmakers and movie enthusiasts.
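Incidentally, the odd-looking 29.97 and 23.976 figures are shorthand for exact NTSC-derived rates defined as fractions over 1001, a legacy of analog color television. A small sketch makes the exact values visible:

```python
from fractions import Fraction

# The exact NTSC-derived frame rates behind the conversational figures.
ntsc_30 = Fraction(30000, 1001)   # "29.97fps"
ntsc_24 = Fraction(24000, 1001)   # "23.976fps", i.e. 24p

print(float(ntsc_30))   # ~29.97003
print(float(ntsc_24))   # ~23.97602
```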
Although you will typically be restricted to 24 and 30fps, certain situations require you to operate at unconventional frame rates in order to achieve specific results. One instance is the intended use of slow motion. Often used for stylistic purposes, slow motion is a technique that, if not approached correctly, can detract from a movie’s overall sense of competency. The very nature of slow motion may lead you to instinctively presume that any observable decrease in speed is the consequence of a lower frame rate. In actuality, the number of frames per second should remain consistent with all preceding footage. With that in mind, do not attempt to slow down footage without the necessary abundance of frames to maintain consistency. For example, a fifty percent decrease in play speed should at the very least be captured at twice the base frame rate of 23.976fps, or roughly 48fps.
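The slow-motion arithmetic above generalizes neatly: to play back at a given fraction of real time while keeping the delivery frame rate constant, capture must run proportionally faster. This `required_capture_fps` helper is a hypothetical illustration of that rule, not a feature of any particular camera or editor.

```python
def required_capture_fps(delivery_fps, play_speed):
    """Capture rate needed so slowed footage still fills every delivery frame.

    play_speed of 0.5 means the footage will be played at half speed.
    """
    return delivery_fps / play_speed

# Half-speed playback at 24p needs roughly double the capture rate.
print(required_capture_fps(23.976, 0.5))   # ~47.952, i.e. shoot at ~48fps

# Quarter-speed playback at 29.97fps demands around 120fps capture.
print(required_capture_fps(29.97, 0.25))   # ~119.88
```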