Tuesday, August 21, 2007

DVD Pick of the Week - Premonition

            Premonition is a 2007 drama film directed by Mennan Yapo and starring Sandra Bullock and Julian McMahon. The film was shot at locations throughout Louisiana.



IMDB User Rating: 5.6/10 (8,953 votes)

Multiple-camera setup



The multiple-camera setup (also known as the multiple-camera mode of production) is a method of shooting films and television programs. Several cameras, either film or video, are employed on the set and simultaneously record (or broadcast) a scene. It is often contrasted with the single-camera setup, which uses just one camera on the set.
Generally, the two outer cameras shoot close shots or crosses of the two most active characters on the set at any given time, while the central camera or cameras shoot a wider master shot to capture the overall action and establish the geography of the room. In this way, multiple shots are obtained in a single take without having to start and stop the action. This is more efficient for programs that are to be shown a short time after being shot, as it reduces the time spent editing the footage. It is also a virtual necessity for regular, high-output shows like daily soap operas. Apart from saving editing time, scenes may be shot far more quickly, as there is no need to re-light and set up alternate camera angles in order to shoot the scene again from a different angle. It also reduces the complexity of tracking the continuity issues that crop up when a scene is reshot from different angles, and it is vital for live television.

While shooting, the director and assistant director create a line cut by instructing the technical director to switch the feed to various cameras. In the case of sitcoms with studio audiences, this line cut is typically displayed to them on studio monitors. The line cut may later be refined in editing, as the picture from all cameras is recorded, both separately and as a combined reference display called the quad split. The camera currently being recorded to the line cut is indicated by a tally light on the camera as a reference both for the actors and the camera operators. A recent addition to this technique, borrowed from sports broadcasting, is called the "iso" recording (for "isolated" camera), where each camera's signal is recorded independently, in addition to feeding the switcher for the line cut.

History and use

Although it is often claimed that the multiple-camera setup was pioneered for television by Desi Arnaz and cinematographer Karl Freund on I Love Lucy, other television shows had already used it, including another comedy on CBS, The Amos 'n Andy Show, which was filmed at the Hal Roach Studios and was on the air four months earlier. The technique was developed for television in 1950 by Hollywood short-subject veteran Jerry Fairbanks, assisted by producer-director Frank Telford.[1] Desilu's innovation was to use a multiple-camera setup before a live studio audience.

The multiple-camera mode of production gives the director less control over each shot, but is faster and less expensive than a single-camera setup. In television, multiple-camera is commonly used for sports programs, soap operas, talk shows, game shows, and some sitcoms. However, many sitcoms from the 1950s to the 1970s were actually shot using the single camera mode of production, including The Adventures of Ozzie and Harriet, Leave It to Beaver, The Andy Griffith Show, The Addams Family, The Munsters, Get Smart, Bewitched, I Dream of Jeannie, Gilligan's Island, Hogan's Heroes and The Brady Bunch. These did not have a live studio audience and were shot using the single-camera technique, as are more recent programs such as The Larry Sanders Show (1992–1998), Malcolm in the Middle (2000–2006), Scrubs (2001–2008), and My Name Is Earl (2005–).

Television prime-time dramas are usually shot using a single-camera setup. Most films also use the single-camera setup. In recent decades, larger Hollywood films have begun to use more than one camera on set, usually with two cameras simultaneously filming the same setup; however, this is not a true multicamera setup in the television sense. Sometimes feature films will run multiple cameras, perhaps four or five, for large, expensive, and difficult-to-repeat special effects shots such as large explosions. Again, this is not a true multicamera setup in the television sense, as the resultant footage will not always be arranged sequentially in editing, and multiple shots of the same explosion may be repeated in the final film, either for artistic effect or because the different shots, taken from different angles, can appear to the audience to be different explosions.

The choice of single-camera or multiple-camera setups is made separately from the choice of film or video. That is, either setup can be shot on either film or video.

Wednesday, August 15, 2007

Arriflex D-20


The Arriflex D-20 is a film-style digital motion picture camera made by Arri, first introduced in November 2005. The camera's main attributes are its modularity and the size and type of its sensor.

The D-20 uses a single CMOS sensor the size of a Super 35 mm film gate aperture. Effectively, when used with current 35 mm PL-mount motion picture lenses, the D-20 yields the same field of view and depth of field as traditional 35 mm film motion picture cameras.

The D-20 captures images in two main modes.
In Data mode (4:3 aspect ratio) the sensor has 3018x2200 active pixels generating RAW Bayer-data at 12-bits. The RAW data needs to be processed outboard to generate a full color image. A delivery aspect ratio for theatrical release, commonly 1.85:1, is achieved by cropping from the original image, similar to the cropping necessary when shooting 35mm film. In Data mode the sensor size also allows for the use of anamorphic lenses, producing the 2.35:1 widescreen aspect ratio.
In HD Video mode (16:9 aspect ratio) the sensor uses 2880x1620 active pixels. This output is 1920x1080 pixels in either YUV 4:2:2 10 bit (via single link HD-SDI) or RGB 4:4:4 10 bit (via dual link HD-SDI).
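As a rough sketch in Python, using only the pixel counts quoted above, one can check the aspect ratios of the two modes and how many sensor rows survive a 1.85:1 theatrical crop of the Data-mode frame (the numbers here are simple arithmetic on the published figures, not Arri specifications):

```python
# Rough arithmetic on the D-20's two sensor modes, using the figures above.

def aspect(width, height):
    """Aspect ratio of a width x height pixel area."""
    return width / height

# Data mode: 3018 x 2200 active pixels (roughly 4:3)
data_ar = aspect(3018, 2200)   # ~1.37:1

# HD Video mode: 2880 x 1620 active pixels (exactly 16:9)
hd_ar = aspect(2880, 1620)     # 1.777..., i.e. 16:9

# Cropping the Data-mode frame to a 1.85:1 theatrical ratio keeps only
# part of the 2200 rows, similar to matting a 35 mm film frame:
crop_rows = 3018 / 1.85        # ~1631 of the 2200 rows are kept

print(round(data_ar, 3), round(hd_ar, 3), round(crop_rows))
```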

The D-20 has a mechanical shutter, variable from 11.2° to 180° and an electronic shutter variable from 66° to 360° at 24fps. The camera is capable of running at speeds from 1 to 60fps, though this is currently limited. Numerous components of the camera were borrowed from Arri film camera models (most notably the 435ES), assuring full compatibility with most of the film camera accessories.

Film Shot with D-20:



Advantages

Like the Dalsa Origin, the D-20 features a detachable optical viewfinder. Optical and electronic viewing systems have different advantages.

Optical viewfinder advantages
Ability to use the camera to set framing without a connection to a power supply
Extremely high viewfinder resolution

Electronic viewfinder advantages
Ability to monitor the final image after capture; optical viewfinders monitor before capture, and therefore show not what the camera records but what the lens sees.
Ability to display more information (such as focus charts, waveform monitors, and zebras) that helps the DP.

Both systems, electronic and optical, can offer
A fractionally greater field of view than the sensor, allowing for more accurate and predictive framing
The ability to zoom in optically within the viewing system to check critical focus.

The two points above are only available on very good electronic viewfinder/camera systems. Additionally, a wide variety of electronic viewing options can be added to the camera, giving it many of the advantages of purely electronic viewing systems.

Like Arri film cameras, the D-20 is modularly constructed. Both the mechanical and electronic components are upgradable. This also applies to the sensor, which can be changed as advances are made.

Limitations

At present the Data output and variable speed capabilities of the camera are disabled, awaiting upgrades from Arri.

Super 35 mm

Super 35 (originally known as Superscope 235) is a motion picture film format that uses exactly the same film stock as standard 35 mm film, but puts a larger image frame on that stock by using the negative space normally reserved for the optical analog sound track.

Super 35 was revived from a similar Superscope variant known as Superscope 235, originally developed by the Tushinsky Brothers for RKO back in 1954. When cameraman Joe Dunton was preparing to shoot Dance Craze in 1982, he chose to revive the Superscope format by using a full silent-standard gate and slightly optically recentering the lens port. These two characteristics are among the central ones of the format. It was adopted by Hollywood starting with Greystoke in 1984, under the format name Super Techniscope. Later, as other camera rental houses and labs started to embrace the format, Super 35 became popular in the mid-1990s, and is now considered a ubiquitous production process, with usage on well over a thousand feature films. It is also usually the standard production format for television shows, music videos, and commercials, since none of these require a release print and thus have no reason to reserve space for an optical soundtrack. James Cameron was an early, consistent, and vocal supporter of the format, first using it for The Abyss. It also received much early publicity for making the cockpit shots in Top Gun possible, since it was otherwise impossible to fit 35 mm cameras with large anamorphic lenses into the small free space in the cockpit.

Super 35 is a production format: theatres do not receive or project Super 35 prints. Rather, movies are shot in Super 35 and then converted, either through an optical printing and matting stage or through a digital intermediate, into one of the standard formats for release prints. Because of this, productions will often also combine Super 35's width with a 3-perf negative pulldown, saving costs on "wasted" frame area and allowing camera magazines to shoot for 33% longer with the same length of film.
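The 33% figure is simple perforation arithmetic: a standard 35 mm frame advances 4 perforations of film per frame, while 3-perf advances only 3. A quick back-of-envelope check in Python:

```python
# Back-of-envelope check of the 3-perf saving: the same length of film
# yields 4/3 as many frames when each frame uses 3 perfs instead of 4.

perfs_standard = 4
perfs_super35_3perf = 3

extra_time = perfs_standard / perfs_super35_3perf - 1  # fractional gain

print(f"{extra_time:.0%} more shooting time per magazine")
```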

Read More

Low-key lighting

Low-key lighting is a style of lighting for film or television. In traditional lighting design for black and white photography, also called three-point lighting, there are a key light, a fill light, and a back light.

The key light shows the contours of an object by throwing areas into light or shadow, while the fill light provides partial illumination in the shadow areas to prevent a distracting contrast between bright and dark. For dramatic effect, one may wish the contrast to be high: to emphasize the brightness of the sun in a desert scene, to make a face look rugged, seamed, and old, or to isolate details in a mass of surrounding shadow. A variety of methods can be used to create these effects.

Demonstration




The key-to-fill ratio, measured with an instrument such as a light meter, is the ratio of the intensity of the key light to that of the fill light. Low-key lighting actually has a much higher ratio (e.g. 8:1) than high-key lighting, which can approach 1:1.
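Since photographers usually think in stops (each stop is a doubling of light), the ratio can be expressed as a base-2 logarithm. A minimal sketch:

```python
import math

def ratio_in_stops(key_intensity, fill_intensity):
    """Convert a key-to-fill intensity ratio into photographic stops.
    Each stop represents a doubling of light intensity."""
    return math.log2(key_intensity / fill_intensity)

low_key = ratio_in_stops(8, 1)    # an 8:1 ratio -> 3 stops between key and fill
high_key = ratio_in_stops(1, 1)   # a 1:1 ratio -> 0 stops (flat, even lighting)

print(low_key, high_key)
```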

It is perfectly possible to use fill light in these large areas of shadow, reducing the contrast. Generally the term 'low key' refers to cases in which no such care is taken.

Low key is also used in cinematography to refer to any scene with a high contrast ratio, especially if there is more dark area than light. Compare with high-key lighting.

Mood lighting is a term used to describe the use of light to illuminate an object or background in a deliberate manner to evoke a certain mood or emotion. This highly skilled lighting technique is very subtle but can nevertheless achieve highly effective outcomes. An example is an evil character deliberately illuminated from beneath the chin, giving them an eerie, demonic appearance.

Ambient lighting refers to the overall illumination of an environment without the addition of lighting for photography. This includes practical lamps, overhead fluorescent, sunlight or any previously existing light.

DVD Pick of the Week - Freedom Writers

Freedom Writers is a 2007 American film starring Hilary Swank, Scott Glenn, Imelda Staunton and Patrick Dempsey. It is based on the book, The Freedom Writers Diary, by teacher Erin Gruwell. The title is a play on the term "Freedom Riders", the black and white civil rights activists who tested the U.S. Supreme Court decision ordering the desegregation of interstate buses in 1961.

Trailer:



IMDB User Rating: 7.6/10

Saturday, August 11, 2007

New Site on Film Making

www.collativelearning.com

Check this site out.... very informative and useful for amateur film makers.....



Wednesday, August 8, 2007

Non-linear editing

Non-linear editing for film and television post-production is a modern editing method which allows access to any frame in a video clip with the same ease as any other. This method is similar in concept to the "cut and glue" technique used in film editing from the beginning. However, when working with film, editing is a destructive process, as the actual film negative must be cut.

Non-linear, non-destructive methods began to appear with the introduction of digital video technology. Video and audio data are first digitized to hard disks or other digital storage devices. The data is either recorded directly to the storage device or is imported from another source. Once imported they can be edited on a computer using any of a wide range of software. For a comprehensive list of available software, see List of video editing software, whereas Comparison of video editing software gives more detail of features and functionality.

In non-linear editing, the original source files are not lost or modified during editing. Professional editing software records the decisions of the editor in an edit decision list (EDL) which can be interchanged with other editing tools. Many generations and variations of the original source files can exist without needing to store many different copies, allowing for very flexible editing. It also makes it easy to change cuts and undo previous decisions simply by editing the edit decision list (without having to have the actual film data duplicated). Loss of quality is also avoided due to not having to repeatedly re-encode the data when different effects are applied. Compared to the linear method of tape-to-tape editing, non-linear editing offers the flexibility of film editing, with random access and easy project organization.
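The edit decision list idea can be sketched in a few lines of Python. This is a toy illustration of the non-destructive principle, not a real EDL file format; the clip names and structure are invented for the example:

```python
# A toy edit decision list: the source clips are never modified; the "edit"
# is just an ordered list of (clip_name, in_frame, out_frame) references.

edl = [
    ("clip_a", 120, 240),   # use frames 120-239 of clip_a
    ("clip_b",   0,  96),   # then frames 0-95 of clip_b
    ("clip_a", 600, 648),   # then back to clip_a for a reaction shot
]

def timeline_length(edl):
    """Total length of the cut, in frames, computed without ever
    touching (or duplicating) the source media."""
    return sum(out - in_ for (_, in_, out) in edl)

# Changing a cut is just editing the list; nothing is re-encoded or destroyed.
edl[1] = ("clip_b", 0, 72)   # trim the second shot by a second (24 frames)

print(timeline_length(edl))
```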


With the edit decision lists, the editor can work on low-resolution copies of the video. This makes it possible to edit both broadcast quality and high definition quality very quickly on normal PCs which do not have the power to do the full processing of the huge full-quality high-resolution data in real-time.

The costs of editing systems have dropped such that non-linear editing tools are now within the reach of home users. Some editing software can now be accessed free as web applications, some, like Cinelerra (focused on the professional market), can be downloaded free of charge, and some, like Microsoft's Windows Movie Maker or Apple Computer's iMovie, come included if you buy the appropriate operating system.

A computer for non-linear editing of video will usually have a video capture card for capturing analog video or a FireWire connection for capturing digital video from a DV camera, as well as video editing software. Modern web based editing systems can take video directly from a camera phone over a GPRS or 3G mobile connection, and editing can take place through a web browser interface, so strictly speaking a computer for video editing does not require any installed hardware or software beyond a web browser and an internet connection. Various editing tasks can then be performed on the imported video before it is exported to another medium, or MPEG encoded for transfer to a DVD.

Tuesday, August 7, 2007

Fast & Slow Cutting

Fast Cutting

Fast cutting is a film editing technique that uses several consecutive shots of brief duration (e.g. 3 seconds or less). It can be used to convey a lot of information very quickly, or to imply either energy or chaos. Fast cutting is also frequently used when shooting dialogue between two or more characters, changing the viewer's perspective either to focus on another character's reaction to the dialogue, or to bring attention to the non-verbal actions of the speaking character. One famous example of fast cutting is the murder scene in Alfred Hitchcock's Psycho (1960).

Check this link.... Very Good Demonstration....

Slow Cutting

Slow cutting is a film editing technique that uses shots of long duration. Though it depends on context, it is estimated that any shot longer than about fifteen seconds will seem rather slow to viewers from Western cultures. A famous example of slow cutting can be found in Stanley Kubrick's A Clockwork Orange (1971). In a segment that lasts three minutes and fifteen seconds and contains only three shots, the main character (Alex DeLarge) is followed as he walks the length of a futuristic record store, meets two young ladies, and brings them back to his (parents') house. Another example is Alfred Hitchcock's Rope (1948), which contains only eight cuts; each shot lasts about as long as a full 1000-foot roll of 35 mm film (about 10 minutes). And of course, the prime beef of slow cutting is Russian Ark, which consists of a single shot over two hours long.

Freeze frame shot

A freeze frame shot is created when a single frame is printed several times in succession, producing the illusion of a still photograph. Hong Kong director John Woo makes extensive use of freeze-frame shots, usually to focus on a character's facial expression or emotion at a critical moment.



Freeze frame is also a drama term for a moment during a live performance when the actors freeze at a particular, pre-planned time, to enhance a particular scene or to mark an important moment in the play or production. The image can then be further enhanced by spoken word, in which each character tells their personal thoughts about the situation, giving the audience further insight into the meaning, plot, or hidden story of the scene. This is known as thought tracking, another drama medium.

Jump Cut

A jump cut is a cut in film editing where the middle section of a continuous shot is removed, and the beginning and end of the shot are then joined together. The technique breaks continuity in time and produces a startling effect. Any moving objects in the shot will appear to jump to a new position.
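Treating a shot as a simple sequence of frames, the mechanics of a jump cut can be sketched in Python (purely illustrative; real editors work on media files, not lists):

```python
# Toy illustration of a jump cut: remove the middle section of a
# continuous shot and join the beginning directly to the end.

shot = list(range(240))   # a 10-second shot at 24 fps, frames 0..239

def jump_cut(frames, cut_from, cut_to):
    """Drop frames[cut_from:cut_to]; any moving subject appears to
    'jump' across the removed span."""
    return frames[:cut_from] + frames[cut_to:]

cut = jump_cut(shot, 48, 192)   # remove 6 seconds from the middle

print(len(cut))   # 96 frames remain; frame 47 now abuts frame 192
```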

Demonstration of Jump Cuts



In classical continuity editing, jump cuts are considered a technical flaw. Most cuts in that editing style occur between dissimilar scenes or significantly different views of the same scene to avoid the appearance of a jump; every effort is made to make cuts invisible and unobtrusive. Contemporary use of the jump cut stems from its appearance in the work of Jean-Luc Godard and other filmmakers of the French New Wave of the late 1950s and 1960s. In Godard's ground-breaking Breathless (1960), for example, he cut together shots of Jean Seberg riding in a convertible in such a way that the discontinuity between shots is emphasized: one cut joins the very end of one shot to the very beginning of the next, underlining the gap in action between the two (when Seberg picked up the mirror).

Match Cut

A match cut or raccord is a cut in film editing from one scene to another, in which the two camera shots are linked visually or thematically. It can be used to underline a connection between two separate elements, or for purely visual reasons. In a match cut, an object or action shown in the first shot is repeated in some fashion in the second shot; the objects may be the same, be similar, or have similar shapes or uses.



More

Speed Ramping

David Cox

Many ads and feature films these days use a process described by industry insiders as "speed ramping", in which onscreen characters and events suddenly speed up and slow down. It is a "look" which, for filmmakers and critics of my generation (over 35), is associated with experimental and avant-garde film, particularly the types of films made with Bolex and Arriflex 16mm cameras, which enable real-time shutter speed manipulation while the camera is running. When you film someone at 24 frames per second and then slow the frame rate down to 12 frames per second while the camera is running, two things happen: 1) the person appears to speed up (fewer frames to cover the same action means that at a constant playback rate of 24 fps the action appears faster); and 2) unless the aperture of the camera is altered to keep the exposure consistent with the frame rate, the film gets overexposed, as more light is allowed to land on each frame of the slowed-down film.
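The arithmetic behind that 24-to-12 fps example is simple enough to check: halving the frame rate doubles the apparent speed, and (at a constant shutter angle) doubles each frame's exposure time, i.e. roughly one extra stop of light. A quick sketch:

```python
import math

# Arithmetic behind the 24 fps -> 12 fps in-camera speed ramp above.
shoot_fps = 12      # frame rate dropped while the camera keeps running
playback_fps = 24   # constant projection/playback rate

# Fewer frames cover the same action, so playback looks this much faster:
speed_factor = playback_fps / shoot_fps   # 2x faster

# At the same shutter angle, each frame is exposed twice as long,
# so the footage gains about this many stops unless the aperture is closed:
extra_stops = math.log2(playback_fps / shoot_fps)

print(speed_factor, extra_stops)
```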

Now computer-based non-linear editing and post-production tools are used to manipulate the speed of the images, as well as the other spin-off effects associated with multiple-speed coverage of shots. Computers can mimic many of the attributes of traditional film, including the familiar scratching of the emulsion and various dust and light-leak effects, when the material has in fact been shot on digital video. I've lost count of the students who ask me how to make their miniDV-sourced video material look as if it had been filmed on 35 mm Panavision, with a 1.85:1 aspect ratio.

Here's a Sample


These now-commonplace digital techniques are used to connote the "look and feel" of film and have often been developed to help blur the distinction between video and film material, or computer-generated film material such as 3D computer graphics. The aim is to create a naturalistic sense that the material has been photographed in the most analogue and traditional ways possible. There can almost be said to be a fetishism of the attributes of traditional film: the details of the passage of film through a gate, sprockets, film-grain speckles, flickering image quality, and all the other attributes which have lent film its status as the domain of "true professionals". The fetishism of film is to some extent the fetishism of motion-picture-making as a profession. 'If only I could make my material look like that of the professionals, then I too might have a chance at mainstream success...' What is seldom questioned, however, are the assumptions and values which lie behind the mainstream industry: its use of budgets, its use of labour, and the crippling distribution system which not even the biggest mavericks of the (Hollywood) century, Coppola, Lucas, Spielberg, have been able to crack.

Read More

Monday, August 6, 2007

Dolly Zoom

The dolly zoom is an unsettling in-camera special effect that appears to undermine normal visual perception in film.

The effect is achieved by using the setting of a zoom lens to adjust the angle of view (often referred to as field of view) while the camera dollies (or moves) towards or away from the subject in such a way as to keep the subject the same size in the frame throughout. In its classic form, the camera is pulled away from a subject whilst the lens zooms in, or vice-versa. Thus, during the zoom, there is a continuous perspective distortion, the most directly noticeable feature being that the background appears to change size relative to the subject.
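To a first approximation, the subject's size in frame is proportional to focal length divided by distance, so keeping the subject constant means the dolly distance must track the zoom. A small sketch of that relationship (the starting figures are invented for illustration):

```python
# Dolly-zoom sketch: subject size in frame ~ focal_length / distance,
# so to hold the subject constant, distance must scale with focal length.

def required_distance(focal_length_mm, ref_focal_mm, ref_distance_m):
    """Camera-to-subject distance that keeps the subject the same
    apparent size as at the reference focal length and distance."""
    return ref_distance_m * (focal_length_mm / ref_focal_mm)

# Example: start 3 m from the subject on a 25 mm lens, then zoom in
# while pulling the camera back:
for f in (25, 50, 75, 100):
    d = required_distance(f, 25, 3.0)
    print(f"{f} mm lens -> dolly to {d:.1f} m")
```

The background's apparent size relative to the subject changes continuously as the distance grows, which is exactly the perspective distortion described above.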

As the human visual system uses both size and perspective cues to judge the relative sizes of objects, seeing a perspective change without a size change is a highly unsettling effect, and the emotional impact of this effect is greater than the description above can suggest. The visual appearance for the viewer is that either the background suddenly grows in size and detail overwhelming the foreground; or the foreground becomes immense and dominates its previous setting, depending on which way the dolly zoom is executed.

The effect was first developed by Irmin Roberts, a Paramount second-unit cameraman, and was famously used by Alfred Hitchcock in his film Vertigo.

Here's a Sample



The dolly zoom is commonly used by filmmakers to represent the sensation of vertigo, a "falling away from oneself feeling", feeling of unreality, or to suggest that a character is undergoing a realization that causes him to reassess everything he had previously believed. After Hitchcock popularized the effect (he used it again for a climactic revelation in Marnie), the technique was used by many other filmmakers, and eventually became regarded as a gimmick or cliché. This was especially true after director Steven Spielberg repopularized the effect in his highly regarded film Jaws, in a memorable shot of a dolly zoom into Police Chief Brody's (Roy Scheider) stunned reaction at the climax of a shark attack on a beach (after a suspenseful build-up). Spielberg used the technique again in E.T. the Extra-Terrestrial and Indiana Jones and the Last Crusade.

The Power of Editing

This is a treat for all the MATRIX lovers......

A freelancer has edited some of the brilliant scenes in Matrix with an excellent background score.... Do remember to turn your speakers on while watching the video, else it would be a kiss without a moustache.... enjoy.....

DVD Pick of the Week - Déjà Vu

Déjà Vu is a science fiction crime thriller directed by Tony Scott, produced by Jerry Bruckheimer, and starring Denzel Washington. The film was released on November 22, 2006.

Cast

Actor                    Role

Denzel Washington        Douglas Carlin
Paula Patton             Claire Kuchever
Adam Goldberg            Alexander Denny
Bruce Greenwood          Jack McCready
Val Kilmer               Agent Andrew Pryzwarra
Matt Craven              Larry Minuti
James Caviezel           Carroll Oerstadt


Trailer



Plot

In Algiers, New Orleans, after the explosion of a ferry transporting sailors from the USS Nimitz and their families, with 543 casualties, ATF agent Doug Carlin (Denzel Washington) is assigned to investigate the terrorist attack. Without any leads, he is informed by Sheriff Reed about the corpse of a woman found one hour before the explosion, yet burnt with the same explosive. He is invited by FBI Agent Pryzwarra (Val Kilmer) to join the surveillance team led by Jack McCready, which uses a time window, an Einstein-Rosen bridge spanning seven satellites, to look back four and a quarter days in time. He discovers the identity of the mysterious dead woman, Claire Kuchever, and decides to follow her last moments in the hope of finding the criminal. Over the course of the surveillance, Doug falls in love with Claire and tries to change destiny by saving her life.

IMDB User Rating: 7.1/10

Sunday, August 5, 2007

The Showdown: Blu-Ray vs. HD-DVD

By Michael Grebb

Alan Parsons wishes it wasn't so. But like it or not, the senior vice president of Pioneer's industrial solutions business group has become a wary foot soldier in the battle over the future of the DVD format. As music blares from a band playing at a nearby exhibit at the 2005 International CES, Parsons sits at a small table in a meeting room contemplating how the next couple of years might play out. He remains relatively reserved, trying not to let his passion for the next-generation Blu-Ray Disc format devolve into vitriol against rival format HD-DVD. "I don't like the rock throwing," he insists. "I just want to excite consumers."

That may be true, but Parsons still finds it hard to resist getting in a few digs on the HD-DVD rival, which at about 15 gigabytes per layer has roughly 40 percent less storage capacity than the Blu-Ray format. "They might end up with something ho-hum," he says. "They're saying that [their capacity] is good, but people used to think that five gigs was good enough." Parsons shrugs his shoulders a bit, wearing a look of calm but certain exasperation. "Why would we limit ourselves to a lower capacity?" he asks.



To be sure, Parsons is among several CE manufacturers backing the Blu-Ray format, which they claim is superior to HD-DVD. But the HD-DVD format has its own backers who, while fewer in number, are equally adamant that their format will win out because of its lower transition and manufacturing costs, as well as other technical benefits and its expected quicker time to market. Indeed, either format is a vast improvement over the current DVD design, which maxes out at about 4.7 gigabytes. Even at standard-definition quality, that's barely enough space for a two-hour movie and a few hours of special features. And with that much space, forget about high-definition TV.

Read Full Article

Blu-ray Disc

A Blu-ray Disc (also called BD) is a high-density optical disc format for the storage of digital media, including high-definition video.

The name Blu-ray Disc is derived from the blue-violet laser used to read and write this type of disc. Because of its shorter wavelength (405 nm), substantially more data can be stored on a Blu-ray Disc than on the DVD format, which uses a red, 650 nm laser. A single layer Blu-ray Disc can store 25 gigabytes (GB), over five times the size of a single layer DVD at 4.7 GB. A dual layer Blu-ray Disc can store 50 GB, almost 6 times the size of a dual layer DVD at 8.5 GB.

Blu-ray Disc is similar to PDD, another optical disc format developed by Sony (which has been available since 2004) but offering higher data transfer speeds. PDD was not intended for home video use and was aimed at business data archiving and backup.

Blu-ray is currently leading in the format war with rival format HD DVD.

Here is a Video Demo:




Technical Specifications

1. About 9 hours of high-definition (HD) video can be stored on a 50 GB disc.
2. About 23 hours of standard-definition (SD) video can be stored on a 50 GB disc.
3. On average, a single-layer disc can hold a High Definition feature of 135 minutes using MPEG-2, with additional room for 2 hours of bonus material in standard definition quality. A dual layer disc will extend this number up to 3 hours in HD quality and 9 hours of SD bonus material.
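The "hours per disc" figures above imply average video bitrates that are easy to sanity-check (using decimal giga/mega units, i.e. 1 GB = 10^9 bytes):

```python
# What average video bitrate do "9 hours of HD on a 50 GB disc"
# and "23 hours of SD on a 50 GB disc" imply?

def avg_bitrate_mbps(capacity_gb, hours):
    """Average bitrate in megabits per second, using decimal units
    (1 GB = 1e9 bytes, 1 Mbit = 1e6 bits)."""
    total_bits = capacity_gb * 1e9 * 8
    return total_bits / (hours * 3600) / 1e6

hd = avg_bitrate_mbps(50, 9)    # the HD figure -> ~12.3 Mbit/s
sd = avg_bitrate_mbps(50, 23)   # the SD figure -> ~4.8 Mbit/s

print(round(hd, 1), round(sd, 1))
```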

More

High Definition Video Production

High Definition video production is the art and service of producing a finished video product to a customer's requirements on a High Definition format. High Definition formats are normally either interlaced 1080-line video with a frame rate of 30 fps, or 720-line video, progressively scanned, with a frame rate of 60 fps. However, there are variants that exceed these standards.

Formats

The following digital tape formats are commonly used for HD Video Production:
HDCAM. The Sony HDCAM format supports 1080-line resolutions at frame rates of 24p, 25p, 50i, and 60i. HDCAM stores the video at 1440 x 1080, a 25 percent reduction horizontally from 1920. It also uses a unique color sampling of 3:1:1, which means that HDCAM has roughly half the color detail of other HD formats. HDCAM uses about 4.4:1 compression and is 8-bit, but supports 10-bit input and output.

D5-HD. The Panasonic D5-HD format uses the D5 tape shell. Unlike HDCAM, D5-HD can do 720/60p, 1080/24p, 1080/60i, and even 1080/30p. D5-HD compresses at 4:1 in 8-bit mode and 5:1 in 10-bit mode, and supports 8 channels of audio.

Now Here is a Demo Video from Warner:



DVCPRO-HD. This Panasonic HD format, sometimes called D7-HD, is based on the same tape shell used for DVCAM and DVCPRO. D7-HD does 720/60p and 1080/60i, with 1080/24p in development. It uses 6.7:1 compression, and supports 10-bit input and output per channel. DVCPRO-HD supports 8 channels of audio.

HDV. This format is one of a number of emerging formats that are being used in lower-cost cameras. HDV was introduced with JVC's groundbreaking professional consumer (prosumer) HD camera, the JY-HD10, which records highly compressed MPEG-2 on a mini DV tape. HDV is a MPEG-2 transport stream that includes a lot of error correction. Its video uses interframe-compressed MPEG-2, at 19 megabits per second (Mbit/s) for 720p and 25 Mbit/s for 1080i. Audio is encoded with 384 kbit/s MPEG-1 Layer 2 stereo.

The interframe encoding enables HDV to achieve good quality video at lower bit rates, which means much more content per tape, but it increases the difficulty of editing the content. The next article in this series will provide additional details about interframe encoding.
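Those bitrate figures translate directly into storage economy. A minimal sketch of the arithmetic, expressed as minutes of video per gigabyte (the per-gigabyte framing is ours, not the article's):

```python
# Minutes of video per gigabyte at a given constant bitrate.

def minutes_per_gb(mbit_per_s):
    return 8 * 1000 / mbit_per_s / 60  # 1 GB = 8000 Mbit (decimal)

hdv_720p  = minutes_per_gb(19)  # ~7.0 min/GB
hdv_1080i = minutes_per_gb(25)  # ~5.3 min/GB

print(round(hdv_720p, 1), round(hdv_1080i, 1))
```

For comparison, uncompressed 10-bit 1080i runs at well over 1 Gbit/s, so interframe MPEG-2 is what makes HD on a consumer-priced tape transport feasible at all.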

Matte Film Making

Mattes are used in photography and special effects filmmaking to combine two or more image elements into a single, final image. Usually, mattes are used to combine a foreground image (such as actors on a set, or a spaceship) with a background image (a scenic vista, a field of stars and planets). In this case, the matte is the background painting. In film and stage, mattes can be physically huge sections of painted canvas, portraying large scenic expanses of landscapes.

In film, the principle of a matte requires masking certain areas of the film emulsion to selectively control which areas are exposed. However, many complex special-effects scenes have included dozens of discrete image elements, requiring very complex use of mattes, and layering mattes on top of one another.

For an example of a simple matte, we may wish to depict a group of actors in front of a store, with a massive city and sky visible above the store's roof. We would have two images-- the actors on the set, and the image of the city-- to combine onto a third. This would require two masks/mattes. One would mask everything above the store's roof, and the other would mask everything below it. By using these masks/mattes when copying these images onto the third, we can combine the images without creating ghostly double-exposures. In film, this is an example of a static matte, where the shape of the mask does not change from frame to frame.
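The mask-and-copy logic of a static matte maps directly onto digital compositing: a binary mask decides, per pixel, which source plate shows. A toy sketch on tiny grayscale "images" (the plates and values are pure illustration):

```python
# Static matte: a binary mask decides, per pixel, whether the
# foreground plate (actors/store) or background plate (city/sky) shows.

def composite(foreground, background, matte):
    """matte[y][x] == 1 keeps the foreground pixel, 0 the background."""
    return [
        [fg if m else bg for fg, bg, m in zip(fg_row, bg_row, m_row)]
        for fg_row, bg_row, m_row in zip(foreground, background, matte)
    ]

# 2x3 toy plates: store/actors (foreground) and city skyline (background)
fg = [[10, 10, 10],
      [20, 20, 20]]
bg = [[90, 91, 92],
      [93, 94, 95]]
# Matte: top row shows the city, bottom row keeps the actors/store.
matte = [[0, 0, 0],
         [1, 1, 1]]

print(composite(fg, bg, matte))  # [[90, 91, 92], [20, 20, 20]]
```

Because each pixel comes from exactly one plate, no light from both exposures mixes, which is precisely how the matte avoids the ghostly double-exposure described above.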

Other shots may require mattes that change, to mask the shapes of moving objects such as human beings or spaceships. These are known as travelling mattes. Travelling mattes enable greater freedom of composition and movement, but they are also more difficult to accomplish. Bluescreen techniques, originally invented by Petro Vlahos, are probably the best known techniques for creating travelling mattes, although rotoscoping and multiple motion control passes have also been used in the past.  More...

Thursday, August 2, 2007

Compositing

In visual effects post-production, compositing refers to creating new images or moving images by combining images from different sources – such as real-world digital video, film, synthetic 3-D imagery, 2-D animations, painted backdrops, digital still photographs, and text.



Compositing techniques, while almost exclusively digital today, can be achieved by many means. On-set in-camera effects have been utilized since the advent of film, as in the 1902 A Trip to the Moon. Optical compositing is the often complex process that requires an optical printer to photographically composite the elements of multiple images onto a single filmstrip. However, since the 1990s, digital techniques have almost completely replaced what was once the only method of post-production compositing.

Compositing is used extensively in modern film and television to achieve effects that otherwise would be impossible or not cost-efficient. One common use for compositing is scene or set extension which enables filmmakers to shoot on a relatively small set and create the impression of a significantly different location by adding additional surrounding and foreground imagery. A common tool to help facilitate composites is the bluescreen, a backdrop of a uniformly solid color--usually blue or green--that is placed behind an actor or object. During compositing, all areas of a frame with that color are removed and replaced, allowing the compositor to place the isolated image of the actor or object in front of a separately shot or synthetic background.
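The bluescreen removal described above reduces, at its simplest, to a per-pixel color test. A simplified chroma-key sketch follows; real keyers use soft thresholds, edge blending, and spill suppression, so this hard key is only illustrative:

```python
# Naive chroma key: replace pixels that are "mostly green"
# with the corresponding background pixel.

def is_key_color(pixel, margin=40):
    """True if the pixel's green channel dominates red and blue."""
    r, g, b = pixel
    return g > r + margin and g > b + margin

def chroma_key(foreground, background):
    return [bg if is_key_color(fg) else fg
            for fg, bg in zip(foreground, background)]

# A one-row example: an actor pixel amid green-screen pixels.
fg_row = [(0, 255, 0), (200, 180, 150), (10, 250, 20)]
bg_row = [(50, 50, 80), (50, 50, 80), (50, 50, 80)]

print(chroma_key(fg_row, bg_row))
# [(50, 50, 80), (200, 180, 150), (50, 50, 80)]
```

The choice of blue or green in practice comes down to the same test: pick a key color that the foreground subject (skin, costume) does not contain.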

In feature film, movies are generally shot on 35mm film. For modern compositing, the film has to be digitized with a film scanner and transferred to a computer, where it can be edited. The compositors gather all the separately shot images and, with a compositing platform or software, combine elements of each image to achieve a resultant shot. As a result, a single frame of the finished shot may contain anywhere from two to many hundreds of images, from footage shot months or even years apart.

Bullet time

Bullet time (or bullet-time) is a computer-enhanced simulation of variable-speed photography (i.e., slow motion, time lapse, and the like) used in recent films, broadcast advertisements and computer games. It is characterized both by its extreme permutation of time (slow enough to show normally imperceptible and un-filmable events, such as flying bullets) and of space (by way of the ability of the camera angle--the audience's point of view--to move around the scene at a normal speed while events are slowed).

This is almost impossible with conventional slow-motion, as the physical camera would have to move impossibly fast; the concept implies that only a "virtual camera," often illustrated within the confines of a computer-generated environment such as a game or virtual reality, would be capable of "filming" bullet-time types of moments. Technical and historical variations of this effect have been referred to as time slicing, view morphing, slo mo, temps mort and virtual cinematography.
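Conceptually, a bullet-time rig records the same instant from many viewpoints, and playback then walks across the camera arc while time is frozen or crawls forward. A toy sketch of that frame selection (the rig layout and naming are our illustration, not a description of any specific production's setup):

```python
# Bullet-time playback: each physical camera records every instant;
# the edited shot picks (camera, time) pairs, sweeping across the
# camera arc while time stands still.

def bullet_time_sequence(num_cameras, freeze_time):
    """Sweep across all cameras at one frozen instant."""
    return [(cam, freeze_time) for cam in range(num_cameras)]

# 5-camera arc, frozen at time index 12: the viewpoint orbits
# the scene while the action stays still.
print(bullet_time_sequence(5, 12))
# [(0, 12), (1, 12), (2, 12), (3, 12), (4, 12)]
```

In practice the in-between viewpoints are synthesized by interpolation (view morphing) between adjacent cameras, which is why the effect reads as one smoothly moving "virtual camera."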

More...