
Interview

Gwilym Morris p2

Gwilym Morris, co-director of Retrofit with Lloyd Morris, talks about bringing together the people who made the film, film professionals from around the world, and about the stages of computer-generated animation...

READERSVOICE.COM: Where did you get all the people involved, from the composer of the title track to the top-notch animators, to actors like Frank Fitzpatrick and the rest of the crew?

GWILYM MORRIS: Both Lloyd [co-director Lloyd Morris] and I work in the industry, so the crew were talented people we knew. The sound and camera crew were provided through Tom’s contacts. Camera equipment was provided by Liran at www.why116.com. Sound was provided by Matt, Howard and Josh. Lloyd’s wife is a make-up artist and provided those services on the day. Frank, the voice of the robot, is an amazingly talented actor whom I knew through another friend who co-wrote Retrofit. The score was provided by Barn from Bob & Barn, people I had the pleasure of working with before, so it is all linked through contacts and people kindly giving up their free time to create a short film. We are lucky our industry is filled with people who love the craft so much they are willing to give everything to a project. We are very lucky.

RV: I read somewhere that each second of animation in programs like Maya takes about 20 frames and each frame takes a long time to render. Is it getting easier for filmmakers to render high-quality CGI films like this, or does it still take numerous computers?

GM: Films usually run at 24 frames a second, and each frame requires a lot of work to give the illusion that a CG character is really in the scene. To give you an example, when you film a scene, the filmed plate is just a background image and the computer image is just superimposed over the top. The computer doesn’t know what is in front or how we moved the camera, and if you were to put the robot on the filmed footage at this stage it would be no different from adding text at the start of a movie. So the illusion has many stages, and it’s the same on all movies with computer-generated effects added.
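[To make the “superimposed over the top” idea concrete, here is a minimal sketch of the standard “over” composite, assuming NumPy and purely synthetic arrays in place of real footage; the names, sizes and region are illustrative only, not taken from the Retrofit pipeline.]

```python
import numpy as np

def composite_over(fg_rgb, fg_alpha, bg_rgb):
    # Standard "over" operation: the CG layer, weighted by its alpha
    # channel, is laid on top of the filmed background plate.
    a = fg_alpha[..., None]                  # broadcast alpha across the RGB channels
    return fg_rgb * a + bg_rgb * (1.0 - a)

# Illustrative stand-ins for real footage: a 1080p plate and a rendered CG layer.
h, w = 1080, 1920
plate = np.random.rand(h, w, 3)              # filmed background plate
cg_rgb = np.zeros((h, w, 3))                 # rendered robot layer (flat black here)
cg_alpha = np.zeros((h, w))                  # alpha: 1 where the robot covers the frame
cg_alpha[400:700, 800:1100] = 1.0            # pretend the robot occupies this region

frame = composite_over(cg_rgb, cg_alpha, plate)
print(frame.shape)                           # (1080, 1920, 3) -- one of 24 frames per second
```

[Pasted on like this, without the matched camera described in step 1 below, the robot really would be no different from a title card over the footage.]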
Before the process of creating the robot starts, designs are drawn up. Georgi and Angelo provided us with beautiful sketches as a blueprint of what the robot would look like on paper.
Then, on the day of the shoot, on-set data is gathered in the form of measurements, lighting data and hundreds of photographs. Dan and William did a great job doing this around the busy film crew. Ross, our 1st AD, also did a good job of filming elements like the spoon the robot holds in his hand, which was later cut out and added. Benn did an incredible job supervising production.
Once in the computer, the following is needed to create a single frame:

1. Matchmove – The computer has to be told to recreate the exact camera move done in real life, or the image placed over the top will slide around and the illusion is broken. The computer wasn’t on set, so it doesn’t know how we moved the camera; data such as the camera height, the lens used and so on is fed into a complex computer program, and talented people such as Chris and his team, consisting of Anas, Will, Evan and Malcolm, tweak the data and produce an exact match of the movement in the shot. So now, even if the object is not animated, it will still look like it’s in the scene, like a statue. (A rough sketch of this idea appears after step 3 below.)

2. Modelling – The object that had been beautifully designed by Georgi was then modelled by Tibor, who put in intricate details, including pistons and wires, so that it looked like it could have been created in real life. This is the base of what you see on screen.

3. Texturing – The next stage is making sure the object is painted to give the illusion that it is made of real materials: rusty metal, plastic, peeling paint. Petter, assisted by Jaromir, did a very nice job painting the textures and applying them to the object.
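[As a rough illustration of the matchmove step in point 1, the sketch below projects a single fixed point on the robot through a simple pinhole camera, assuming NumPy; the focal length, positions and the 0.5 m dolly are invented numbers, not real on-set data. When the virtual camera repeats the real move, the point shifts across the frame the way a real object would; when it does not, the point stays glued to the same pixels and the robot looks pasted on.]

```python
import numpy as np

def project(point_3d, cam_pos, focal_px, cx, cy):
    # Simple pinhole projection of a world-space point into pixel coordinates
    # for a camera at cam_pos looking down +Z (rotation omitted for brevity).
    x, y, z = point_3d - cam_pos
    return np.array([focal_px * x / z + cx, focal_px * y / z + cy])

# Invented values for illustration: a long-ish lens on a 1920x1080 plate.
focal_px, cx, cy = 1500.0, 960.0, 540.0
robot_point = np.array([0.0, 1.2, 5.0])      # a point on the (static) robot

# The real camera dollies 0.5 m to the right between frame 1 and frame 2.
cam_frame1 = np.array([0.0, 1.5, 0.0])
cam_frame2 = np.array([0.5, 1.5, 0.0])

# Matched move: the virtual camera repeats the real motion, so the robot
# shifts in the image exactly as a real object would.
print(project(robot_point, cam_frame1, focal_px, cx, cy))   # [960. 450.]
print(project(robot_point, cam_frame2, focal_px, cx, cy))   # [810. 450.]

# Unmatched move: the virtual camera never moves, so the robot stays on the
# same pixels while the plate behind it shifts -- the "sliding" broken illusion.
print(project(robot_point, cam_frame1, focal_px, cx, cy))   # [960. 450.]
print(project(robot_point, cam_frame1, focal_px, cx, cy))   # [960. 450.]
```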

-continued next page…
-copyright Simon Sandall.