Article archive for the category: animations

Light Source

Project done in the Introductory 3D course at ITU in spring 2014

Re-render of the original animation handed in as an exam project in the Introductory 3D course at ITU. This rendition is in HD 1080, where the original render was in HD 720. An attempt was also made to increase quality by changing the Final Gathering Accuracy from 100 to 600 and Point Interpolation from 10 to 50. This dramatically increased rendering time, especially inside the tube, where each frame took an hour to render; caching of Final Gather points could probably have been employed with a Final Gather Map. Total waiting time was lessened by using 10 lab computers at ITU; licensing wouldn't allow using more computers simultaneously.


Original render of the animation handed in as an exam project in the Introductory 3D course at ITU. There isn't much difference to be seen when compared with the re-render; it is most noticeable in the light cast from the globe onto the surrounding floor and ceiling with the MIA Light Surface Shader. In this version it's quite coarse, and a bit less so in the re-render, mostly due to a higher Final Gathering Point Interpolation value, but even there it's not as smooth as I would like.


The concept behind the project discussed in this report has taken many shapes, as ideas have brewed during the course of this spring semester.  As a Games Technology student, I could have focused on creating game assets of some sort, but I was delighted to learn about the freedom offered in the Introductory 3D course.  If the outcome of the project leaves the viewer wondering about what just happened, then it is a success.

Though the first idea was inspired by a game concept that I had just started working on, my initial aim was only to create a visual experience rather than a real asset usable in a game.  The concept then changed as new inspiration struck, but the aim was still to create visuals only.  Later in the brainstorming process, that new concept led to ideas for possible gameplay, so the intended pure visual was now inspiring the creation of an interactive experience, and the process had come full circle: starting with an idea based on game development, aiming for an art piece, and evolving it into something that inspires further game development.

Concept evolution through constant inspiration

The evolution of ideas for this project, from a focus on shapes to the components of light, was fueled by objects encountered in everyday life.


These days I’m constantly exposed to children’s toys, and one toy in particular, offering the matching of differently shaped objects to correspondingly shaped holes, has provoked thoughts about basic forms and how they can be the basis of more complex shapes.  Everything in the universe has evolved into its current shape from simpler building blocks, which themselves are composed of smaller, more primitive elements.  An individual’s abilities also develop from simple tasks to more accomplished movements and thoughts.


This evolution and constant change inspired an idea for a visualisation where we would follow a primitive shape, travelling from the core of a sphere through a pipe to the surface, and during the travel the shape would morph into a more complex shape resembling the outlines of a country, which we would then see as a patch on the sphere’s surface.


Alongside this visualisation idea I had taken small steps towards creating a game where the player would have the task of matching increasingly complex shapes to their corresponding slots, where the most complex shapes would resemble the outlines of continents.  That concept was placed outside a sphere rather than inside it, like floating in orbit around the Earth.  When the continental shapes would be matched, they could be seen falling onto the sphere’s surface, forming a collection of the Earth’s continents.

Prototype in Unity for a game called Fall into Shape, in a setting inspired by the stage separation of the Saturn V rocket


Light play


All the ideas connected to this project were based on shapes morphing into others, until I noticed a globe model when walking past an office window at ITU one evening.  I became fascinated by this kind of object, which I had in my bedroom as a child, and started thinking about the source of that globe’s light.  In the case of the model, it is obviously a regular light bulb on the inside that lights up its surface.  A few years ago I saw glowing lava light up the night sky in Iceland, during the notorious “Eyjafjallajökull” eruptions, and there the light source was molten rock, lava, flowing from the Earth’s core.  Here was born the possibly surrealistic concept of light traveling from the core of the Earth, through lava tubes[1], to the surface.

This concept of light inside and outside the Earth led to ideas for possible gameplay, with a connection between the seven components of light that can be seen in the rainbow and the seven continents of the Earth.  Each continent would have seven openings, or portals, into tubes leading to the Earth’s core, each in one of the seven rainbow colours.  The player would pick a portal, which would lead her through a tube to the core, where light particles could be seen floating around[2], each emitting light in one colour of the rainbow.  There the player would have the task of picking one light particle of the same colour as the portal that was entered, and then picking a tube opening leading back to a portal of that same colour.  The indication of which tube to pick would be its shape, which could be learned by traveling through one tube of that shape to the core.  If the player picked the correct type of tube, the light particle would travel through it and shine at the portal on the surface, and the player could proceed to another portal; if not, the light particle would travel to the surface and back to the core, leaving the player to have another go at completing a portal.

During a presentation of these concepts to students in the Introductory 3D course, the connection between the fitting of shapes and the matching of coloured lights was pointed out to me; both are puzzles that require the player to find the correct fit.  I was glad to hear that others saw some coherence in these concepts, where I could just as well have expected them to be perceived as a confusing soup of ideas.

As an extension to this coloured light chase, an array of globe models could be set up, each textured with paleogeographic maps[3] showing how the Earth may have appeared at various increments of time.  In this setting, a game could revolve around fitting a light particle to each portal on one globe before proceeding to the next level, in the form of another globe with a map showing the continents’ layout in another time slice; in the process the player would gain some idea of Earth’s tectonic evolution.  Though the game would not be branded as educational, it might have some educational value as a by-product.



Realising the concept discussed above involved modeling, texturing, lighting and animation, detailed below.



The first task in the modeling process was to create the framing for the globe model.  To solve it, I was given the advice to create a circular arc with Maya’s Arc Tools and extrude a box along it.  The base and stand were created from cylinder primitives, shaped with soft selections on their vertices.


It was fairly simple to model the globe itself from a sphere primitive, but the twist was that its surface had to be visible on both the inside and the outside.  To achieve that, all faces of the sphere were extruded inwards and the normals of the inner layer were reversed to face towards the globe’s core.  The sphere was created with an extremely high polygon count (198,394 faces for both layers) solely to allow flexibility in where to cut holes for the tubes that were to pass through it.


Modeling the tubes started with creating CV Curves to define the path along which to extrude their profile.  Four CV Curves were created by adding their control vertices in spiral shapes, in one of Maya’s orthographic views (top).  In a perpendicular view, soft selections on the CVs (with falloff mode set to surface) were used to shape the curve path in 3D.


Two options were considered for creating the tube models along the curve paths: polygon extrude and surface extrude[4].  The latter was chosen, as UVs are already created with that method, so texturing requires no further effort.  Two concentric circle paths were created with the Arc Tools to define the inner and outer layers of the tube profiles.  Each of them was surface-extruded along the curve paths and the resulting meshes combined by bridging their edges at the ends.  Normals of the inner tube layer were reversed to face inside the tube, as was done for the sphere.






All textures in the scene lie on the two sides of the main globe and on the inside of another, encompassing globe.  The faces of the main globe’s inner layer were selected and a texture applied to them, different from the one on the outside layer.  A color map and a bump map[5] of the Earth were used on the inside, where the texture happens to be flipped; this could have easily been fixed by flipping the texture in an image manipulation program, but it was considered a good surrealistic effect.  The outside faces of the globe were textured with a bathymetry map of the Earth, for its nice visual effect of black continents and light blue sea[6].  A cloud map composed of satellite images of the Earth was applied to a sphere encompassing the whole scene, with normals facing inwards.



Different kinds of textures were considered for the tubes, ranging from a custom gradient resembling the layers of the inner Earth or the inside of a hala fruit[7], to a bump map imitating the ribs of plastic tubing, or even a tree bark texture that could give the illusion of enormous trees growing from the Earth’s surface.  Brushed steel and chrome Mental Ray material presets were tried.  Even the gray default material was considered, as it was visually pleasing in this context.  But the final decision was to leave the tubes untextured, even though textures were easy to apply thanks to the surface extrusion, and instead tint them with the seven colours of the rainbow, using a Mental Ray material (mia_material_x) with a rubber preset.  Though this decision would not make sense in a game where tubes had to be selected by colour (it would make that too easy), it makes sense for this project, which is focused on the visual experience, and those coloured tubes look delectable.




The lighting for the inside of the main sphere consists of one point light located at the center (core) and another point light attached as a child of the camera (in the Outliner), so that it lights up the inside of the tube as the camera travels through it.  As the project uses the Final Gathering technique offered by Mental Ray, a considerable glow saturated the whole scene when rendering inside the globe; this was found to be due to light bouncing off the inside faces of the globe.  The glow was mostly removed by disabling the Final Gather Cast Mental Ray option for all the faces of the inner globe layer[8].



For light emitting from the portals on the Earth’s surface, spotlights were created with a Light Fog effect to simulate the beams of a disco light.  To ease the creation of the required 29 spotlights, duplicates were made from an initial spotlight that had been created and adjusted, and each duplicate was rotated into its position with the pivot located at the world (Earth’s) center.  Only when all spotlight duplicates had been positioned was it discovered, by rendering a one-frame sample, that duplicating spotlights with the fog effect does not give good results; all spotlights rendered with a yellowish tint, even though they had been configured with the seven rainbow colours, and further research on the web confirmed that this was to be expected.  So the only way to go was to manually create each spotlight from scratch, configure it with the right colour and fog effect, and transform it to an appropriate position.  This was one of the most time-consuming tasks of the project.



The most interesting lighting technique used in the project is Object Based Lighting with the MIA Light Surface Shader, where the material applied to the outside layer of the globe emits light according to the bathymetry texture used with the material – a feature offered by the Final Gathering technique – simulating the glow of a typical globe model with a light bulb inside.  The result is a somewhat grainy glow on the surfaces around the globe, probably due to the low resolution of the texture used; a less grainy result was obtained by increasing the Accuracy value for Final Gathering in the Render Settings (from 100 to 600 in one case).  Higher accuracy values resulted in much longer rendering times, so the default of 100 was used for the animation batch render.




The animation was to take the viewer from the inside of the globe, through one of the tubes, to a view of the globe’s exterior.  One of the first decisions was to animate the camera along the same curve as was used to create the tube it would travel through.  Manual keyframe animation was initially used to move the camera in the first part, before entering the tube.  Then there was the question of how to transition smoothly from the keyframe animation to the guided animation along the tube path.  One option was to have two cameras and switch between them at the transition point, but I preferred to have one curve for the whole path and tweak it visually to my requirements.  To that end, two other curves were created for the animation segments before and after the tube travel.  The Bezier Curve Tool was used to create them, as I am more familiar with working with those kinds of curves.


When the curves had been attached, the camera movement along the newly created parts was shaky.  Applying smoothing to the vertices where movement was rough made the situation a little better, but when vertices were moved they got sharp corners again, and the shakiness reappeared in the animation.  Part of the problem may have been that the tube curve was initially a CV Curve while the new curves were Beziers.  Converting the combined curve to a NURBS curve and smoothing the whole curve didn’t completely solve the problem; movement was still hard to control around the points of attachment and sharp corners prevailed.  It wasn’t until I found the Rebuild Curve option on the Edit Curves menu that the problem was solved, with a resulting smooth curve that was easier to control.


Timing along the motion path curve was controlled with position markers.  Orientation of the camera along the path was controlled with orientation markers on the curve with keyed values of Up Twist, Side Twist and Front Twist.  The possibility of blending keyframe and motion path animation in Maya was considered[9], but controlling those twist values was sufficient.

Camera movement is wobbly during the first seconds of the animation along the rebuilt curve, but instead of ironing that out, which could now easily have been done, it was kept for its nice effect of a chaotic introduction to this surrealistic world.



Several music pieces were considered for the animation soundtrack, including Wave Dub by Dope on Plastic[10], Halcyon (On and On) by Orbital[11], Down Down by Nils Frahm[12], Justice One by Drokk[13], and S.A.D. by Mind in Motion[14].  The last minute of S.A.D. was finally chosen, as the change in mood midway through that part is a perfect fit for the transition from the inside of the globe to its exterior, and the rave music of the first part goes somehow well with the colorful, psychedelic pipes.  Importing the audio file into Maya helped synchronize the animation to the soundtrack.


Only a little over two days before handing in this project, rendering commenced.  As I have a five-year-old desktop computer at home running Ubuntu Linux, and I discovered that Maya is offered for the Linux operating system, I decided to try to use it for the batch rendering of the animation.  As only a 64-bit version of Maya for Linux is offered, and that home computer was running a 32-bit version of the operating system, I decided it was time to re-install a 64-bit version of Ubuntu on the machine.  Having done that, I had to jump through a few hoops to be able to run Maya in that environment[15].

After initiating the batch render process, it became apparent after the first few frames that this one machine wouldn’t finish rendering the animation in time at HD 720 resolution.  So it was clear that I would have to utilise more machines for the task, and the lab computers at ITU were certainly an option.  After rendering 500 frames in around 18 hours, the Maya installation became corrupt on the Ubuntu machine for some unknown reason (logging into another account, resulting in a login prompt freeze and a subsequent reboot of the computer, was the start of the trouble).

So the ITU lab computers were now the only option for finishing the rendering.  A few hours later I had manually created a rendering cluster by starting batch rendering processes on seven computers at ITU, each set to render a range of 200 frames.  The next morning I saw that the lab machines had successfully finished the rendering, and I was able to fetch the files from my file space at ITU over the Internet (SSH).
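The splitting of the animation into 200-frame chunks, one per lab machine, can be sketched as a small shell loop.  The scene path is hypothetical and the commands are only echoed (Maya's command-line renderer is `Render`, with `-r mr` selecting mental ray and `-s`/`-e` the start and end frames), so the sketch runs without Maya installed:

```shell
# Print one Maya batch-render command per frame chunk.
# The scene path is an assumption; commands are echoed, not executed.
render_commands() {
  total=$1
  chunk=$2
  start=1
  while [ "$start" -le "$total" ]; do
    end=$((start + chunk - 1))
    if [ "$end" -gt "$total" ]; then end=$total; fi
    echo "Render -r mr -s $start -e $end scenes/lightsource.mb"
    start=$((end + 1))
  done
}

render_commands 1400 200   # seven machines, 200 frames each
```

Each printed command could then be started on one lab machine, for instance over SSH.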

To assemble the rendered frames, in the TIFF image format, into a movie file with the audio track, I used the avconv command-line tool (an ffmpeg fork)[16].  To add opening and ending titles, I imported the assembled file into iMovie, from which the final result was exported.
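A typical avconv invocation for muxing an image sequence with an audio track might look like the following sketch; the file names, frame rate and codec choices are assumptions, not the command actually used, and the command is only echoed so the sketch runs without avconv installed:

```shell
# Build a hypothetical avconv command: TIFF sequence + audio -> movie file.
# Frame rate and codecs are assumptions; the command is echoed, not run.
assemble_cmd() {
  seq_pattern=$1   # e.g. frames/frame.%04d.tif
  audio=$2
  out=$3
  echo "avconv -r 24 -i $seq_pattern -i $audio -c:v libx264 -c:a aac -shortest $out"
}

assemble_cmd "frames/frame.%04d.tif" soundtrack.wav animation.mp4
```

The `%04d` pattern matches zero-padded frame numbers as written by the batch renderer.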


It’s been delightful to be introduced to the many facets of 3D rendering and get to know some of the many parts of Maya.  Back in 1996 I did some 3D graphics in 3D Studio for DOS[17] and have fond memories from that period, so it is especially interesting to become acquainted with a mature, modern tool like Maya.

Having a basic knowledge of 3D modeling, texturing, lighting, animation and even character rigging, will be a valuable tool for realising ideas for games or other interactive experiences.  This project turned out to be a visual experience with the possibility of becoming a basis for something interactive:  Being able to take ideas and concepts in whatever direction feasible gives a great sense of freedom.




Björn Þór Jónsson



[2] One idea is to represent the light particles by small lighthouses, modeled after the lighthouse in this picture, taken by my wife: 

[3] Paleogeographic maps: 

[4] The extrusion of tube profiles along curves was based on this article:  Creating Rope And Tubing In Maya 

[5] Color and bump maps, and clouds were obtained from this tutorial: 

[6] Bathymetry is the underwater equivalent of land topography: 


[8] This Digital-Tutors lesson helped with learning how to define which objects cast rays in Final Gathering: 

This screenshot shows where the option was unchecked: 

[11] Orbital – Halcyon On and On:

[15] To be able to install the latest Maya 2014, service pack 4 version, I modified an install script, as can be seen here: 

[16] Command used to assemble the video file: 

[17] 3D graphics done in 1996 with 3D Studio for DOS: 

Volcanic time lapse spanning two months

After cleaning up all the still images from the volcano webcam for the eruptions at Fimmvörðuháls and Eyjafjallajökull in 2010, I wanted to assemble a continuous time-lapse video from all that material, and now I've finally done that.  The result is a movie-length video covering those two months of eruptions.  Well, actually two videos, from wide and narrow angles, that may be played together:

The source files can be found on

(I decided not to upload the 18 GB DV files generated from the still images, but converted them to MP4 with avconv instead.)

Now to put popcorn in a bowl…

Content aware cleanup for video

Last September a film production company asked to use the time-lapse webcam footage I assembled during the eruptions at Fimmvörðuháls and Eyjafjallajökull.  They preferred the footage without logos, but Vodafone in Iceland, who had maintained the webcam, did not have clean versions of the images.  Cropping would have been an option, but then interesting parts of the frame would have been lost, so I looked into using the content-aware fill techniques that have recently appeared in image editing programs.

Still frame from: Eyjafjallajökull volcanic eruptions in 2010

First I tried the Resynthesizer plugin for GIMP, and it did a fairly good job.  But to run the whole set of images through it (one image for each minute, 1440 images for each day), I would have needed to write something in Script-Fu, as GIMP cannot record user interactions into a batch job (open a file, make a selection, run Resynthesizer on it, and so on).  That was indeed an option, as I don't mind trying out new programming environments.
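The scale of that batch, one image per minute of the day, can be sketched with a loop that enumerates a day's worth of filenames; the HHMM naming scheme is an assumption, not the webcam's actual one:

```shell
# Enumerate one hypothetical image filename per minute of a day (1440 in total).
# The img_HHMM.jpg naming scheme is an assumption for illustration.
minute_files() {
  h=0
  while [ "$h" -lt 24 ]; do
    m=0
    while [ "$m" -lt 60 ]; do
      printf 'img_%02d%02d.jpg\n' "$h" "$m"
      m=$((m + 1))
    done
    h=$((h + 1))
  done
}

minute_files | wc -l   # 1440 files for one day
```

A Script-Fu or Photoshop batch job would then loop over a listing like this, one clean-up pass per file.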

Still frame from: Eyjafjallajökull volcanic eruptions in 2010

But I decided to also try the new Content Aware Fill feature in Adobe Photoshop, and it did a better job cleaning up the areas containing the logos.  And as it is very easy to create batch jobs from user interactions in Photoshop, it was the choice for this task, which took about a week crunching through the 75,888 images from the period when the eruptions lasted and the webcam was in operation: March 24th (the eruption started on the 21st) to May 21st (the eruption ended on the 23rd).  For each image, a mask was created to limit the area the content-aware filter would work on, four selections were created to apply the filter to, and the image was flattened.

Still frame from: Eyjafjallajökull volcanic eruptions in 2010

The results are quite impressive.  On a few frames the results are not so smooth and could have been done better by hand, but I didn't bother; they just appear as slight glitches when the frames are played through, at 12 per second or faster.  Here are a few days with the time-lapse videos before and after, for comparison:

P.S. On May 13th at minute 1:30, I can be seen wearing a scarf, a black sweater and gray pants, carrying a red and gray backpack :)

Freshly baked eruption videos daily


The weather at the eruption site on Fimmvörðuháls cleared up late in the day on my birthday, so just after noon we decided to drive east to Þórsmörk: my brother-in-law Vignir and three daughters, along with Bjarni Þór and Stefán Þór, the two of whom bounced along with me in the little jeep into the Mörk – drei Männer im Jimny.  This was April 3rd and my third trip to the eruptions; earlier I had flown over and gotten a jeep ride to the lava eruption.

On the way east, the Eyjafjöll mountains were swollen with storm clouds, but it cleared up in the afternoon as we drove in along the Markarfljót river, and the weather in Básar was wonderful.  There we heard that the view was best from Útigönguhöfði, so we headed that way instead of climbing Morinsheiði as planned; but at the trail junction by Réttarfell we heard from well-travelled hikers that the view was best from that fell, so we walked up there, and the view into the newly opened fissure was really spectacular.

Then one time, when I was looking at one of the webcams pointed at the eruption, it occurred to me to harvest images from it, one for each minute, and assemble them into time-lapse videos at twelve frames per second.  I made an experiment, put a few videos in the folder, and showed them to the webcam's administrators, who were very pleased with the initiative; now the process has settled into a routine where, every day, a video made from the previous day's images is automatically published to the web, and whole weeks have also been compiled.

This is implemented in Bash shell scripts that use the programs cURL, ImageMagick and FFmpeg, and are driven by cron.  The scripts are written in Vim and ShellEd and can be read at
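A minimal sketch of such a cron-driven daily job follows; the paths and ffmpeg flags are assumptions rather than the actual scripts, and the command is echoed rather than executed so the sketch runs without ffmpeg installed:

```shell
# Sketch of the daily pipeline: cron runs this once per day to build a
# 12 fps time-lapse from the previous day's per-minute webcam images.
# Paths and flags are assumptions; the command is echoed, not run.
daily_job() {
  day=$1   # e.g. 2010-04-05; a cron job would pass "$(date -d yesterday +%F)"
  echo "ffmpeg -r 12 -i frames/$day/%04d.jpg -vcodec libx264 videos/$day.mp4"
}

daily_job 2010-04-05
```

At one frame per minute played back at 12 fps, a full day of 1440 images yields a two-minute clip.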


Sea swimming on June 17th, 2009, when Birkir, Björn Þór and Orri swam from Kópavogur across Fossvogur to Nauthólsvík, almost as it previously appeared on Facebook, but here after another iteration, tinkered with and driven by German electropop:

After a good garden hot-dog party at Birkir and Malin's place late in the day on June 17th, they came over with Sindri, along with Stefán Þór and Sólveig, for continued dining, and later in the evening Brynjar dropped by.  As the evening wore on, we started looking at the HD footage that Bjarni Þór had shot on Birkir's 5D.  The editing program I use can't play these heavyweight HD files in real time as they come off the beast – it's all supposedly much better on the Mac – but once I realised I should let it render preview files first (press Enter on the timeline), working with the material was perfectly smooth.  Then at eleven o'clock, when Birkir and I had driven the other guests away with our nerdiness, we decided to put something together from this and gave ourselves an hour.  An hour and a quarter after that deadline had passed, the output was ready, and we blasted it onto Facebook.

This weekend the weather has been good for staying indoors, and I wanted to tinker a bit more with this, plane down a few edges and drive it forward with Kaltes Klares Wasser from the album Chicks On Speed Will Save Us All!  The rhythm in the clip now reminds me a little of the act of hammering nails into wood.




FUNDURINN (The Meeting) is based on the surreal experience the solo performer had when he came upon his colleague in the state that the film tries to recreate.

A shorter version of the film premiered at the HugarAx annual party on May 2nd, 2009, in Stykkishólmur.  The trailer was made for fun and put on the company's intranet a few days earlier.  The film itself was not made for fun, but still drew laughter at the premiere.  I don't know why.