is no game

It seems like our concept of the Game of Likes is moving towards becoming no game at all.  Though we’ve had quite a few ideas for games to be played with liked and posted media items, we’re leaning towards a focus on exclusively offering playful exploration of the history of online activity in the form of interaction with multimedia on the various social websites, leading to digital introspection – a term coined by Edda which I really like.

Before our last meeting with our supervisor, Edda literally woke up with the vision of creating an interactive installation based on this project for an art gallery, where exhibition guests could form cubes based on their media likes and have them displayed on a large screen, along with cubes created by other guests.  Later we've learned that this idea could be compared to projects like Listening Post and The Event of a Thread.  Edda presented this idea in our last meeting, and I discussed the possible proper games that could be created in this context; the simplest one being a quiz where two participants would receive challenges involving questions like which one of three multimedia items the opponent likes best, or what order of preference the opponent chose for the given items – an interaction similar to, for example, the Big Web Quiz.

Our supervisor really liked the interactive installation idea.  He also pointed out that the games I described could in fact be created by analog means, while an interactive installation based on online multimedia items collected from personal activity databases would be unique and exclusive to the digital domain.  Yes, I must agree, and in fact I referenced already existing paper versions of some of our game ideas in the last Game of Likes post.  I also like the idea of an interactive installation, though as far as I'm concerned it doesn't have to be in an art gallery; it can just as well live in the app stores and on the web, or in both contexts (I'm flexible like that ;) ).

Though we will most likely keep the current focus on playful exploration for the coming weeks, I still cling to the idea of offering simple games to play within this environment, for those interested in participating.  In any case, those ideas will have a low priority for now, and the remaining time for our thesis project will probably not allow any such implementation.

Thinking inside the box

While tying my shoelaces in the men's room at a local swimming hall, the small shoe lockers in front of me caught my attention.  Each locker was box shaped and reminded me of the like-cubes we have been thinking about.  One of the lockers was open, so I could see inside that box, and that led to thoughts about the possibility of not only allowing the like-cube creator to marvel at its exterior, but also allowing her to look inside the cube, where she could write a note on why the cube is decorated with the selected multimedia items and what significance they hold for her.  When I came home and told Edda about this idea, she immediately connected it to a message in a bottle; the act of writing a message inside the cube, before sending it off into the ether, could be compared to the act of placing a message in a bottle before throwing it into the sea.  Of course!

This message would be written on one side of the cube's interior, like personal graffiti on one wall of a bathroom stall, and the other sides would be free for other types of communication, like location, tags and general profile information.

Sketch of entry inside a like-cube, where a personal message can be written describing the motivation for the choice of exterior multimedia decoration.
Shoe locker causing thoughts inside the box.  Using phones and cameras inside this men's room is strictly prohibited and I have to admit that my heart paced a little faster while using a phone camera to document this sight.

Automatic exploration for the people

Being annoyed by the buttons that I had designed into the user interface, thinking they reminded me more of a tax return interface than anything playful, I've thought about how to get rid of fixed-place buttons as much as possible.  In the so-called breeding interface, where media item selection is performed by requesting random new or related items, I was thinking about allowing the user to tap each surface to exchange it for a new one, instead of pressing the fixed Roll button that swaps out all unheld items at once.  This could be an interesting form of interaction, but the tapping gesture on each surface is reserved for showing the media item in full size, as the screen allows.  Showing the media items in full size, with a reference to their origin, I think is important both for allowing full detail to be viewed and for giving due credit to the source of the content.

When discussing with Edda how this particular interaction could do without a fixed button, she came up with the idea of having the media surfaces flow automatically in and out, each time showing new items from your online activity, instead of requiring you to manually press the Roll button. The only required interaction would be to press the lock on each item that you’d like to keep, while allowing the other free items to keep flowing. Only when all items had been locked would they gather into a cube, instead of when the Create button would be pressed.

This method of user interaction could offer a more relaxed, hands-off exploration of the self, in the spirit of Slow Technology, where one can't click through the content as fast as possible but rather has to sit back and watch the flow, only interacting when one sees fit – this interface could even be left unattended as a decoration „in the periphery of our attention, continuously providing us with contextual information without demanding a conscious effort on our behalf".  Furthermore, this idea has led to thoughts about an interface with no fixed buttons at all, but rather with options that would flow into view at random moments.
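Roughly, the flow-and-lock loop could be modelled something like this – a minimal sketch in plain JavaScript, where `createFlow`, `tick` and `lock` are names made up for illustration, not the actual implementation:

```javascript
// Hypothetical model of the flowing media surfaces: unlocked slots keep
// receiving new items from the activity stream; locking a slot keeps its
// item, and once all slots are locked the items gather into a cube.
function createFlow(slotCount, itemStream) {
  const slots = Array.from({ length: slotCount }, () => ({
    item: itemStream.next().value,
    locked: false,
  }));
  return {
    slots,
    // Called on a timer: every unlocked slot flows on to the next item.
    tick() {
      for (const slot of slots) {
        if (!slot.locked) slot.item = itemStream.next().value;
      }
    },
    // The only required interaction: lock an item you'd like to keep.
    lock(i) { slots[i].locked = true; },
    // Only when all items are locked do they gather into a cube.
    cubeReady() { return slots.every(s => s.locked); },
    gather() { return this.cubeReady() ? slots.map(s => s.item) : null; },
  };
}

// Endless stream of media items (here just numbered placeholders).
function* mediaItems() {
  let n = 0;
  while (true) yield `item-${n++}`;
}
```

Left alone, `tick` keeps the surfaces flowing indefinitely, which is what would let the interface sit unattended as a decoration.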

For rotating the cube insides, I had thought about navigation buttons in a footer toolbar, either one button for each direction or one single button that would cycle through all possible movements.  While the exterior view of the cube can be rotated by simply touching its sides, the touch gestures for the surfaces on the insides of the cube might be reserved for the content being displayed, like scroll and map views, thus those thoughts about navigation buttons.  In an effort to be rid of fixed buttons altogether, I’ve now implemented grip surfaces of sorts for the insides, represented by a dotted pattern, which can be reserved for navigation inside the cube, while the rest of the surfaces are free for other kinds of touch movements.
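The gesture split between grip and content surfaces could be sketched like this; the `isGrip` flag and the handler names are made up for illustration, not the actual code:

```javascript
// Hypothetical routing of touches on the cube's inner surfaces: touches
// landing on a dotted "grip" area drive navigation inside the cube, while
// all other touches fall through to the content (scroll and map views).
function routeTouch(surface, touch, handlers) {
  if (surface.isGrip) {
    // Grip surfaces are reserved for rotating / navigating inside the cube.
    handlers.navigate(touch.direction);
    return 'navigation';
  }
  // Everything else stays free for content gestures.
  handlers.content(surface, touch);
  return 'content';
}
```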

State of play

The implementation has reached this navigation of the cube insides, which are still to be decorated with facilities for content entry, like areas for text input and a map for location placement.  A screen recording of a walk through the current state of implementation can be seen in this video:


Update:  The above text was written four weeks ago, one sleepless night, then saved as a draft and forgotten.  Since then I've uploaded a couple more screen recordings, which aren't entirely new either, but show some other aspects of the project's progress:




As the implementation stands now, it is possible to click the cubes as they rain down like in the above video – in an area which we like to call the Corral – bringing them into the foreground and entering their insides, to view what message their creator has written.

This behaviour of the cubes falling down is inspired by raindrops sliding down a window sporadically, which is quite common in Iceland, where the rainfall can often be more horizontal than vertical.  That sporadic behaviour is also chosen for performance reasons, as older mobile devices can't smoothly animate multiple objects at once – at least not as I'm currently implementing it, which might well be reviewed with performance in mind – so now there is an animation dispatcher that chooses one cube at a time to animate.
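The dispatcher idea can be sketched roughly like this; the `animate(onDone)` callback shape is an assumption of mine, not the actual code:

```javascript
// Minimal sketch of a one-at-a-time animation dispatcher: one random cube
// is told to slide, and only when it reports completion is the next one
// picked, so older devices never animate more than one object at once.
function createDispatcher(cubes) {
  let running = false;
  function animateNext() {
    if (cubes.length === 0) { running = false; return; }
    running = true;
    const cube = cubes[Math.floor(Math.random() * cubes.length)];
    // Each cube is assumed to expose animate(onDone), calling onDone
    // when its sporadic raindrop-like slide has finished.
    cube.animate(() => animateNext());
  }
  return { start() { if (!running) animateNext(); } };
}
```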

One aesthetic suggestion our supervisor gave us (in a Skype meeting after the one mentioned above) is that the cubes falling down, as if under the force of gravity, don't seem to make much sense in the context of the star-field background, which suggests weightlessness in the vacuum of space.  I'm thinking about giving that a quick fix by changing to a background image of the Earth seen from the Moon.  It might also give the interesting effect of the cubes appearing to be shot from the Earth, where their creators live.

Currently I’m looking into connecting this to an online database, so users will be able to authenticate and save their cubes, for others to see.  For that I’ll begin by using Firebase and their AngularFire library.

Alright, now I’ll publish this post already and get back to coding so there’ll soon be something to try out on !

Game of Likes

The Internet is full of social web sites, offering media of various kinds and the ability to show our appreciation in some way or another; liking, hearting, starring and so on.  The reason we interact with content in that way may be to show our appreciation to the content provider, bookmark it for further reference or generally serve our compulsive collecting need.  But otherwise we may not do much with all those likes and hearts and stars because we’re constantly exposed to so much new content that we continue responding to, with our stars, likes and hearts.

What if we were given the opportunity to play with the content we’ve liked in the past, interacting with people who like similar things, across our usual social connections, discovering new things we may like and by the way learning more about ourselves?

Game of Likes is an idea that has been brewing for over two years.  The initial idea is of a game that offers the ability to connect the various social (media) networks a player may use – Tumblr, We Heart It, Pinterest, Instagram, Flickr, YouTube, SoundCloud, Hype Machine, Spotify, 8tracks, deviantART, 9gag and so on – and play mini games based on the liked or posted media items from those networks, alone or against opponents that are friends or total strangers. The idea has then evolved into an environment where you gradually specify your interests and are assigned into groups of anonymous players that are regularly split up and reformed with progressively like-minded players, where you share your likes, with your likes(TM), as we like (!) to put it.   As a new player, or when the game has only a few players, you may not find yourself among like-minded people, but as you progress through the different  groups and deep-dive through the network of players, you may find yourself increasingly exposed to content that you find agreeable and along the way make connections with people sharing that content.

Later the idea has further evolved from this concept of assigning players into groups, to simply offering each player the ability to propose a particular game to play, where others can jump into any of the proposed, open and active games that catches their fancy.  The concept of assigned groups is still appealing, so the idea evolution may take a few more rounds or shift towards something entirely different.  I'm going to work on this with my wife, Edda Lára Kaaber, as a Masters Thesis project in the Games track at the IT University of Copenhagen.  We have written a thesis proposal, which can be read at:

Phillip Prager has agreed to be our supervisor for this project, which we are very grateful for.  He has connected well to the Game of Likes concept and has already provided us with a variety of interesting reading material.

In the coming weeks I'll focus on the initial implementation steps towards realising this idea, along with a couple of other projects.  The most basic feature to implement will be the ability to connect to the various social media networks and incorporate media items from them as objects for play in the Game of Likes.  We envision the media items being arranged one on each side of a cube, in three dimensions.  Such cubes we call like-cubes.  Initial steps along that path have already been taken as a final project in a Procedural Content Generation (PCG) course at ITU, where, along with Kasper Fryland Appeldorff, I implemented a prototype that collects media items from a given Tumblr account and presents four of them at a time.  The mechanics of the interaction are supposed to resemble those of slot machines, or so-called bandits.  The user can freeze any of the items and choose to roll the free, non-frozen items, which are then either replaced with a random selection from the items associated with the specified accounts, or with a selection influenced by the tags (if any) of the items that have been held / frozen.  More details can be read in the report we wrote on this project: –PCG-report–likeBreeder.pdf

and the prototype can be tried out at:
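For reference, the bandit-style roll described above could be sketched like this – an illustrative `roll` function with made-up item and tag fields, not the prototype's actual code:

```javascript
// Sketch of the slot-machine roll: frozen items stay in place, free slots
// are refilled, preferring pool items that share a tag with any frozen
// item; if no frozen item carries tags, the whole pool is used at random.
function roll(slots, pool, random = Math.random) {
  const frozenTags = new Set(
    slots.filter(s => s.frozen).flatMap(s => s.item.tags || [])
  );
  const related = pool.filter(it =>
    (it.tags || []).some(t => frozenTags.has(t))
  );
  const source = related.length > 0 ? related : pool;
  return slots.map(s =>
    s.frozen ? s : { ...s, item: source[Math.floor(random() * source.length)] }
  );
}
```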

This process of collecting media items into sets that will be used to decorate the sides of cubes can be viewed as playful in itself.  Beyond that, we have brainstormed various games to play with such cubes.  One of the simplest games, and possibly the most relatable, is a quiz where two participants are presented with three like-cubes, one of which has been created by their opponent, and the aim is to guess which of the three it is.  Points are given for correct guesses and, as the quiz progresses, the participants could gain a better idea of their opponents' likes; they could increasingly become experts in each other's likes.

Another idea is to have the players arrange the cubes into narratives by connecting them with text fragments.  When presented with this idea, some have received it with skepticism, but while shopping in Copenhagen before last Christmas, I came across the game Rory's Story Cubes in the bookstore Bog & idé, which is based on pretty much the same idea; the main difference being that the story cubes offer a predefined set of images, or icons, while our like-cubes will have dynamically generated sets of images drawn from each user's social media accounts.  In the same bookstore I also found a series of games that involve the discovery and expression of the player's personality, which will also be a goal of the Game of Likes, as discussed in more detail in the thesis proposal linked above.  Those findings I think further validate that we are on to something with this Game of Likes concept.

Initially the idea has been to have these games playable online, with the players usually in different locations, interacting over the internet.  After seeing the possibilities of developing console-like games for the Chromecast device, where players are located at the same place at the same time, in front of a television set and using their smartphones as game controllers, I’ve also become interested in exploring the possibilities of using that technology for some playful activities within the Game of Likes.

Flúðir, January 2015.


In an Artificial Intelligence course at ITU – Modern AI for Games – the interactive evolutionary computation (IEC) project Picbreeder, and its predecessors, inspired me to explore the feasibility of utilising the powerful properties of compositional pattern-producing networks, evolved through NeuroEvolution of Augmenting Topologies (CPPN-NEAT), in the formation of unique waveforms to construct oscillators producing interesting timbres. Those oscillators could then be used as novel building blocks in complex synthesizers. Furthermore, I pondered the possibility of using the evolved CPPNs to construct those synthesizers.
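To give a flavour of the idea, here is a toy illustration – not CPPN-NEAT itself, but a fixed, hand-wired composition of activation functions of the kind a CPPN evolves, sampled over one period to yield a single-cycle waveform for an oscillator:

```javascript
// Toy sketch: compose a few activation functions (sine, gaussian-like)
// and sample the composition over one period, producing a waveform that
// could serve as an oscillator's wavetable. In the real system, NEAT
// would evolve the network topology and weights instead of hard-wiring.
function sampleWaveform(length) {
  const wave = new Float32Array(length);
  for (let i = 0; i < length; i++) {
    const x = (i / length) * 2 * Math.PI; // one period
    // A gaussian of a sine, fed back into a sine, already gives a
    // non-trivial timbre compared to a plain sine wave.
    const hidden = Math.exp(-Math.pow(Math.sin(2 * x), 2));
    wave[i] = Math.sin(x + 2 * hidden);
  }
  return wave;
}
```

Evolving the composition interactively, rather than hard-wiring it, is what the Breedesizer prototype described below does.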

To facilitate the beginning of this exploration, I developed a prototype called Breedesizer, for evolving populations of waveforms with the CPPN-NEAT technique, which is accessible at:

A report on the process and its results can be read at:

It proved to be an interesting and enjoyable process to guide the evolution of waveforms with this technique, and it is an exciting prospect to see how the variety of generated waveforms can be used with different sound synthesis techniques, as discussed further in the Discussion section of the report.

Here are two quick screen recordings of the Breedesizer prototype in use:


Flúðir, January 2015

From the lighthouse to the business of orbitals

Mandatory report on a project done in a Game Development course at ITU.


– trailer made by Nils Sørensen


In this report I will reflect upon the process of coming up with an idea for a game to be developed with the Unity game engine, as a programmer in a group of five people attending the Game Development course at the IT University of Copenhagen.  Brainstorming was a large part of the process, and that part alone had several phases where ideas shifted shape many times over and prototypes were created and thrown aside.  A large part of this report will thus be devoted to that process, along with a briefer coverage of the technical implementation, where I had a delightful immersion in a technology new to me, opening doors to new paths to explore.

With darlings set aside rather than killed, as I would like to see them, the group settled on a concept to follow through and the development of a deliverable product proceeded without hiccups, consisting mostly of the linear completion of tasks we commonly agreed upon and prioritized.  The outcome is a fairly polished game offering a rather basic gameplay, though with a few twists, that may appeal to a broad audience of casual players.


The development environment and division of roles

Before the process began, a Game Development course announcement of the requirement to use exclusively the Unity game engine within all groups came as a surprise.  Even though I am aware that Unity is the most commonly used engine by game studios, at least around the Copenhagen area, I had not been in a hurry to become acquainted with that environment.  One reason is that I do not specifically aim for a job within the game industry.  Another reason is that the games, or interactive experiences, that I have ideas and ambitions for might not fit well within the framework of any given game engine, where I would rather like to use my background as a web developer and an enthusiast for the possibilities of the Internet.  When considering game engines I was most interested in the new Goo Engine[1], which is based on JavaScript and web technologies, like WebGL, that are closer to my heart as a web developer.  Another option I fancied was to continue developing a specialized game engine[2] I had begun work on in a previous ITU course and base a new game on it.

Even though this was an initial surprise, I was easily convinced of the benefits this requirement would bring: all team members were most likely to be familiar with the Unity game engine and would thus have a level playing field, gaining experience with one of the most commonly used tools and spoken languages in the field of game development.  In a previous Game Design course I led the team into developing a custom server-side solution and a client app, where no game engine was used whatsoever, utilising my forte as a web developer – where, admittedly, other team members may have had a hard time getting up to the speed I had with the technology[3].  Even though the project from that course may be considered a success, the choice of technology may not have been fair to everyone.

When it came to dividing roles within the group, I was open to working on tasks other than programming.  Sound design interested me for instance, as I'm an amateur iPad composer and a compulsive audio app purchaser.  I was also interested in 3D modeling, as I've previously done hobby projects in that area[4] and was at the time enrolled in a 3D course[5] at ITU.  Taking on those roles would have been a welcome break from programming, even though that is still my main passion, but it turned out to be logical that I took on the role of one of two programmers, as that is where my strongest skills and experience lie.

Still, I had no previous experience with the Unity game engine and it required some effort to come to grips with that tool.  Before doing anything else, I went through all the official Unity tutorials to gain a footing in that environment, and in the process I came to appreciate the possibilities this tool has to offer, in terms of being able to quickly realise visually interactive designs and to reach a multitude of platforms with ease.  In other words, I was easily sold on Unity as a weapon of choice.  Within Unity I chose to use the C# programming language, which I had limited previous experience with, rather than JavaScript (UnityScript), which should have been more familiar to me as a web developer, since I was already in the business of trying out new things.  By now I have started new projects using Unity and can foresee using it in the future.

The student groups were assigned a limited number of Unity Pro licenses, valid for the duration of the semester, and we utilized them, among other things, to communicate with a Unity Asset Server that we set up on a cloud computing instance[6].  It gave the development process a more integrated feel to have source control management within the development environment, though we probably would have done just fine using, for example, a Git repository and an external tool[7].  The three licenses our group was assigned went to the two programmers and our game designer and graphics artist.  A fourth license for our 3D modeler would have been nice, so everyone could have communicated within Unity through the Asset Server, but we solved that in the less fancy, ad hoc manner of delivering 3D models through our private Facebook group.  #firstworldproblems



Initially we discussed what kind of play we wanted to create and for what platforms.  One style we fancied was that of exploratory games, like Dear Esther[8] and Flower[9].  We discussed adhering to dogmas such as having no scoring system and no narrative or textual explanations but rather only visual cues, and we even considered exhibiting permadeath as a game characteristic[10].

Though we are interested in mobile touch devices as platforms for games, we did not want to exclude other platforms either and so kept the choice open until rather late in the process.  Even though we have now settled on the Android and iOS mobile platforms, the game actually works reasonably well also with mouse input in a desktop environment, which we indeed use for development and initial testing iterations.

The lighthouse

The first brainstorming sessions were sparked by a photograph of a lighthouse in Iceland[11] that has an unusual design which can resemble a space pod of some sorts.  We toyed with the idea of being able to fly a lighthouse like this, exploring and discovering the environment, collecting objects that might count towards completing a goal.

Photograph taken by Edda Lára Kaaber of a lighthouse in Vestmannaeyjar, Iceland, serving as inspiration.


One concept had the player surrounded by seven platforms, one of which is populated by a lighthouse that can be entered and flown towards flashing lights on distant shores, going through swarms of obstacles and energy units[12], finding that those distant lights are other lighthouses that can be brought back to one of the home bases.  Each of the distant lighthouses flashed one of the seven colours of the rainbow and when all of them had been collected to their bases, their combined lights would form a white boulder or beam of light that would lead to the next level[13].

Several other concepts were based on that lighthouse photo, involving among other things guiding ships, rising and falling sea levels, and even going through different environments on each level, like underwater, through forests and space.

The elements

Those concepts based on the lighthouse may not have been tangible enough, being too abstract to grasp, and we received encouragement to simplify and focus our ideas.  Subsequently the idea sprang up to base a game on the classical elements of fire, earth, water and air.  That led to the concept of fixing planets ruined by their inhabitants by applying the missing elements to them.  And from that we got the idea of a planet trading business, where the game takes place in the context of a market for the buying and selling of renovated planets, with planets harbouring elements in high demand being more valuable than others.  The player has the role of a planet businessman, taking on contracts to prepare planets according to his clients' demands and collecting revenue from successfully completed contract work, enabling her or him to sign up for grander contracts offering larger payouts.

As much as I had already invested enthusiasm in the concepts based on the lighthouse, it was a rather easy transition to shift focus to a game based on the classical elements, especially given the environmentalist context of ruined planets and the capitalist context of selling planets with elements in high demand.  Still, I was fond of those lighthouse concepts and took the position to explore them further at a later convenience, rather than to kill them on the spot, feeling well equipped by the newfound powers of Unity.

The mini games

Having settled on this concept based on the classical elements, we began to define gameplay within that context.  We had the businessman/repairman and a vision of his home base as some sort of a garage floating in outer space.  That was going to be the game's center and a point of entry into mini games where the player would go on missions to gather elements.

A photo found on the internet that inspired a space garage with a planet floating in the middle.

Initial thought was to have four mini games, one for each of the elements, with different challenges and each even with different game mechanics.  We soon realized, though, that creating four mini games and the central game management in the garage would probably overshoot our capacity during the semester, and halved that possible overscope by discussing the feasibility of two mini games, where one would offer the collection of the more tangible elements of earth and water, and the other would host the less tangible air and fire.

As I had become fascinated by an art installation design of a constellation done for the LAGI renewable energy initiative[14], I volunteered to create a prototype of a mini game where the player could collect both air and fire elements in an environment inspired by that design.  Another team member created a prototype for collecting the earth element, and possibly also water, where boulders were rolled with game mechanics inspired by the games Dragon, Fly![15] and Giant Boulder of Death[16].

Early mini game brainstorming.


Mini game prototype having drops of elements falling and rising.  Would like to explore this concept further as a small independent game, with mechanics inspired by Flower and even Flappy Bird!


The simplifying U-turn

In a meeting where we were presenting the two prototypes and trying them out, we had the rather insistent encouragement from our producer to simplify our concept even further, with the specific suggestion of implementing one mini game with 2D gameplay where spheres of elements would reside in each of the screen’s four corners.

Having seen that meeting as a time to flesh out a list of tasks to carry on with the already begun development, I could not help being visibly irritated; an appearance compounded by the lack of sleep from doing prototype work the night before.  Those feelings carried on into the next day and had me dazed by how emotionally invested I had become in this project.  I felt done with brainstorming and ready to move on with development, having once built up enthusiasm for the lighthouse concept, set that aside, and again built up steam for the classical elements concept, on which implementation work had already begun.  Gameplay in 2D for this project did not initially appeal to me either, as we were working in a mandatory 3D engine and I was thinking quite literally in the dimensions it offered.

Even though I was still rather annoyed on the second day by those repeated changes to our concepts – having personally transitioned twice from a brainstorming mode to a focus on implementation, building up steam only to extinguish it and go back to the drawing board – I could also see the sensibility of those suggestions and feel they were well received and easily grasped by other team members.  So I wrote a comment to our team's Facebook group, stating how I could understand the new angle as a feasible option, and other team members saw it the same way.  At a later meeting our producer mentioned his concern of having possibly intervened too much in our process, which indeed at the time I felt to be the case, but regardless we went along with the suggestions and steered the development towards that new angle.

Comment in our private Facebook group, in response to a document a team member created to list the pros and cons of sticking to our prototyped concepts or moving on to the newly proposed angle, where I admit the pros of moving in a new direction.

Settled on a new course of action, it was time again to implement a prototype based on the new concept of a single 2D mini game.  Here our main concern was how to visually represent the reception of elements on the subject planet.  To start with something simple to implement, we decided to split the planet's sphere into four segments, or wedges, each receptive to one element.

Prototype for a new mini game, with touch interaction implemented and the decay of elements as they move.  Textures are blended together with a custom shader import found on a wiki page[17].

That representation indeed resembled slices of a pizza, and having sharply defined quarters of the sphere, each only capable of receiving one element, was regarded as making little conceptual sense.  So we discussed a representation in the form of nested rings, one for each element.  How to implement those rings was not clear to me; I thought initially about stacked discs, like in the Towers of Hanoi puzzle[18], and later became fond of the form and mechanics of the gyroscope[19], which the game uses in its current form, with nested rings rotating randomly.


Toy with a stack of disks[20] and gyroscope rings[21] serving as inspiration.




Trying out different forms of nested rings for receiving elements, that evolved from a stack of disks to a structure resembling a gyroscope.



Two steps in the evolution of the elements mini game, locally called eleMix from its Unity scene name.  The rotating element rings can be seen in the middle, to be gradually covered by a planet texture as the play progresses.


Development tasks and their division

Programming effort was in large part divided between the garage and the single mini game of delivering elements to the subject planet.  I devoted most of my time to the latter, focusing on the touch interaction with elements, their reception on the center planet, and managing the progression of play over time and the approach of goals (rendered with progress bars).  Highlights of that process include the gyroscopic rings of element reception; a module for spawning asteroids as obstacles, fully configurable in the Unity editor, so that a designer can configure the obstacles for each level without any programming; handling multiple simultaneous touches on the elements, offering cooperative play; and making victory and failure exit animations with Mecanim, a non-programming task I jumped on to try out some of the movement through 3D space I so much desired in the game.

Unity editor settings for a custom asteroid spawning module.

Later on I joined work on the garage, implementing a HUD in the form of a clipboard, using Unity’s Mecanim system.  I also added the ability to look around the environment by touch and to buy elements from the shop before going on missions, among other tasks like finding planet textures[22].

The modeling work, graphics and code from other team members flowed smoothly into the project via the Asset Server.  During development it was good to have as a reference a game design document created by one team member, where among other things aspects of the economy were outlined and interactions of elements were defined – interactions that are in fact yet to be implemented in the game.


Garage views at different stages of polish, with a foreground Heads Up Display (HUD) in the form of a clipboard, where contract information, inventory balance and other messages are displayed.


Name quest

One of the seemingly most difficult tasks of development has been choosing a name for the game.  On a cycling tour around Refshaleøen I came across a barrack marked with the sign “Copenhagen Suborbitals”, which got me thinking about the word “orbital” and subsequently suggesting the name “Out of Orbit”.  That name is actually used by a small game currently in the Google Play store, but it has stuck as a working title, while other ideas include Elementix, Elexia, Elementum, Elemia, Elemental Architect, Elemixir, Elemade, Elemaden, Eletrade, landMiller, landMaker, landMakia, planetShop, orbitalis, orbitalia, or even just suborbitals.

A barrack on Refshaleøen responsible for the game’s working title.



Beforehand we were warned that difficult moments could arise in the process, but I was not really worried about that, as I consider myself pretty much open to all types of games and not tied to any specific genre of play – in the first place I’m not much of a game player, having reached the hardest core of game playing in the early 90’s when I finished the permadeath game Prince of Persia and all episodes of Commander Keen – and so I felt open to follow along wherever the process would take us.  Regardless, ideas sprang up early in the process that I quickly clung to and built up enthusiasm for, only to have to suppress that enthusiasm when other ideas surfaced that were possibly more realistic and generally acceptable.  Those changes proved indeed to be sources of surprisingly difficult moments for me, though fortunately they did not last long.

Overall I feel that the group worked very well together and no conflicts arose among the members.  Only I had to make an effort to keep in line with the level of diplomacy and air of serenity within the group, having invested personal enthusiasm in what I must admit were darlings of the moment.  Along with the immersion in Unity’s technical intricacies, it was a great learning experience to go through those development phases, even though I have been around the block once or twice in a previous career as a software developer.  It is especially interesting to have taken part in the evolution of ideas through all their phases, and to compare that process to how other known works have changed dramatically from their initial concept to their published form[23].

Do I think we could have had more interesting gameplay with the multiple mini games previously prototyped?  Yes.  Do I think we have a more manageable project and a unified experience with the focus on the current mini game?  Also yes.  And the experience the game offers in its current state is enjoyable enough to have me and other team members interested in continuing work on the game, to make it presentable for Google Play and the Apple App Store.


IT University of Copenhagen
Game Development course – instructors:
Henrike Lode and Mark J. Nelson
spring 2014
Björn Þór Jónsson (



[3] Phosom is a game, or a toy, based on image similarity: 

[4] 3D graphics done in 3D Studio for DOS back in the 90’s: 

[5] Work in progress in the Introductory 3D course at ITU: – 

[6] We used a nano compute instance at Greenqloud to host the Unity Asset Server, where I currently host other projects and web sites.  The Asset Server is doubly backed up, once to another disk partition and then to CrashPlan Central, and I purchased more storage for the compute instance as our project has reached a size of ~700MB. 

[7] Atlassian SourceTree would have been an option as a Git source control client: 

[8] Dear Esther, game: 

[9] Flower, game:

[10] Permanent death in video games: 

[11] Inspirational photo of a lighthouse in Vestmannaeyjar, Iceland: 

[12] I had become fascinated by the idea of swarms of obstacles in a game after seeing this video of flocking birds:  And actually found implementation examples for Unity of the Boids flocking algorithm; see footnotes in the document Towards the Lighthouse linked below.

[13] Towards the Lighthouse, document describing an early game concept: 

[14] A Greenfield and a Constellation art installation: 

[22] Extraterrestrial planetary surface textures used in our game: 

[23] 12 Beloved Movies That Were Originally About Something Very Different: 

Light Source

Project done in the Introductory 3D course at ITU in spring 2014

Re-render of the original animation handed in as an exam project in the Introductory 3D course at ITU. This rendition is in HD1080, where the original render was in HD720. An attempt was also made to increase quality by changing the Final Gathering Accuracy from 100 to 600 and the Point Interpolation from 10 to 50. This resulted in dramatically increased rendering time, especially inside the tube, where each frame took one hour to render; caching of Final Gather points could probably have been employed with a Final Gather Map. Total waiting time was lessened by using 10 lab computers at ITU; licensing wouldn't allow using more computers simultaneously.


Original render of the animation handed in as an exam project in the Introductory 3D course at ITU. There isn't much difference to be seen when compared with the re-render; it's most perceivable in the light cast from the globe onto its surrounding floor and ceiling, with the MIA Light Surface Shader. In this version that light is quite coarse, and a bit less so in the re-render, mostly due to a higher Final Gathering Point Interpolation value, but even there it's not as smooth as I would have it.


The concept behind the project discussed in this report has taken many shapes, as ideas have brewed during the course of this spring semester.  Being a Games Technology student, it could have been a good fit to focus on creating some sort of game assets, but I was delighted to learn about the freedom offered in the Introductory 3D course.  If the outcome of the project leaves the viewer wondering about what just happened, then it is a success.

The first idea was inspired by a game concept that I had just started working on, but my initial aim was only to create a visual experience, rather than a real asset usable in a game.  The concept then changed as new inspiration struck, but the aim was still to create visuals only.  Later in the brainstorming process, that new concept led to ideas of a possible gameplay, so the intended pure visual was inspiring the creation of an interactive experience and the process had come full circle: starting with an idea based on game development, aiming for an art piece, and evolving it into something that inspires further game development.

Concept evolution through constant inspiration

The evolution of ideas for this project, from a focus on shapes to the components of light, was fueled by objects encountered in everyday life.


These days I’m constantly exposed to children’s toys, and one toy in particular, offering the matching of differently shaped objects to correspondingly shaped holes, has provoked thoughts about basic forms and how they can be the basis of more complex shapes.  Everything in the universe has evolved into its current shape from simpler building blocks, which themselves are composed of smaller, more primitive elements.  An individual’s abilities likewise develop from simple tasks to more accomplished movements and thoughts.


This evolution and constant change inspired an idea for a visualisation where we would follow a primitive shape, travelling from the core of a sphere through a pipe to the surface, and during the travel the shape would morph into a more complex shape resembling the outlines of a country, which we would then see as a patch on the sphere’s surface.


Alongside this visualisation idea I had taken small steps towards creating a game where the player would have the task of matching increasingly complex shapes to their corresponding slots, where the most complex shapes would resemble the outlines of continents.  That concept was placed outside a sphere rather than inside it, as if floating in orbit around the Earth.  When the continental shapes were matched, they could be seen falling onto the sphere’s surface, forming a collection of the Earth’s continents.

Prototype in Unity for a game called Fall into Shape, in a setting inspired by the stage separation of the Saturn V rocket


Light play


All the ideas connected to this project were based on shapes and their morphing into others, until I noticed a globe model when walking past an office window at ITU one evening.  I became fascinated by this kind of object, which I had in my bedroom as a child, and started thinking about the source of that globe’s light.  In the case of the model, it is obviously a regular light bulb on the inside that lights up its surface.  A few years ago I saw glowing lava light up the night sky in Iceland, during the notorious “Eyjafjallajökull” eruptions, and there the light source was molten rock, lava, flowing from the Earth’s core.  Here was born the possibly surrealistic concept of light traveling from the core of the Earth, through lava tubes[1] to the surface.

This concept of light inside and outside the Earth led to ideas for a possible gameplay, with a connection between the seven components of light that can be seen in the rainbow and the seven continents of the Earth.  Each continent would have seven openings / portals into tubes leading to the Earth’s core, each having one of the seven rainbow colours.  The player would pick a portal, which would lead her through a tube to the core, where light particles could be seen floating around[2], each emitting light in one colour of the rainbow.  There the player would have the task of picking a light particle with the same colour as the portal that was entered, and then picking a tube opening leading back to a portal of that same colour.  The indication of which tube to pick would be its shape, which could be learned by traveling through one tube of that shape to the core.  If the player picked a correct type of tube, the light particle would travel through it and shine at the portal on the surface, and the player could proceed to another portal; if not, the light particle would travel to the surface and back to the core, leaving the player to have another go at completing a portal.
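
As a sketch, the success condition for one attempt could be expressed like this (the colour names, shape identifiers and function names are all hypothetical, just to make the rule concrete):

```python
# Hypothetical sketch of the matching rule described above; none of these
# names come from an actual implementation.
RAINBOW = ["red", "orange", "yellow", "green", "blue", "indigo", "violet"]

# Each tube shape is assumed to correspond to exactly one portal colour.
TUBE_SHAPE_TO_COLOUR = {"shape_%d" % i: c for i, c in enumerate(RAINBOW)}

def portal_completed(portal_colour, particle_colour, tube_shape):
    """A portal is completed when the picked light particle matches the
    entered portal's colour and the chosen tube leads back to that colour."""
    return (particle_colour == portal_colour
            and TUBE_SHAPE_TO_COLOUR.get(tube_shape) == portal_colour)
```

Both conditions must hold, which is what makes learning the shape-to-colour correspondence the core of the puzzle.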

During a presentation of these concepts for students in the Introductory 3D course, the connection between the fitting of shapes and matching of lights with colours was pointed out to me; both are puzzles that require the player to find the correct fit, and I was glad to hear that others would see some coherence in these concepts, where I could have as well expected them to be perceived as a confusing soup of ideas.

As an extension to this coloured light chase, there could be set up an array of globe models, each textured with paleogeographic maps[3] showing how the Earth may have appeared at various increments of time.  In this setting, a game could revolve around fitting a light particle to each portal on one globe before proceeding to the next level, in the form of another globe with a map showing the continents’ layout in another time slice; in the process the player would gain some idea of the Earth’s tectonic evolution.  Though the game would not be branded as educational, it might have some educational value as a by-product.



Realising the concept discussed above involved modeling, texturing, lighting and animation, detailed below.



The first task in the modeling process was to create the framing for the globe model.  To solve that task I was given the advice to create a circular arc, with Maya’s Arc Tools and extrude a box along it.  The base and stand were created from cylinder primitives, shaped with soft selections on their vertices.


It was fairly simple to model the globe itself from a sphere primitive, but the twist was that its surface had to be visible both on the inside and the outside.  To achieve that, all faces of the sphere were extruded inwards and the normals of that inner layer were reversed, to face towards the globe’s core.  The sphere was created with an extremely high polygon count (198394 faces for both layers) solely to have flexibility in where to cut holes for the tubes that were to pass through it.


Modeling the tubes started with creating CV Curves to define the path along which to extrude their profile.  Four CV Curves were created by adding their control vertices in spiral shapes, in one of Maya’s orthographic views (top).  In a perpendicular view, soft selections on the CVs (with falloff mode set to surface) were used to shape the curve path in 3D.


Two options were considered to create the tube models along the curve paths; polygon extrude and surface extrude[4].  The latter was chosen as UVs are already created with that method, so texturing requires no further effort.  Two concentric circle paths were created with the Arc Tools to define the inner and outer layers of the tube profiles.  Each of them were surface-extruded along the curve paths and the resulting meshes combined by bridging their edges at the ends.  Normals of the inner tube layer were reversed, to face inside the tube, as was done for the sphere.






All textures of the scene lie on the two sides of the main globe and on the inside of another, encompassing globe.  The faces of the main globe’s inner layer were selected and a texture applied to them, different from the one on the outside layer.  A color map and a bump map[5] for the Earth were used on the inside, and the texture happens to be flipped there; that could have been easily fixed by flipping the texture in an image manipulation program, but it was considered a good surrealistic effect.  The outside faces of the globe were textured with a bathymetry map of the Earth, for its nice visual effect of black continents and light blue sea[6].  A cloud map composed of satellite images of the Earth was applied to a sphere encompassing the whole scene, with normals facing inwards.



Different kinds of textures were considered for the tubes, ranging from a custom gradient resembling the layers of the inner Earth or the inside of a hala fruit[7], to a bump map imitating the ribs of plastic tubing, or even a tree bark texture that could give the illusion of enormous trees growing from the Earth’s surface.  Brushed steel and chrome Mental Ray material presets were tried.  Even the gray default material was considered, as it was visually pleasing in this context.  But the final decision was to leave the tubes untextured – even though textures were easy to apply due to the surface extrusion – and instead tint them with the seven colours of the rainbow, using a Mental Ray material (mia_material_x) with a rubber preset.  Though this decision would not make sense in the context of a game where tubes would have to be selected by colour, making it too easy, it makes sense for this project, which is focused on the visual experience – and those coloured tubes look delectable.




The lighting for the inside of the main sphere consists of one point light located at the center (core) and another point light attached as a child of the camera (in the Outliner), so it lights up the inside of the tube as the camera travels through it.  As the project uses the Final Gathering technique offered by Mental Ray, a considerable glow saturated the whole scene when rendering inside the globe, and that was found to be due to light bouncing off the inside faces of the globe.  This glow was mostly removed by disabling the Final Gather Cast Mental Ray option for all the faces of the inner globe layer[8].



For light emitting from the portals on the Earth’s surface, spotlights were created with a Light Fog effect, to simulate the beams coming from a disco light.  To ease the creation of the required 29 spotlights, duplicates were made of an initial spotlight once it had been created and adjusted, and each duplicate was rotated to its position with the pivot located at the world / Earth’s center.  Only when all spotlight duplicates had been positioned was it discovered, by rendering a one-frame sample, that it is not possible to duplicate spotlights with the fog effect and get good results; all spotlights rendered with a yellowish tint, even though they had been configured with the seven rainbow colours, and further research on the web confirmed that this was to be expected.  So the only way to go was to manually create each spotlight from scratch, configure it with the right colour and fog effect, and transform it to an appropriate position.  This was one of the most time consuming tasks of the project.



The most interesting lighting technique used in the project is that of Object Based Lighting with the MIA Light Surface Shader, where the material applied to the outside layer of the globe emits light according to the bathymetry texture used with the material – a feature offered by the Final Gathering technique – simulating the glow of a typical globe model with a light bulb inside.  The result is a slightly grainy glow on the surfaces around the globe, probably due to the low resolution of the texture used; a less grainy result was obtained by increasing the Accuracy value for Final Gathering in the Render Settings (from 100 to 600 in one case).  Higher accuracy values resulted in much longer rendering times, and so the default of 100 was used for the animation batch render.




The animation was to take the viewer from the inside of the globe, through one of the tubes, to a view of the globe’s exterior.  One of the first decisions was to animate the camera along the same curve as was used to create the tube it would travel through.  Manual keyframe animation was initially used to move the camera in the first part, before entering the tube.  Then there was the question of how to transition smoothly from the keyframe animation to the guided animation along the tube path.  One option was to have two cameras and switch between them at the transition point, but I preferred to have one path for the whole journey and tweak it visually to my requirements.  To that end, two other curves were created for the animation segments before and after the tube travel.  The Bezier Curve Tool was used to create them, as it is more familiar to work with those kinds of curves.


When the curves had been attached, the camera movement along the newly created parts was shaky.  Applying smoothing to the vertices where movement was rough made the situation a little better, but when vertices were moved they got sharp corners again, with shakiness reappearing in the animation.  Part of the problem may have been that the tube curve was initially a CV Curve while the new curves were Beziers.  Converting the combined curve to a NURBS curve and choosing to smooth the whole curve didn’t completely solve the problem; movement was still hard to control around the points of attachment and sharp corners prevailed.  It wasn’t until I found the Rebuild Curve option in the Edit Curves menu that the problem was solved, with a resulting smooth curve that was easier to control.


Timing along the motion path curve was controlled with position markers.  Orientation of the camera along the path was controlled with orientation markers on the curve with keyed values of Up Twist, Side Twist and Front Twist.  The possibility of blending keyframe and motion path animation in Maya was considered[9], but controlling those twist values was sufficient.

Camera movement is wobbly during the first seconds of the animation along the rebuilt curve, but instead of ironing that out, which could now have been easily done, it was kept for its nice effect of a chaotic introduction to this surrealistic world.



Several music pieces were considered for the animation soundtrack, including Wave Dub by Dope on Plastic[10], Halcyon (On and On) by Orbital[11], Down Down by Nils Frahm[12], Justice One by Drokk[13], and S.A.D. by Mind in Motion[14].  The last minute of S.A.D. was finally chosen, as the change in mood midway through that part is a perfect fit for the transition from the inside of the globe to its exterior, and the rave music of the first part goes somehow well with the colorful, psychedelic pipes.  Importing the audio file into Maya helped synchronize the animation with the soundtrack.


Only a little over two days before handing in this project, rendering commenced.  As I have a five year old desktop computer at home, running Ubuntu Linux, and discovered that Maya is offered for the Linux operating system, I decided to try to use it for the batch rendering of the animation.  As only a 64 bit version of Maya for Linux is offered, and that home computer was running a 32 bit version of the operating system, I decided it was time to re-install the machine with a 64 bit version of Ubuntu.  Having done that, I had to jump through a few hoops to be able to run Maya in that environment[15].

After initiating the batch render process, it became apparent after the first few frames rendered that this one machine wouldn’t finish rendering the animation in time at a HD 720 resolution.  So it was clear that I would have to utilise more machines for the task, and the lab computers at ITU were certainly an option.  After rendering 500 frames in around 18 hours, Maya became corrupt on the Ubuntu machine for some unknown reason (logging into another account, resulting in a login prompt freeze and subsequent reboot of the computer, was the start of the trouble).

So the ITU lab computers were now the only option to finish the rendering.  A few hours later I had manually created a rendering cluster by starting batch rendering processes on seven computers at ITU, each set to render a range of 200 frames.  The morning after I saw the lab machines had successfully finished the rendering and I was able to fetch the files from my file space at ITU via the Internet (SSH).
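
Dividing the work among the lab machines is just a matter of partitioning the frame range into contiguous chunks; a minimal sketch of that split (the function name is my own, and the numbers below follow the 7 × 200-frame split mentioned above):

```python
def frame_ranges(first, last, machines):
    """Split frames [first, last] into contiguous ranges, one per machine."""
    total = last - first + 1
    per_machine = -(-total // machines)  # ceiling division
    ranges = []
    start = first
    while start <= last:
        end = min(start + per_machine - 1, last)
        ranges.append((start, end))
        start = end + 1
    return ranges
```

For example, splitting 1400 frames across 7 machines yields ranges of 200 frames each, which is how the batch render jobs were started by hand.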

To assemble the rendered sequence frames, in the TIFF image format, into a movie file with the audio track, I used the avconv command line tool (an ffmpeg fork)[16].  To add opening and ending titles, I imported the assembled file into iMovie, from which the final result was exported.


It’s been delightful to be introduced to the many facets of 3D rendering and get to know some of the many parts of Maya.  Back in 1996 I did some 3D graphics in 3D Studio for DOS[17] and have fond memories from that period, so it is especially interesting to become acquainted with a mature, modern tool like Maya.

Having a basic knowledge of 3D modeling, texturing, lighting, animation and even character rigging, will be a valuable tool for realising ideas for games or other interactive experiences.  This project turned out to be a visual experience with the possibility of becoming a basis for something interactive:  Being able to take ideas and concepts in whatever direction feasible gives a great sense of freedom.




Björn Þór Jónsson



[2] One idea is to represent the light particles by small lighthouses, modeled after the lighthouse in this picture, taken by my wife: 

[3] Paleogeographic maps: 

[4] The extrusion of tube profiles along curves was based on this article:  Creating Rope And Tubing In Maya 

[5] Color and bump maps, and clouds were obtained from this tutorial: 

[6] Bathymetry is the underwater equivalent of land topography: 


[8] This Digital-Tutors lesson helped learning how to define what objects cast ray in Final Gathering: 

This screenshot shows where the option was checked off: 

[11] Orbital – Halcyon On and On:

[15] To be able to install the latest Maya 2014, service pack 4 version, I modified an install script, as can be seen here: 

[16] Command used to assemble the video file: 

[17] 3D graphics done in 1996 with 3D Studio for DOS: 


Project report done in a Game Design class at ITU.
Other versions of the report can be downloaded from the
Google Drive source document.


Phosom is a game, or a toy, based on image similarity, where players receive challenges in the form of a photograph, to which they respond by either taking a picture or finding a picture on the web, and receive a score based on how visually similar the images are.  Creativity and visual memory are good skills to possess when coming up with an imitation of the given original, which may not have to represent the same motive as the original but rather be visually similar overall.  This kind of play offers the opportunity to perform the popular activities of creating or finding photographs, with a defined goal of similarity and rewards given according to performance within the frame defined by that goal.  It also leads to thoughts about the originality of visual creations: is the original image the player is to imitate really original, or is it itself an imitation of something else, and can the imitation created by the player be considered an original for imitation in some other context?
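
One simple way such a similarity score could be computed – purely an illustrative sketch of my own, not the method actually used in Phosom – is to compare normalised greyscale histograms of the two images:

```python
# Illustrative sketch only: scores how alike two images are by comparing
# their brightness distributions, ignoring where in the frame things appear.
def histogram(pixels, bins=8):
    """Normalised greyscale histogram of pixel values in [0, 255]."""
    counts = [0] * bins
    for p in pixels:
        counts[min(p * bins // 256, bins - 1)] += 1
    total = len(pixels)
    return [c / total for c in counts]

def similarity(pixels_a, pixels_b, bins=8):
    """Histogram intersection in [0, 1]; 1 means identical distributions."""
    ha, hb = histogram(pixels_a, bins), histogram(pixels_b, bins)
    return sum(min(a, b) for a, b in zip(ha, hb))
```

A histogram comparison rewards overall visual resemblance rather than identical motives, which fits the idea that an imitation need not show the same subject as the original.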



Today people commonly carry a camera in their pocket at all times, embedded in their smartphone.  Photography in general is a very popular hobby.  Playing digital games on mobile devices is a fast growing form of entertainment that is often interwoven with everyday life, where people pick up a casual game during short and maybe random moments in the course of the day.  Combining those elements – readily available cameras, interest in photography and casual gaming – in a software toy-game is the goal of Phosom.

The game allows players to connect with others, people they know or random (anonymous) players, to challenge them with photographs and receive challenges back.  Responding to those challenges invites much creativity, as you search your everyday environment for motives that may give a good score when compared with the given challenge.  Play with this toy-game can then be “…viewed as…[a] potentially artistic [enterprise] capable of stimulating and releasing the creative components of the participant … that gives satisfaction to [her] creative imagination, nurtures the emotions, excites the soul, and satisfies the senses.”[1]  Phosom may thus give social value and develop personal skills.  It also encourages people to explore their environment in a new and exciting manner, where they may learn more about their surroundings in the process.


Design iterations

The design of Phosom, as a toy to play with or even as a game, has gone through a few iterations, and here is an account of what I consider to be the highlights of that process.

Initial idea

While taking a nap with my nine-month-old son last September (2013), the unrequested idea sprang up to create a mobile application that would allow a group of people, all present at the same (possibly large) location and each holding a mobile device running the app, to create an Internet-connected game where the group would be divided into two teams, with each member assigned an opponent from the other team.  Everyone would be given the task of photographing a motive of their own choosing.  Having taken a picture, a team member sends it to his or her opponent, and then waits for a picture to be delivered in the same manner.  Having received a picture, a team member has to create a photograph with his mobile device camera that resembles the received picture as closely as possible, either by finding the motive the opponent photographed or by taking a visually similar picture in some creative way.  Each team member’s effort towards image similarity is graded, and the total grade earned within each team determines whether it won the game.

Although I was not actively seeking ideas while taking that nap, I did know that a game would have to be implemented as a final project in the Game Design course.  During the first days of the semester, we students were guided into group games like Ninja[2], to mingle, break the ice and start us thinking about games.  That probably influenced the game setting described in the previous paragraph.  The required Internet connectivity of the game is also probably influenced by my fondness for the Internet as a technology and how it enables communication.

What about playful communication with photographs?  The act of comparing images within a game is an obvious result of the frequent use of image search engines and an app like Google Goggles[3].  Indeed there already exist mobile photo toys like Instagram and Snapchat, but competing at finding the most similar motive to the one given may be considered as something novel.

Taking that nap probably tuned the brain into an open mode[4] where it could be awash with ideas[5] sourced from those influences and inspirations.



 Initial user interface sketches. Name ideas other than Phosom include PhotoKO and Phosimi.


Group brainstorming

All team members were enthusiastic about the basic idea of creating play involving photography, visual memory and interpretation.  Everyone also saw from their own angle what could be done with that core game mechanic[6] of playing with image comparison, and so in our discussions about what kinds of gameplay could be conceived on that foundation, several different ideas were collected into a Google Drive document and onto our Trello board, which was used as an electronic Scrum wall to organise the tasks to do.


The most discussed gameplay scenarios were one-on-one challenges and turn-based group challenges, where players can either get automatically assigned photos, or create their own challenges by taking a picture or uploading one.  Other possible types of gameplay we discussed include a memory drawing game, where a drawing is shown for a limited time and the player is to draw it from memory and take a picture of it; an art quiz where the player is shown a well-known image and has to track it down on the net to submit as a response; and a tourist guide in the form of photos from sites of interest, which players find and photograph to have the next location revealed (which could be offered as a white label product[7]).  We also discussed offering different categories to play within, where challenge pictures could come from the chosen category, and bribery was even considered, where players could in some way bribe the system to get a better result, which led to some discussion about ethics.

One of the more fascinating elements within the gameplay scenarios we discussed is the possibility of wordless communication between geographically distant players, as they go about their everyday lives and may at random moments spot interesting visual motifs to respond to a challenge with, or to create a new one[8].  That element of remote challenges is not present in the initial idea, which is basically about a toy (or a tool enabling play) for a group of people present at the same place at the same moment.  So here the idea was already evolving into something different.


We used a closed Google+ group for team communication.



From that collection of ideas, we settled on a bare minimum of features to implement initially, to get a first hands-on feel for the act of coming up with an image that is supposed to be the most similar to another image given as a challenge.  This minimum consisted of enabling the player to ask for a challenge, which would be delivered as a random photograph from the Flickr image hosting service; the player would then perform a web search, within the game, to find the most similar picture.

Bare minimum of features to implement initially, marked in red on a flow diagram.


As the initial idea is about taking pictures to create something similar to a given image, the ability to search the web for images instead was only considered a quick means to have something working, to be thrown away at later stages of development when a camera would be accessible from the game running on a mobile device.  Much to our surprise, casual playtesting with people we met in passing indicated that the option to search the web for images similar to the one given was regarded as a pleasing game mechanic in itself, and that the game could be based on that alone in a desktop environment, where a mobile camera is not an option[9].  More formal playtesting later on confirmed this, where players liked the option to either search for images on the web or to find a motif from their environment with the camera.


Results shown in the first prototype, with an image from a web search as a response to a random challenge photo from Flickr.


Even though this initial prototype was well received, with its limited offering of automatic challenges and web searches, we were still eager to try playing with the mechanic of taking pictures with a camera when given a photo to imitate.  When that ability was available in a further iterated prototype, it mostly added a new dimension to the existing gameplay, rather than changing it completely.  Indeed, it was more interesting to explore the environment for a similar motif and to ask someone to pose in a way resembling the picture being imitated.  This allowed for more social interaction within the present environment than would have been had by doing endless web searches, and that local interaction could be a good addition to the remote interaction discussed previously.



Trying out the camera to respond to challenges in Phosom.


Technical limitations masked by narrative

What became apparent in the first prototype with web searches, and in later versions offering photography interaction, is the inferiority of the applied method of image comparison and the perceived uncertainty of how it works.  Players were confused about how their results were being evaluated, and as a result they were not sure what they were looking for[10].  At least this was often the case the first few times a player took a challenge, but then he or she got a better feel for what worked and what did not.

Was this something to worry about or was this a part of learning how the game works?  This is a question we discussed quite a lot.  It was a concern regarding the playability of the game when players found the results they were given to be unfair.  A player might find the same object as in the given challenge photograph, but still get an aggravatingly low score because the overall tone and brightness of the image she produced was different.  In those cases she would be likely to throw the game away immediately and never play it again.



Two of the worst examples of unfair scores, where scores of 589 and 632 out of 1000 are given.


Before we commenced with formal playtests we decided to use those technical limitations to our advantage and conceived a narrative that introduced a fictional character whose opinions would represent the evaluation of similarity.  Any possible peculiarity of the underlying image comparison method could be attributed to this character’s quirkiness.  We were quite happy with this solution, but still, playtesters who had in some cases not taken the time to familiarize themselves with our fictional character and her role in the mechanics of the game, were confused nonetheless.  From playtests we learned that the participants would have appreciated some kind of an introduction on how the images were being evaluated, so they would have a better idea of what they were about to do exactly.


We called the fictional character, who evaluates image similarity within the game, Phosie.  Graphics by Anders Wiedemann.


Player passion

Apart from getting the highest score when comparing your image with another, what is your goal while playing with this toy-game?  This question led to a discussion about the possible metagaming[11] players could be involved in while interacting with the basic play offered by Phosom.  Given the positive feedback we received from the prototypes, where playtesters said that they would like to play this kind of a game, there seemed to be little doubt regarding the potential of the idea for a playable game.  But the question remained of how engaging the game could be: what would keep players coming back to it?

With that in mind, we thought about possible in-game values that players would compete to win.  Typical representations of such values are coins or points, but I was most fond of using photo prints, which players would collect in order to be able to take pictures.  These prints I liked to call pholoroids, which players could win by coming up with an image more similar to a challenge than their competitors could produce.  To take a picture, a player would have to possess a pholoroid, and each day he would be given a handful of them for a good start.  That could lead him to win a whole pile of pholoroids, which in turn could enable him to send a picture as a challenge to a group of other players, where, in a sense, he would be putting them all at stake: possibly winning pholoroids from all the players in that group after all rounds had been taken, or possibly losing them all to another player within that group who performed better – a high risk, high reward scenario.
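The pholoroid economy described above can be sketched in a few lines.  This is only an illustration of the idea, not code from the project; the function names, the daily allowance and the winner-takes-the-pot rule are all assumptions drawn from the paragraph above.

```javascript
// Hedged sketch of the "pholoroid" in-game currency (all names hypothetical).
// Each player stakes one pholoroid to respond to a group challenge; the
// response with the highest similarity score wins the whole pot.

const DAILY_ALLOWANCE = 5; // assumed size of the daily "handful"

function grantDailyAllowance(wallets) {
  // Every player receives a handful of pholoroids each day for a good start.
  for (const player of Object.keys(wallets)) {
    wallets[player] += DAILY_ALLOWANCE;
  }
  return wallets;
}

function resolveChallenge(wallets, responses) {
  // responses: [{ player, score }]; each response costs one pholoroid.
  const pot = responses.length;
  for (const { player } of responses) {
    wallets[player] -= 1; // the stake put at risk
  }
  // Highest similarity score takes the whole pot: high risk, high reward.
  const winner = responses.reduce((a, b) => (b.score > a.score ? b : a));
  wallets[winner.player] += pot;
  return { wallets, winner: winner.player };
}
```

The interesting design lever here is the stake: because responding costs a pholoroid, every imitation attempt is itself a small wager.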


Further development

After reflecting on the previously discussed design process and the collection of ideas, I have come to believe that all this is overly complex, adding layers of narrative and virtual game values, while the values gained directly from the core mechanic could be interesting enough, and how the game works could be self-explanatory without words.  It could be more interesting to steer the development towards a minimalist game design, where “…self-imposed, deliberate constraints on both the design process and game are an important component to exploring new types of game and play” and “…these point to choosing a few powerful, evocative elements, and then exploring the design space they constitute” where “the goal is not just to strip away the unnecessary parts but to highlight and perfect the necessary elements”[12].




~ vs ~

Flow sketch of two possible ways to start the game, by first choosing what or whom to play with, then followed by a rather complex set of options. This can be simplified to the single option of taking a photograph that imitates another, but still there could be a rich set of goals to explore in that minimalist game design that would allow for a deep gameplay.


Collection of core game values as a metagame

Instead of virtual in-game values in the form of pholoroids, players could see for how many photos they hold the most similar imitation, and they could decide whether this count of similarities is something they care about and whether they want to compare it with what other players of the game have gained.  All images put into the game would be open to imitation by any player.

At one point in time you could have the most similar imitation of one photo, and thus in some sense own it, but then later on, someone else could do a better imitation of that photo and in the same sense win it from you.  Notifications could be delivered about those events, that might ignite competitive fires in a player who may decide to try and do a better imitation of that photo, to win it back, or he may decide to look at what other photos that competing player “owns” and try to win some of them from him.  And so on, back and forth.
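The ownership rule sketched in the preceding paragraph is simple enough to capture directly.  This is an illustrative sketch only, under the assumption that the highest-scoring imitation "owns" a photo; the function and field names are hypothetical.

```javascript
// Sketch of the "best imitation owns the photo" rule (names hypothetical).
// A photo keeps its imitations; the current owner is whoever holds the
// highest-scoring one, and a better imitation wins the photo over.

function currentOwner(photo) {
  if (photo.imitations.length === 0) return null;
  return photo.imitations.reduce((a, b) => (b.score > a.score ? b : a)).player;
}

function submitImitation(photo, player, score) {
  const previousOwner = currentOwner(photo);
  photo.imitations.push({ player, score });
  const newOwner = currentOwner(photo);
  // An ownership change is exactly the event a notification could be sent
  // about, to ignite the competitive back-and-forth described above.
  return { ownerChanged: newOwner !== previousOwner, newOwner };
}
```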

As a player, you could care about collecting increasing numbers of best imitations, comparing your gain with that of all other players within the game, rising and falling through the ranks, or you could care more about narrowing the view of whom to compare with, seeing how your performance stacks up against that of in-game friends, who could be defined manually, by adding them in a “traditional” manner, or they could manifest organically as they decide to compete against you and vice versa.  Spontaneous social connections could be forming as everyone can compete against anyone, possibly imitating a photograph that was created to imitate another photograph, in an endless recursive spiral, forming a snowball of imitations that rolled out of the very first photograph initially added to the game; in a postmodern world[13] of endless imitation to explore, where players create the games they like with the mechanics offered by this photo toy-game.  Here we would have metagaming directly based on the core game loop and the values it creates[14].

The interface would be minimalistic, initially only showing the basic element around which the play turns; a photo, one at a time.  The photos could be navigated in a sequential order of popularity or by some other metric, such as location.  The photos with the most imitations would be considered the most popular – imitations as likes.  In the same way, players who have gotten the most imitations of their pictures would be considered the most popular.  Would the game then be about collecting the highest count of the best imitations, or to become the most imitated photographer?  Players decide, since “being playful is an activity of people, not of rules”[15].

No introduction would be provided, just that basic element covering the whole screen, and possibly not obvious interactions for the player to discover, as he taps, touches and swipes[16].  This type of interface is inspired by recent mobile apps such as Snapchat[17], Vine[18], Mindie[19] and Rando[20].  The player will see images with given scores, attached to the images they are imitating, and see the ability to take a picture.  Within that context a player should soon realise what she is looking for when taking a picture; the attached images with the highest score should make it instantly visible what works within the game.

Graphic design can make a game or a toy look beautiful, but it can be argued that it is not the reason why anyone likes to play with it, but rather the affordance[21] of play it offers, and maybe that should then be the most, or the only visible element.  That is at least one way to approach the design, that suits well a graphically challenged developer and seems now to be fashionable as the example apps mentioned here show.

Should the development of Phosom proceed in the spirit described here, I would be taking it closer to that of a non-game mobile app, ignoring the elements that define games[22], or at least leaving their implementation up to the players, within the facilities provided.  This may be a natural progression, as it aligns well with my background as a traditional software and mobile app developer with little gaming experience (I did finish Prince of Persia and all episodes of Commander Keen back in the days of DOS[23]!  And I was the proud owner of an Atari 2600 Video Computer System[24]).  That background of skimpy gaming experience can indeed be considered an asset[25], and that is a perspective I will try to embrace.



Phosom is a service dependent on a backend, implemented in Java, running on Google App Engine, which offers an API using Google Cloud Endpoints.  The interface is implemented in HTML5 / JavaScript / CSS using the jQuery Mobile framework, for both the web and smartphone versions.  The mobile app versions are compiled with Apache Cordova, the open source engine behind PhoneGap, using the tooling support of NetBeans.  Images are stored and uploaded directly to Google Cloud Storage and their similarity is computed by a servlet, running on a Google Compute Engine instance, that currently uses the OpenIMAJ toolkit.

Image comparison methods

When first contemplating the feasibility of the idea of creating a game based on image similarity analysis, I searched for readily available libraries to handle the function of image comparison and settled on two libraries to try out:  LIRE[26] and OpenIMAJ[27].  Preliminary tests with those libraries indicated that some kind of play could be facilitated with the image comparison features they offer, computing distance vectors between images based on their histograms.
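To make the histogram-based comparison concrete, here is a minimal sketch of the general technique: build a normalized histogram per image and measure the distance between the two histograms.  This is not the LIRE or OpenIMAJ API (those libraries offer richer feature extractors in Java); it is an illustration, in JavaScript, of the kind of computation involved, with hypothetical function names.

```javascript
// Illustrative histogram comparison, the kind of distance computation the
// image libraries provide (not their actual API).

function histogram(pixels, bins = 8) {
  // pixels: grayscale intensities 0..255; returns a normalized histogram.
  const h = new Array(bins).fill(0);
  for (const p of pixels) {
    h[Math.min(bins - 1, Math.floor((p / 256) * bins))] += 1;
  }
  return h.map((count) => count / pixels.length);
}

function histogramDistance(a, b) {
  // Euclidean distance between two normalized histograms; 0 means identical
  // intensity distributions, larger means less similar.
  return Math.sqrt(a.reduce((sum, v, i) => sum + (v - b[i]) ** 2, 0));
}
```

A histogram only captures the overall tonal distribution, which is exactly why a correct motif in different lighting can score "unfairly" low, as discussed below.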

The fact that both those libraries are implemented in Java led to Google App Engine for Java[28] being chosen as the environment for the backend.  With the development process under way, it became apparent that those libraries could not be used within the Java Runtime Environment provided by App Engine, due to its sandbox restrictions[29].  The possibility of comparing the images on the (mobile device) client was explored, using JavaScript and the HTML5 canvas element[30], but the browser’s Same-origin policy[31] made that difficult when communicating between App Engine and Cloud Storage on different domains.

As a quick solution, a simple servlet using the OpenIMAJ library was created and run in the lightweight Winstone servlet container on a nano compute instance at GreenQloud[32].  The resources of that inexpensive instance were very low, so the image comparison took a long time to run.  After receiving a $2000 starter pack for the Google Cloud Platform[33], the image analysis servlet was moved to a more powerful and expensive Compute Engine instance, with a better response time as a result.  This starter pack, with its six month expiration time, is one motivation to continue the development of Phosom and see where it goes.


When programming the backend, I used a traditional object oriented approach, with object relationships in hierarchies, probably due to my software development background.  With that approach, the relatively simple model behind the game quickly became confusing, and I have since learned that the Entity component system (ECS) design pattern is often regarded as a better fit when programming computer games[34].
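For contrast with the hierarchy-based model, here is a minimal sketch of the ECS pattern applied to this domain.  The component names (photo, imitation, score) are my own assumptions for illustration: entities are bare ids, data lives in components, and "systems" operate on whichever entities carry the components they need.

```javascript
// Minimal Entity component system sketch (component names hypothetical):
// behavior comes from composing components, not from a class hierarchy
// rooted in a Game object.

let nextId = 0;
const components = { photo: new Map(), imitation: new Map(), score: new Map() };

function createEntity() {
  return nextId++;
}

function addComponent(entity, type, data) {
  components[type].set(entity, data);
  return entity;
}

// A system: find the highest-scoring entity that is an imitation.
function topScoringImitation() {
  let best = null;
  for (const [entity, { value }] of components.score) {
    if (components.imitation.has(entity) && (best === null || value > best.value)) {
      best = { entity, value };
    }
  }
  return best;
}
```

Refactoring towards a photo-centered model, as outlined below, becomes easier in this shape: making a photo also be an imitation is just attaching one more component, with no hierarchy to rearrange.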

For the future development of Phosom outlined above I would like to refactor the underlying data model, from being centered around a game entity to being focused on a photo entity instead, and in that process a move to ECS could be in order.  In that same process I would like to consider using hammer.js[35] instead of the jQuery Mobile UI framework currently in use, for the simplified, minimalistic user interactions I have in mind.

Source control

The client code is hosted in source control at:

Code for the image analysis servlet is in source control at:

The backend server code is at:



Given the widespread use of mobile devices and interest in photography, across all demographics, an opportunity to play with a combination of those may be welcomed.  Phosom would not be the first opportunity to play with photography on mobile devices, but could provide a novel angle to approach that play from.  Also it can ignite philosophical thoughts about originality and authenticity, while players use their visual memory to scout their surroundings for reminiscent images.


The desktop prototype of Phosom can be played at:

and the mobile prototype for Android can be downloaded at:



IT University of Copenhagen
Game Design course – instructor:  Miguel Sicart
autumn 2013
Björn Þór Jónsson (

[1] Klaus V. Meier:  An Affair of Flutes: An Appreciation of Play, p. 8 & 10.

[3] “Google Goggles is a downloadable image recognition application…”

[4] “…the open mode, is relaxed… expansive… less purposeful mode… in which we’re probably more contemplative, more inclined to humor (which always accompanies a wider perspective) and, consequently, more playful.” – John Cleese on Creativity: , transcript:

[5] When my brain decides it’s time for great ideas – The Oatmeal:

[6] “Game mechanics are methods invoked by agents, designed for interaction with the game state.”  – Miguel Sicart: Defining Game Mechanics.

[7] “A white-label product…is…produced by one company…that other companies…rebrand to make it appear as if they made it.”  –

[8] “In multiplayer games, other players are typically the primary source of conflict.”  “We like to see how we compare to others, whether it is in terms of skill, intelligence, strength, or just dumb luck.”  – Tracy Fullerton: Game design workshop, 2nd ed., p. 77 & 313.

[9] The playability of the desktop prototype was compared to that of GeoGuessr:

[10] “What does the player need to know?:  Where am I?  What are the challenges?  What can I do/what am I doing?  Am I winning or losing?  What can I do next/where can I go next?  – Miguel Sicart: “User Interface and Player Experience”, lecture slide in Game Design-E2013, ITU, Copenhagen, October 21:

[11] “Metagaming is a broad term usually used to define any strategy, action or method used in a game which transcends a prescribed ruleset, uses external factors to affect the game, or goes beyond the supposed limits or environment set by the game.  Another definition refers to the game universe outside of the game itself.”  –

[12] Andy Nealen et al.: Towards Minimalist Game Design, p. 1 & 2.

[13] „Authenticity is invaluable; originality is non-existent. And don’t bother concealing your thievery – celebrate it if you feel like it. In any case, always remember what Jean-Luc Godard said: “It’s not where you take things from – it’s where you take them to.”“ -Jim Jarmusch

[14] “The metagame, essentially, refers to what everyone else is playing.” -Jeff Cunningham:  What is the Metagame?

[15] Linda A. Hughes:  Beyond the Rules of the Game:  Why Are Rooie Rules Nice?  Annual Meetings of The Association for the Anthropological Study of Play (TAASP), Fort Worth, Texas, April, 1981, p. 189.

[16] “What challenges are developers and designers facing creating apps for touch devices after 30 years of ‘mouse and buttons’.”  Teaching Touch – Josh Clark:

[17] “Snapchat is a photo messaging application.” –

[18] “Vine enables its users to create and post short video clips.”  –

[19] “Mindie is a new way to share life through music video…” –

[20] “Rando is an experimental photo exchange platform for people who like photography.”

[21] “Affordances provide strong clues to the operations of things” – Donald A. Norman: The design of everyday things.

[22] “…we…identify seven elements in games:  1. Purpose or raison d’être  2. Procedures for action.  3.  Rules governing action.  4. Number of required players.  5. Roles of participant.  6. Participant interaction patterns.  7. Results or pay-off.”  – E. M. Avedon:  The Structural Elements of Games.

[23] “DOS, short for Disk Operating System…dominated the IBM PC compatible market between 1981 and 1995…”

[25] ““Students who know every game often have preconceptions about what games are … I have to find ways to make them see that games are an aesthetic form that hasn’t been exhausted. …[it] is sometimes more difficult than starting from scratch with someone who’s maybe a casual gamer or just curious“ – Faye”  – José P. Zagal, Amy Bruckman: Novices, Gamers, and Scholars: Exploring the Challenges of Teaching About Games.

[27] Open Intelligent Multimedia Analysis toolkit for Java:

[30] A basic image comparison algorithm using average hashes, implemented in JavaScript, was considered:

[33] Google Cloud Platform Starter Pack:

[34] When designing a game, an object-oriented approach may lead to “deep unnatural object hierarchies with lots of overridden methods” – “Anatomy of a knockout”

[35] Hammer.js – A javascript library for multi-touch gestures:


Project report done in a Game Engines class at ITU.
This site’s template is a bit outdated and a PDF version
may look better, which can be downloaded from the
Google Drive source document,
but there the embedded videos are missing.


Rotatengine is a JavaScript framework intended to facilitate the creation of games, for mobile touch devices, that require the player to spin around in circles, in different directions, to reach various goals.  As the player turns around, holding the device in front of him or her with arms stretched out, the game’s content moves accordingly as if it is attached to a cylinder or a sphere, within which the player is standing.



Young children often spin around as a form of play, which they perform for the sheer joy of it and maybe the resulting dizzying effect as well[1].  Many forms of dance involve spinning around to various degrees (!) at different moments in time, and people usually dance for their enjoyment.  Spinning around can also be a part of religious acts, whether they be Tibetan[2] or Islamic[3].  The act of spinning in circles has even been promoted as a means towards weight loss[4].

So the impulse to spin around, for different reasons and in various contexts, seems to be quite fundamental in us humans.  Usually the act is related to fun and play, and rotatengine.js is based on the idea of encouraging, and maybe structuring, object play[5] that requires or offers spinning around.

Many types of games can be conceived, based on this frame of play and implemented with this kind of a game engine.  One kind can include textual elements that are arranged in a circle and the player must tap them in the correct order to organize them into a coherent goal.  Another could present the player with images that she must tap on when certain conditions are met, like when the image aligns with another fixed image or text or sound.

Here are outlined more specifically a few game ideas:

  • Rotabet:  The player is presented with the letters of the alphabet in a random order, spread out in a circle around him, as if he is standing within a cylinder where the letters are painted on the wall and his view into that world is through the mobile device screen.  By rotating the device in different directions and angles, the player sees different portions of the cylinder wall and has the goal of tapping the letters in the correct alphabetical order.  As the player taps the letters they are arranged sequentially in a fixed position on the screen, so he sees the progression towards the goal:  The letters of the alphabet being arranged in correct order.  A game like this utilizing rotatengine.js can then offer various gameplay mechanics, such as penalizing incorrect choices (taps) by cropping off the already accomplished result, rewarding for so and so many correct choices in a row by then allowing an incorrect choice without penalty, and giving an overall score based on how quickly the goal was reached.  Levels can be made increasingly difficult, for example by varying the required player response time; at first removing the choices (letters) as they are chosen, to make the level easier as it progresses, and then at later levels, leave all choices in to make the level constantly difficult.  Fonts could change with level progression, with at first a large and readable sans-serif font, then later on, less (quickly) readable script fonts.  This could be considered as an educational game, where the player gets practice and fluency with the alphabet.  Rotabet is actually the initial idea that inspired the creation of rotatengine.js
  • Rotaquote, a quotes game:  The player sees the name of a known person fixed on the screen and then a few words scattered around him or her in random order.  Here the goal would be to tap the words in an order that would arrange them into a (famous) quote that is attributed to the named person.
  • Similar to Rotaquote, a game could be created that is based on poetry instead of quotes.  Here the player would have to assemble (known) poems, line by line.  Each part (stanza) of the poem could form one level of the game.  If the player makes too many errors, she will for example have to start again at the previous level.  The goal is to assemble the whole poem.
  • Form grammatically correct sentences:  Similar to the quotes and poetry games, the player is presented with a soup of words she has to arrange into any of possibly several grammatically correct sentences.
  • Anagram:  Player is given the task of assembling a given number of anagrams from a set of letters she can spin around, for example:  emit, item, mite, time.
  • Palindrome:  Given a random sequence of letters, the player has to arrange them into a palindrome, for example:  abba, radar, kayak.
  • Rotatag – match pictures with tags:  Various photo hosting services, like Flickr or Instagram, offer users the ability to (#) tag pictures with descriptive words.  Those services also offer public programming interfaces (APIs), and a game could use them to pick a few random pictures each time to spread around the player, then pick a tag from one of the pictures to place in a fixed position on the screen.  The player is then to guess which picture the tag came from by tapping on it.
  • Match a picture to a written word:  Around the player, on the walls of the cylinder, are visual representations of various objects and beings, such as a chair or a duck.  A word appears on the screen in a fixed position and the player is to tap on the visual that matches that word.  So if the word “horse” appears, then a player must spin around until he sees a picture of a horse and tap it, with a resulting encouragement in the form of a cheering sound or some in-game value, such as a trophy.  This type of a game could be suitable for young children learning to read or those learning a new language.
  • Tap a graphic matching the sound you just heard:  Five visuals are spread around the player and at regular intervals, one of five matching sounds is played; the player must in time rotate to the matching graphic and tap it.  So for example, if the player hears a goat screaming, she must rotate to the position where the drawing of a goat is located and tap it in time.  Progressing levels decrease the required response time and the goal is to keep playing as long as possible without an error or too long a response time.  This type of game is inspired by the audio game Bop It[6], originally implemented in specialized hardware (which this author has tried) and now apparently available in an iOS version[7], with a clone available for Android[8].  But those touch-device implementations don’t require full body movement, like the game proposed here does, though from their description they do seem to require some device movement by recognising hand gestures (possibly using Dynamic Time Warping (DTW) and Hidden Markov Models[9], which were considered for the implementation of rotatengine.js but deemed not a good fit, as they seem best for sensing spontaneous, short-lived movements, while the engine discussed here requires continuous monitoring of movement and position – more on that below).
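The Rotabet mechanics described in the list above (crop the assembled result on a wrong tap, earn a free mistake after a streak of correct taps) can be sketched as a small state machine.  This is an illustrative sketch only; the constants and function names are assumptions, not part of rotatengine.js.

```javascript
// Sketch of Rotabet scoring rules (constants are assumptions): a correct tap
// extends the assembled sequence, an incorrect tap crops off part of the
// accomplished result, and a streak of correct taps earns one free mistake.

const STREAK_FOR_FREE_PASS = 5; // assumed streak length for a reward
const PENALTY = 2;              // assumed letters cropped per mistake

function makeGame(target) {
  return { target, assembled: "", streak: 0, freePasses: 0 };
}

function tap(game, letter) {
  if (letter === game.target[game.assembled.length]) {
    game.assembled += letter;
    game.streak += 1;
    if (game.streak % STREAK_FOR_FREE_PASS === 0) game.freePasses += 1;
  } else if (game.freePasses > 0) {
    game.freePasses -= 1; // earned mistake: no penalty
  } else {
    game.assembled = game.assembled.slice(0, -PENALTY); // crop the result
    game.streak = 0;
  }
  return game;
}
```

Level difficulty then becomes a matter of tuning these constants and the response-time window, rather than changing the core loop.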



Rotatengine is dependent on accessing the device’s sensors in order to position the game world (cylinder or sphere) according to the player’s movements.  While the engine could be implemented for one particular platform, for example iOS, in code native to it, thus gaining the highest possible speed, the decision was made to try to target more than one, if not all platforms, by using JavaScript, HTML5 and CSS3, to be run in each platform’s web view.

JavaScript is increasingly gaining access to the graphics acceleration hardware on the platforms it runs.  Though the performance of applications implemented in JavaScript and HTML is still, and probably always will be, less than that of native applications, the graphics requirements of rotatengine.js are quite modest and so it might not gain much from a native implementation – investigation of that possibility is currently outside the scope of this project.

Visual rendering and interaction

At the current stage of implementation, focus has been on the part that handles rendering of elements in the desired manner and the interaction with those rendered elements by the player’s circular movements.

Choice of technology

The game world provided by rotatengine.js must in some ways give a three dimensional illusion, where the player is to sense that he is the pivot around which game elements rotate as he spins in circles.  Modern desktop browsers provide the WebGL 3D graphics programming interface (API), which uses the HTML5 canvas element and provides code execution on a computer’s Graphics Processing Unit (GPU).  That could be an obvious choice for an engine that manages content in 3D.  Mobile devices have lacked support for WebGL[10], though the most recent versions of web views on some platforms are now supporting it, with iOS being a notable exception[11].  So to reach the widest range of mobile devices, it is worthwhile to consider other options for 3D content rendition.

The CSS3 specification includes a module that defines 3D transforms that “allows elements styled with CSS to be transformed in two-dimensional or three-dimensional space”[12] and CSS3 3D Transforms are supported on most of the recent mobile platforms[13].  Given that wide support and the simplicity of arranging various types of HTML elements – letters and images, as in the game ideas listed above – with the same set of CSS rules, it seems to be a good choice to base the visual implementation of rotatengine.js on CSS3 3D transforms.

Interactive animation mechanics

A couple of approaches have been considered to manage the objects in the game world as the player interacts with it.

The first approach considered is inspired by the Cover Flow[14] graphical user interface, where off-screen game objects would wait in line, decoupled from the animation mechanism, until they are next on screen.  Then they would be coupled with the animation and flow across the screen, until decoupled again at the other end.  The objects would be incrementally skewed to either side and translated in distance from the viewer as they flow across the screen, to simulate a cylindrical / circular 3D effect.  This approach could have the benefit of allowing a virtually unlimited number of game objects, which would wait in the off-screen queues for the player to pass by them as he rotates, so that more than one full circle could be required to display all objects.


The Cover Flow interface[15]

Another approach is to arrange the elements on a circle using the trigonometric functions sine and cosine.  This is the method currently implemented, where each item is assigned an angle in radians by dividing a full circle (2 * π) by the number of game elements.  Then the x and −z coordinates are calculated for each element by applying cosine and −sine, respectively, to its angle multiplied by the integer value of its sequential order.  The coordinates are multiplied by a radius, which is itself a scaling of the view’s width, so the elements are spread evenly around a circle that is roughly double the width of the scene.  Each item is rotated individually by its angle minus π / 2 to have it face the inside of the circle.  The radius value is added to each z coordinate to place the viewer inside the circle, close to its perimeter.  A simplified version of the relevant code is as follows:

       var fullCircleRadians = Math.PI * 2;
       var perspective = viewWidth / 2;
       var radius = viewWidth * 1.2;
       container.css({"transform": "perspective(" + perspective + "px)"});
       items.each(function (i) {
            // let's go clockwise, thus fullCircleRadians - ...
            var thisItemRadians =
               fullCircleRadians - (self.radiansPerItem * i) + viewRotation;
            var x = Math.cos(thisItemRadians) * radius;
            var z = -Math.sin(thisItemRadians) * radius;
            var transform =
                "perspective(" + perspective + "px) " +
                "translateZ(" + (z + radius) + "px) " +
                "translateX(" + x + "px) " +
                "rotateY(" + (thisItemRadians - (Math.PI / 2)) + "rad)";
            $(this).css({"transform": transform});
       });

Initial testing of this game element rendering was performed in a desktop browser, where interaction input was read from mouse movement and keyboard button presses.  One animation anomaly that became apparent during this testing was that if the CSS3 transition-duration was set too high and fast movements were made, the elements would spin around themselves and take a shorter path to their new destination, across the circle’s area, instead of animating smoothly along its perimeter.  Setting the duration to 0.1 seconds solved this for most animation speeds; zero seconds worked as well, but sacrificed a little animation smoothness.

Here the radius is subtracted from the z value, instead of adding to it, to have the viewer / camera outside the circle with a full view of it.

When most flaws had been ironed out and the elements were animating in the intended manner in a desktop browser with mouse interaction, it was time to proceed to testing on mobile devices with input from their sensors.

Differences between sensors and platforms

Mobile devices commonly offer three types of sensors[16] that provide information on their orientation – magnetometer (compass), accelerometer and gyroscope – and together those sensors can be referred to as an Inertial Measurement Unit (IMU)[17].  Of these, the compass is a straightforward choice for input to the rotatengine.js game world, as it provides information on the device’s heading in degrees.

Magnetometers on mobile devices can be inaccurate when used indoors, due to interference from the building[18].  That would not be problematic for rotatengine.js as it does not need an accurate measurement of heading, but rather a responsive and mostly stable and consistent reading of where the device is pointed.

Apache Cordova[19], the open source engine behind PhoneGap, was used to package rotatengine.js into applications to be run on the Android and iOS mobile platforms.  NetBeans with its recent Cordova support was used to manage that process[20].

Rotatengine on the iOS simulator, run through the built-in Cordova support in NetBeans.


A sample application of rotatengine.js running on Android and iOS, receiving periodic heading input via the Cordova API[21], performed correctly but without the desired responsiveness; when moving around in circles holding the device with arms stretched out, the updates of the game world objects were quite slow and somewhat jittery, equally so on both Android (Nexus 4) and iOS (iPad 2) devices.  See video recordings of tests:


Though Cordova’s API offers the option to specify the update frequency from the compass, further debugging showed that updates were in fact being delivered around every 200 milliseconds, even though a 50 ms update interval was requested.  This seems to be a hardware limitation on both platforms tested and a 200 ms interval is unacceptably long for a smooth animation and responsive updates.

A simple attempt was made to extrapolate future changes in rotation with shorter intervals, from the previous update interval, until the next update would be received from the compass.  Those future changes are calculated by dividing the delta of the previous two compass updates by four, and the scene is rotated by that fraction every 50 milliseconds.  Those intermediate updates from extrapolated values did not result in much improvement when run on a device, and as expected, an initial halt of animation is visible when rotation is started or direction is changed and two compass readings are being collected (with their 200 ms interval) for further extrapolation.  See video recording of test:
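The extrapolation step described above can be sketched roughly as follows; the function and variable names are illustrative, not the engine’s actual API:

```javascript
// Between two ~200 ms compass updates, advance the heading by a quarter
// of the last observed delta on every 50 ms tick, as described above.
function makeHeadingExtrapolator() {
  var heading = null;
  var step = 0;
  var lastReading = null;
  return {
    // called when a real compass reading arrives (~every 200 ms)
    onCompassUpdate: function (reading) {
      step = (lastReading === null) ? 0 : (reading - lastReading) / 4;
      lastReading = reading;
      heading = reading;
    },
    // called every 50 ms to produce an intermediate heading
    tick: function () {
      heading += step;
      return heading;
    }
  };
}
```

Note the weakness mentioned above: until two real readings have arrived, step is zero and the scene halts.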

Testing raw compass sensor data as input to rotatengine.js


Gyroscopes have recently become available as sensors in mobile phones, first in the iPhone4[22] and then in various Android devices.  That kind of sensor gives information on orientation changes[23], so it was the next option as input to rotatengine.js.  Here, Cordova’s API was bypassed, as HTML5 provides direct access to updates from the gyroscope (via a callback function attached to window.ondeviceorientation)[24].  The device’s heading is read from one of three values returned from the sensor (alpha), to which the scene is rotated.
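A minimal sketch of reading the alpha angle and feeding it to the scene might look as follows; the event shape follows the DeviceOrientation specification, while updateScene stands in for the engine’s actual redraw entry point:

```javascript
// alpha is reported in degrees (0-360); the layout code above works in
// radians, so convert before rotating the scene.
function alphaToRadians(alphaDegrees) {
  return alphaDegrees * Math.PI / 180;
}

// Browser-only wiring; guarded so the conversion can be tested elsewhere.
if (typeof window !== "undefined") {
  window.ondeviceorientation = function (event) {
    updateScene(alphaToRadians(event.alpha)); // hypothetical redraw entry point
  };
}
```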

The gyroscope on the iOS devices tested, iPhone 4s and iPad 2, proved to be quite stable and its raw data readings provided smooth updates of the game objects, even without any CSS3 transition-duration and its smoothing effect of animation easing[25].  See test video recording:

The same can not be said about the results from the Android device tested (Nexus 4), where orientation updates were very jittery and unstable; even when the device was held still, the game objects would jump around constantly.  See test video recording:

Using the gyroscope with the current implementation would be adequate for targeting iOS devices, but if the aim is a more cross-platform solution, then the raw data from the sensors needs to be handled in some manner that provides both stability and responsiveness.  One approach considered was to implement a basic low pass filter[26] on the data returned from the sensors, but having something to filter first requires accumulating sensor data to apply the filter on, and that results in an initial delay and a lack of responsiveness.
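Such a basic low-pass filter could, for example, be an exponential smoothing of the readings; a generic sketch, not code from the engine:

```javascript
// Exponential low-pass filter: each new reading is blended with the
// previous filtered value. alpha near 0 gives strong smoothing (stable
// but laggy); alpha near 1 follows the raw sensor (responsive, jittery).
function makeLowPassFilter(alpha) {
  var filtered = null;
  return function (reading) {
    filtered = (filtered === null)
      ? reading
      : filtered + alpha * (reading - filtered);
    return filtered;
  };
}
```

This illustrates the trade-off described above: the smoothing only helps once several readings have accumulated, at the cost of responsiveness.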

A viable approach for using sensors to update game object positions in a responsive and stable manner, on multiple platforms, could be to fuse the information from multiple sensors[27] and apply advanced filtering on the data with statistical estimates of the real heading.  The gyroscope and accelerometer can be combined to form a 6-axis inertial sensor (roll, pitch, yaw and vertical, lateral, longitudinal acceleration).  Two prominent methods for integrating gyro and accelerometer readings are the Kalman filter[28] and the Complementary filter; the latter is easier to understand and implement[29] and is said to give similar results[30].  Also, the lighter implementation of the Complementary filter requires less processing power, an important consideration for battery consumption on mobile devices.

The next step in developing the rotational interaction with game elements in rotatengine.js would be to apply this fusion of the gyroscope and accelerometer with either the Kalman or Complementary filter[31].  The Complementary filter’s simplicity makes it a tempting first choice.
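A complementary filter of the kind referred to above can be sketched roughly as follows; this is a generic formulation with illustrative names, not the engine’s implementation:

```javascript
// Complementary filter: integrate the gyroscope rate for short-term
// responsiveness, and continuously pull the estimate toward the
// accelerometer/compass-derived absolute angle to cancel gyro drift.
// weight is typically close to 1 (e.g. 0.98).
function makeComplementaryFilter(weight) {
  var angle = 0;
  return function (gyroRate, accelAngle, dtSeconds) {
    angle = weight * (angle + gyroRate * dtSeconds)
          + (1 - weight) * accelAngle;
    return angle;
  };
}
```

The single weight parameter is what makes this filter so much lighter to tune and run than a Kalman filter.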

Next steps

Further development of rotatengine.js includes reacting to interaction with game elements by touch, where events would be fired that a game implementation can register for and different element behaviours could be defined for those events, like animating to a different place or disappearing.

Also, the game world could be rendered in a spherical fashion, in addition to the cylindrical one, where navigation of game elements would be done by tilting the device up- and downwards, along with the basic circular motion.  Options for this kind of spherical world could include the definition of how many layers of spheres surround the player and whether the spheres are shrinking towards the player, to allow for a more dynamic and engaging environment.


In the first phase of the implementation discussed here, attention to code structure for the project has been minimal and has at most consisted of encapsulating functionality in object literals and function closures[32].  For further development, the module pattern[33] has been considered as a way to organise the code, with the mediator pattern[34] as a candidate for inter-module communication.

The module pattern fits well into the Entity component system (ECS)[35] design pattern, which favors composition over inheritance.  When designing a game engine that may handle entities that have many different variations, an object-oriented approach may lead to “deep unnatural object hierarchies with lots of overridden methods”[36].  A game engine based on the ECS pattern, however, “provides the ultimate flexibility in game design” where you “mix and match the pre-built functionality (components) to suit your needs”[37].

With rotatengine.js based on the ECS pattern, a game using it can define behaviour and functionality by referencing different modules, for example specifying how game objects respond when a player taps on them.  Work on the filtered sensor fusion described above should continue within a separate module that can, when implemented, be swapped with the direct, unfiltered single-sensor reading implementation currently in use.
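The composition idea behind ECS can be illustrated with a minimal sketch; all names here are hypothetical, not actual rotatengine.js modules:

```javascript
// Entities are plain objects; behaviour comes from mixing in components
// rather than from an inheritance hierarchy.
function createEntity() {
  return { components: {} };
}

function addComponent(entity, name, component) {
  entity.components[name] = component;
  return entity;
}

// A system operates on every entity that has the component it needs.
function runTapSystem(entities) {
  entities.forEach(function (entity) {
    var tappable = entity.components.tappable;
    if (tappable) {
      tappable.onTap(entity);
    }
  });
}

// Example: a game object that disappears when tapped.
var letter = addComponent(createEntity(), "tappable", {
  onTap: function (entity) { entity.removed = true; }
});
```

A different response to tapping – animating away instead of disappearing – would just be a different component, with no change to the entity or the system.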

For managing modules within rotatengine.js, the Asynchronous Module Definition (AMD) API[38] has been chosen, as it works better in the browser / webview than other JavaScript module APIs like CommonJS, and “the AMD execution model is better aligned with how ECMAScript Harmony modules are being specified. … AMD’s code execution behaviour is more future compatible”[39].  Work has started on modularising the current implementation, which can be seen in the project’s js/rotatengine directory.
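An AMD-format module for the engine might look as sketched below.  A tiny stand-in for define() is included only to make the sketch self-contained; in the project a real AMD loader (such as RequireJS) provides it, and the module names are illustrative:

```javascript
// Minimal stand-in for an AMD loader's define(), for illustration only.
var modules = {};
function define(name, deps, factory) {
  modules[name] = factory.apply(null, deps.map(function (d) {
    return modules[d];
  }));
}

// A rotation module in AMD form (module names are illustrative):
define("rotatengine/rotation", [], function () {
  return {
    radiansPerItem: function (itemCount) {
      return (2 * Math.PI) / itemCount;
    }
  };
});

// Another module declaring the first one as a dependency:
define("rotatengine/layout", ["rotatengine/rotation"], function (rotation) {
  return {
    itemRadians: function (index, itemCount) {
      return rotation.radiansPerItem(itemCount) * index;
    }
  };
});
```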

Data schema and automatic content generation interfaces

Each level instance in rotatengine.js contains game objects created from data defined in a JSON configuration file.  For now, the structure of the data is arbitrary, defined ad hoc to have something running as quickly as possible.  But when the engine is to be ready for use by a party not involved in its development, some means is needed to communicate the required structure of the configuration files.

To define the structure of configuration files for rotatengine.js, the JSON Schema specification[40] has been considered.  To ease the creation of a schema definition, there are tools based on that specification that generate a definition from an example data file[41].

Declaring data files based on the schema could be done manually, by reading the schema and carefully adhering to it in a text editor.  A specialised user interface for entering level data according to the schema could help a level designer get up to speed.  User interfaces can be automatically generated from a schema definition, and the Metawidget object/user interface mapping tool[42] could be a helpful choice in that regard, with its JSON Schema UI Generator[43].  With that tool, the engine could include simple webpages that allow data entry resulting in a JSON string conforming to the given JSON Schema[44], which could then be saved to a configuration file ready for use[45].
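As an illustration, a schema for a level configuration might look something like the following; the field names are hypothetical, since the engine’s actual data layout is still arbitrary:

```json
{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "title": "rotatengine.js level",
  "type": "object",
  "required": ["items"],
  "properties": {
    "name": { "type": "string" },
    "items": {
      "type": "array",
      "minItems": 1,
      "items": {
        "type": "object",
        "required": ["type", "content"],
        "properties": {
          "type": { "enum": ["letter", "image"] },
          "content": { "type": "string" }
        }
      }
    }
  }
}
```

A schema like this serves both as documentation for a level designer and as input to a UI generator such as Metawidget.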



rotatengine.js is a specialised engine designed for limited game mechanics, where the player is to spin around in circles and interact with game objects as they rotate around him.  But within that limitation, it is possible to conceive diverse types of games, as the examples above show.  The engine will hopefully lead to a unique variety of play and games.

As of this writing, the runnable code can be initiated by opening the index.html file in the project’s root, or by compiling it into a mobile application with Apache Cordova or Adobe PhoneGap.  As mentioned previously, work has started on modularizing that same code in the js/rotatengine directory, but it is currently not in a runnable state.

The project can be found on the attached optical disk and online at:


IT University of Copenhagen
Game Engines course – instructor:  Mark J. Nelson
autumn 2013
Björn Þór Jónsson (

[1] “Ilinx is a kind of play” that “creates a temporary disruption of perception, as with vertigo, dizziness, or disorienting changes in direction of movement.”

[2] First of Five Tibetan Rites involves spinning around “until you become slightly dizzy”:

[3] “Sufi whirling is a form of … physically active meditation … spinning one’s body in repetitive circles, which has been seen as a symbolic imitation of planets in the Solar System orbiting the sun”

[4] “How Spinning Around in a Circle Like a 4-year old Child will Skyrocket Your Weight Loss Success“

[5] Physical activity with toys in object play:

[6] Bop It audio game / toy:

[9] A plethora of articles can be found on the subject of using Dynamic Time Warping and Hidden Markov Models for recognising gestures, for example:

“Smartphone-enabled Gestural Interaction with Multi-Modal Smart-Home Systems”

“Online Context Recognition in Multisensor Systems using Dynamic Time Warping”

“Motion-based Gesture Recognition with an Accelerometer”

“A Novel Accelerometer-based Gesture Recognition System”

“Gesture Recognition with a 3-D Accelerometer”

“uWave: Accelerometer-based personalized gesture recognition and its applications”

“Improving Accuracy and Practicality of Accelerometer-Based Hand Gesture Recognition”

“Using an Accelerometer Sensor to Measure Human Hand Motion”

[10] CocoonJS is an interesting technology that makes up for the lack of WebGL support by bridging into native OpenGL libraries:

[11] Compatibility table for support of WebGL in desktop and mobile browsers:

[12] CSS Transforms Module Level 1:

[13] CSS3 3D Transforms support:

[16] “A Survey of Mobile Phone Sensing”

[18] Interestingly, disturbances in the Earth’s magnetic field within buildings can be used to advantage when positioning devices within them:

Startup Uses a Smartphone Compass to Track People Indoors

“Indoor Positioning Using a Mobile Phone with an Integrated Accelerometer and Digital Compass”

“Making Indoor Maps with Portable Accelerometer and Magnetometer”

[19] “Apache Cordova is a set of device APIs that allow a mobile app developer to access native device function such as the camera or accelerometer from JavaScript”:

[20] NetBeans 7.4 can build an HTML5 project as a native Android or iOS application

[21] Cordova API: At a regular interval, get the compass heading in degrees:

[22] Steve Jobs demonstrates iPhone4’s gyroscope capabilities:

[23] “A gyroscope measures either changes in orientation (regular gyro or integrating rate gyro) or changes in rotational velocity (rate gyro)”  –

[24] DeviceOrientation Event Specification:

[26] Example of a low-pass filter for smoothing sensor data:

[27] Google Tech Talk:  “Sensor Fusion on Android Devices: A Revolution in Motion Processing”

[28] “The Kalman filter, also known as linear quadratic estimation (LQE), is an algorithm that uses a series of measurements observed over time, containing noise (random variations) and other inaccuracies, and produces estimates of unknown variables that tend to be more precise than those based on a single measurement alone.”

[29] “Reading a IMU Without Kalman: The Complementary Filter”

“Android Sensor Fusion Tutorial”

[30] “I have used both of them and find little difference between them. The Complimentary filter is much easier to use, tweak and understand. Also it uses much less code…” –

“In my opinion the complementary filter can substitue the Kalaman filter. It is more easy, more fast. The Kalman filter is the best filter, also from the theorical point of view, but the its complexity is too much….” –

“The Balance Filter – A Simple Solution for Integrating Accelerometer and Gyroscope Measurements for a Balancing Platform”

[31] An interesting practical application of the Kalman filter is the Android app Steady compass, which has the description that “sensor data is treated by a Kalman filter in order to obtain superior stability in readings”:

[32] “JavaScript Patterns & Grokking Closures!”

[40] JSON Schema:

[41] JSON –

[42] Metawidget:

[45] An HTML5 saveAs() FileSaver implementation:

Loading, Editing, and Saving a Text File in HTML5 Using Javascript:




Other smaller projects in the Game Engines class:

Project 1:  Platformer game engine


For the first programming assignment in Game Engines I’ve implemented a platformer engine in JavaScript and used the Canvas element in HTML5.

The main components of the engine are four pseudo-classes / functions (JavaScript only has the notion of functions):  Player, Platform, GameLoop and GameOver.  Those classes and supporting variables / data are encapsulated in another function called Plafgine (plat-former-engine) for closure.

At the top of plafgine.js are a few configuration variables to define the level, which could be factored into another file:

  • defaultPlayerPosition:  Defines the size of the main player and where it is positioned initially.

  • enemyPositions:  Positions for the enemies and their sizes.

  • platformDefinitions:  Placement of the platforms.

  • defaultEnemyAutomation:  This is perhaps an interesting experiment in using the dynamic nature of JavaScript to make the behaviour of enemies configurable, by plugging in a function that implements their movement.
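The plug-in behaviour idea from the last bullet can be sketched as follows; the names and constants are illustrative, not taken from plafgine.js:

```javascript
// An enemy's movement is a plain function that can be swapped out.
// This one patrols back and forth between two x bounds.
function makePatrolBehaviour(leftBound, rightBound) {
  var direction = 1;
  return function (enemy) {
    enemy.x += direction * enemy.speed;
    if (enemy.x <= leftBound || enemy.x >= rightBound) {
      direction = -direction; // turn around at the edges
    }
  };
}

var enemy = { x: 0, speed: 5, behave: makePatrolBehaviour(0, 10) };
enemy.behave(enemy); // x: 5
enemy.behave(enemy); // x: 10, direction flips
enemy.behave(enemy); // x: 5
```

A different enemy type would simply be configured with a different behaviour function, with no change to the Player implementation itself.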

There are no real physics in this platformer; rather, it implements pseudo-physics by starting a jump at a fixed speed and decreasing it on each game loop until the jump speed reaches zero; then a fall speed is incremented until a collision with a platform or the ground happens.  Those are the jump, checkJump, checkFall and fallStop functions within the player object.
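A rough sketch of one step of that pseudo-physics, with illustrative constants and names rather than the actual plafgine.js code:

```javascript
// One vertical step of the pseudo-physics: while jumpSpeed > 0 the
// player rises and the speed decays; afterwards a growing fallSpeed
// pulls him down until he reaches the ground (y grows downwards,
// as on the canvas).
function stepVertical(player, groundY) {
  if (player.jumpSpeed > 0) {
    player.y -= player.jumpSpeed;   // still rising
    player.jumpSpeed -= 1;
  } else {
    player.y += player.fallSpeed;   // falling
    player.fallSpeed += 1;
    if (player.y >= groundY) {      // landed
      player.y = groundY;
      player.fallSpeed = 0;
    }
  }
}
```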

Different instances of the same Player implementation are used for both the main player and the enemies (NPCs), with their configurations differing in whether they are automated, and then with the added behaviour function mentioned above.  Some sort of class inheritance could have been a good idea here.

The collision detection is as inefficient as can be: player collisions are checked against all platforms and all other players on each game loop.  Spatial segmentation of what to check against would of course be better for any reasonably sized level.

Control of the main character is handled by registering pressed keys into a state variable and then reading those states on each game loop (in the player implementation) and moving the character accordingly.

Camera movement is implemented by keeping the main character still and moving all other game world objects in the opposite direction when the character has reached a certain threshold on the screen.  That threshold is in fact centered on the screen, so the character is pretty much always in the horizontal center.  There are glitches in the implementation of this camera movement that can almost always be reproduced when jumping from the highest platform: the player doesn’t fall completely to the ground, but that can be fixed by jumping again!  The cause of this should be investigated in another iteration.

Timing is used to determine when to update player sprites based on their activity; when a predefined amount of time has elapsed the portion of the sprite to render is updated.

The same goes for the lives bookkeeping: only when a set interval has elapsed can the count of lives be decreased, so all lives don’t disappear instantly when multiple character collisions are fired.  So if the main character hits an enemy he loses one life, and loses another if he keeps hitting an enemy after the set interval has elapsed.  Unless the main player hits the enemy from the top – jumps on top of him – then the enemy gets killed.
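That interval-based bookkeeping can be sketched as follows, with illustrative names rather than the actual implementation:

```javascript
// A life can only be lost when at least intervalMs has passed since
// the previous loss, so overlapping collision events in consecutive
// game loops don't drain all lives at once.
function makeLifeCounter(lives, intervalMs) {
  var lastHitAt = -Infinity;
  return {
    lives: function () { return lives; },
    hit: function (nowMs) {
      if (nowMs - lastHitAt >= intervalMs) {
        lives -= 1;
        lastHitAt = nowMs;
      }
    }
  };
}
```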

Then the GameLoop function / class calls itself repeatedly via setTimeout until all lives have been lost – then GameOver is called – and in each round it updates player positions, either from input or automation, checks collisions, checks whether to update any pseudo-physics, and then draws all players and platforms.

The game seems to run more smoothly in WebKit-based browsers like Chrome or Safari than in Firefox, for example.



Project 2:  Wireframe renderer

Implementing the 3D wireframe renderer was pretty much straightforward after reading through the given overview of the 3D transform process[1] for the second time and realizing that what’s needed is basically computing three matrices, multiplying them together and using the resulting matrix to multiply each vertex as a column vector, as shown in the given pseudocode.

The implementation consists of an HTML5 file, index.html, that includes jQuery Mobile for easy inclusion of a few UI widgets, and then there is the wireframe rendering code in 3dWireframeRenderer.js.

The function getCameraLoacationTransformMatrix implements what’s described in section 3.1.1 of the overview, Setting the camera location,

function getCameraLookTransformMatrix returns the matrix described in section 3.1.2, Pointing and orienting the camera,

and the projection matrix from section 3.2 comes from the function getPerspectiveTransformMatrix.
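The core of that pipeline – multiplying 4×4 matrices and applying the result to a vertex as a column vector – can be sketched as follows; this is a generic formulation, not the code from 3dWireframeRenderer.js:

```javascript
// Multiply two 4x4 matrices, represented as arrays of rows.
function multiplyMatrices(a, b) {
  var result = [];
  for (var row = 0; row < 4; row++) {
    result[row] = [];
    for (var col = 0; col < 4; col++) {
      var sum = 0;
      for (var k = 0; k < 4; k++) {
        sum += a[row][k] * b[k][col];
      }
      result[row][col] = sum;
    }
  }
  return result;
}

// Apply a combined matrix to a vertex [x, y, z] as a column vector
// with homogeneous coordinate w = 1; returns [x', y', z', w'].
function transformVertex(m, v) {
  var x = v[0], y = v[1], z = v[2];
  return [
    m[0][0] * x + m[0][1] * y + m[0][2] * z + m[0][3],
    m[1][0] * x + m[1][1] * y + m[1][2] * z + m[1][3],
    m[2][0] * x + m[2][1] * y + m[2][2] * z + m[2][3],
    m[3][0] * x + m[3][1] * y + m[3][2] * z + m[3][3]
  ];
}
```

The three matrices from the functions above would be multiplied together once per frame, and the combined matrix applied to every vertex.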

The test meshes are in meshdata.js; I got them from and its export function.  One of the biggest difficulties was deciphering the faces array in the JSON export from there, but then I found some documentation[2] on where the vertex indices are located, and the function getTriangleFaces does the job of extracting the vertices for each face.

When I had the function renderWireframe (basically like the given pseudocode) drawing vertices (I have them cover four pixels for clarity) and connecting them with lines, I had some difficulty finding a good combination of near and far values and camera Z position.  Adding sliders for those values in the UI helped, but near clipping seems to happen quite far away from the camera – I haven’t found a combination of near, far and camera Z position that allows the object to come near the camera without clipping.  However, if I reverse the near and far values, for example set near to -300 and far to -10, and the camera Z position to 150, then the object (cube) renders happily close to the camera; is that a feature of the transformation matrices or a bug in my implementation?  I don’t know…

The camera movement could be connected to the arrow / WASD keys and the mouse, but seeing clearly the interplay between the camera’s XYZ position and the near and far planes is of most interest here, so I’ll let those sliders suffice.

I tried deriving the width and height from a given FoV, near plane position and aspect ratio, as discussed in the overview, but that didn’t play well with my UI so I abandoned it; what I tried can be seen in the function getWidthAndHeightFromFOV.



[1] “Overview of transforms used in rendering”

[2] three.js JSON Model format 3.1:


Project 3:  Pathfinding

For this project I started by reading a web article titled A* Pathfinding for Beginners for a description of how the algorithm proceeds, but then I based the implementation pretty much verbatim on the pseudocode in the Wikipedia entry for the A* algorithm, along with supporting functions.
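For reference, a compact grid-based A* in the spirit of that pseudocode might look as follows; this sketch uses 4-way movement and a Manhattan heuristic, whereas the actual pathfinding.js differs in details such as diagonal moves:

```javascript
// A* on a grid of 0 = free, 1 = wall; start and goal are [x, y] pairs.
// Returns the path as a list of [x, y] pairs, or null when the open
// set becomes empty (target unreachable).
function aStar(grid, start, goal) {
  function key(p) { return p[0] + "," + p[1]; }
  function h(p) { // Manhattan-distance heuristic
    return Math.abs(p[0] - goal[0]) + Math.abs(p[1] - goal[1]);
  }
  var open = [start];
  var cameFrom = {};
  var gScore = {};
  gScore[key(start)] = 0;
  while (open.length > 0) {
    // pick the open node with the lowest f = g + h
    open.sort(function (a, b) {
      return (gScore[key(a)] + h(a)) - (gScore[key(b)] + h(b));
    });
    var current = open.shift();
    if (current[0] === goal[0] && current[1] === goal[1]) {
      var path = [current]; // walk cameFrom back to the start
      while (cameFrom[key(current)]) {
        current = cameFrom[key(current)];
        path.unshift(current);
      }
      return path;
    }
    [[0, 1], [0, -1], [1, 0], [-1, 0]].forEach(function (d) {
      var n = [current[0] + d[0], current[1] + d[1]];
      if (n[1] < 0 || n[1] >= grid.length ||
          n[0] < 0 || n[0] >= grid[0].length ||
          grid[n[1]][n[0]] === 1) {
        return; // off the grid or blocked
      }
      var tentative = gScore[key(current)] + 1;
      if (gScore[key(n)] === undefined || tentative < gScore[key(n)]) {
        gScore[key(n)] = tentative;
        cameFrom[key(n)] = current;
        open.push(n);
      }
    });
  }
  return null;
}
```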

The implementation can be run by opening the index.html file in a recent browser (I’ve tried Chrome and Firefox) and pressing the Start button.  The chasing agent is represented by the letter C and the goal or target is represented by T.

When the target is moving, the run can end without the target being reached, when the open set becomes empty (line 145 in pathfinding.js), but the run can also end with the target reached.  It could be worth looking at the D* algorithm for this case.

The target moves randomly one step at a time, horizontally, vertically or diagonally, except when it is the next neighbor of the chasing agent (C); then it tries to choose a move that results in it not being the chaser’s neighbor, which is not always possible (when the target is on the grid’s perimeter – see the last condition in the do…while loop at line 198).  It’s possible to keep the target fixed in its starting position by unchecking the checkbox (Target (T) wanders and tries to avoid) in the interface.



This past spring, a competition to build an Íslendingaapp was held among university students, and though I had little time to spare, entering was quite tempting, as I had picked up the thread of my computer science studies aiming to graduate in the spring and was thus enrolled at the University of Iceland.  What I most wanted to implement was a list of the most popular names in a person’s family, with the idea of connecting that feature to Nefna, an app for Icelandic personal names that I made as my B.Sc. final project.  I mentioned this idea to the competition’s representatives, and a list of the most popular names has since been implemented in the Íslendingabók web interface.

The competition’s technical premises originally called for solutions implemented specifically for the Android operating system.  I had recently dived deep into iOS programming, implementing Síminn’s customer service app for that operating system as well as Nefna, so I was not prepared to take the time to dive into native Android programming.  But with a long background in web programming with JavaScript / HTML / CSS, which can be used to implement apps, I asked the competition committee whether solutions based on such cross-platform web technology could be submitted; such solutions were approved and added to the competition terms.  I then decided to go for it and registered.  The committee emphasized that each team should include technical people, visual designers and marketers alike.  I registered as an individual and was placed on a team with Einar Jón Kjartansson from the Iceland Academy of the Arts and Hlín Leifsdóttir from the University of Iceland.

After the opening of the competition I sat at the kitchen table at home, pondering the possibility of making some kind of game based on Íslendingabók, since I am aiming for game design studies together with Edda Lára.  Though I have my mind set on those studies, I am not much of a game player and have limited experience of game programming, but I find the subject enjoyable, and Edda Lára and I are, among other things, interested in combining our backgrounds to create fun educational material in the form of games; she teaches English at Fjölbrautaskólinn í Ármúla.  So I wondered whether this was an opportunity to try my hand at game making, and it quickly became clear that the generations in the family trees that Íslendingabók is built on could easily represent a level in some kind of game.  This idea was more appealing than those that had come up before, and after mulling it over with Edda Lára I decided to implement a quiz game where each level would pose questions about relatives from one generation, with the game Song Pop as a model for the presentation.

The interface of Skyldleikur (the Kinship Game) is built on jQuery Mobile, and the quiz content is produced with calls to the Íslendingabók API in JavaScript, with jQuery easing the work.  Direct calls to the API are not possible when the game is hosted as a website or web app, for security reasons (the same-origin policy), so the web server hosting the game receives the requests and relays them to Íslendingabók.  The server is implemented on top of node.js and hosted at Nodejitsu.  PhoneGap Build is used to package the game for the app stores – the iTunes App Store and Google Play – and in that environment the security restrictions do not get in the way, so direct AJAX calls to the Íslendingabók API are possible.  The source code of Skyldleikur is version controlled on GitHub, and it was written in the Brackets editor, which is still at an early stage of development and had its pros and cons; the main advantage was active linting of the code with JsHint (JsLint is too anal), but I did not use the live preview feature that the editor mainly flaunts, since I used the aforementioned web server for communication with the web service.

A few days before the deadline, Einar Jón came with proposals for the game’s graphics, which were very appealing, and I used the final hours to fit the graphics into the game, which made it much more attractive.  Then it was time to launch the game, and a kind of competition for attention on social media began, as the committee put great weight on marketing, which would be included in the evaluation of the entries.  Though we did reasonably well gathering followers (likes) for Skyldleikur, it quickly became clear that we stood no chance against those who were strongest in this arena of popularity contests.  Among other things, I made a promotional video that got quite a few plays, Einar Jón made an election-style propaganda image, and Hlín posted across the social network far and wide.  Two media outlets approached us for interviews for articles that appeared in Séð & Heyrt and The Reykjavik Grapevine.

The outcome was that Skyldleikur took second place, and when Kári Stefánsson handed us the prize I was pleased to hear him say that he found it the most entertaining entry in the competition.  The guys who won were impressively industrious in promoting their app in foreign media, peaking when Jimmy Kimmel released a sketch making fun of their incest-prevention feature, and remarkably this coverage drove a lot of traffic to Skyldleikur, which peaked the day the comedy video went around the net; about seven hundred people visited the game that day and most of them actively played it.  Since then there has been steady activity in the game, for a long while a few hundred unique visits per day, which has slowly ebbed; as this is written the visits are around sixty.  Over six thousand people have downloaded the game from the App Store and just under a thousand from Google Play.  It has been gratifying to hear through the grapevine that people really enjoy it, and it would be fun to add features to the game, for example more varied questions and a dynamic family tree, which was originally in the picture.

There is something special about playing a game that asks questions about yourself.