It seems our concept of the Game of Likes is moving towards becoming no game at all. Though we’ve had quite a few ideas for games to be played with liked and posted media items, we’re leaning towards focusing exclusively on playful exploration of the history of one’s online activity, in the form of interaction with multimedia from the various social websites, leading to digital introspection – a term coined by Edda which I really like.
Before our last meeting with our supervisor, Edda literally woke up with the vision of creating an interactive installation based on this project for an art gallery, where exhibition guests could form cubes based on their media likes and have them displayed on a large screen, along with cubes created by other guests. We’ve since learned that this idea could be compared to projects like Listening Post and the event of a thread. Edda presented this idea in our last meeting, and I discussed the possible proper games that could be created in this context; the simplest one being a quiz where two participants receive challenges involving questions like which of three multimedia items the opponent likes best, or what order of preference the opponent chose for the given items – an interaction similar to, for example, the Big Web Quiz.
Our supervisor really liked the interactive installation idea. He also pointed out that those games I described could in fact be created by analog means, while an interactive installation based on online multimedia items collected from personal activity databases would be unique and exclusive to the digital domain. I must agree, and in fact I referenced already existing paper versions of some of our game ideas in the last Game of Likes post. I also like the idea of an interactive installation, though as far as I’m concerned it doesn’t have to be in an art gallery; it could just as well live in the app stores and on the web, or in both contexts (I’m flexible like that ;).
Though we will most likely keep the current focus on playful exploration for the coming weeks, I still cling to the idea of offering simple games to play within this environment, for those interested in participating. In any case, those ideas will have a low priority for now, and the remaining time for our thesis project will probably not allow any such implementation.
Thinking inside the box
While tying my shoelaces in the men’s room at a local swimming hall, the small shoe lockers in front of me caught my attention. Each locker was box shaped, which reminded me of the like-cubes we have been thinking about. One of the lockers was open, so I could see inside that box, and that led to thoughts about the possibility of not only allowing the like-cube creator to marvel at its exterior, but also letting her look inside the cube, where she could write a note on why the cube is decorated with the selected multimedia items and what significance they hold for her. When I came home and told Edda about this idea, she immediately connected it to a message in a bottle; the act of writing a message inside the cube, before sending it off into the ether, could be compared to the act of placing a message in a bottle before throwing it into the sea. Of course!
This message would be written on one side of the cube’s interior, like personal graffiti on the wall of a bathroom stall, and the other sides would be free for other types of communication, like location, tags and general profile information.
Automatic exploration for the people
Being annoyed by the buttons I had designed into the user interface, thinking they reminded me more of a tax return form than anything playful, I’ve thought about how to get rid of fixed-position buttons as much as possible. In the so-called breeding interface, where media item selection is performed by requesting random new or related items, I was thinking about allowing the user to tap each surface to exchange it for a new one, instead of pressing the fixed Roll button that swaps out all items that are not held at once. This could be an interesting form of interaction, but the tapping gesture on each surface is reserved for showing the media item in full size, as far as the screen allows. Showing the media items in full size, with a reference to their origin, is I think important both for allowing full detail to be viewed and for giving due credit to the source of the content.
When discussing with Edda how this particular interaction could do without a fixed button, she came up with the idea of having the media surfaces flow automatically in and out, each time showing new items from your online activity, instead of requiring you to manually press the Roll button. The only required interaction would be to press the lock on each item that you’d like to keep, while allowing the other free items to keep flowing. Only when all items had been locked would they gather into a cube, instead of when the Create button would be pressed.
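To make this concrete, here is a rough JavaScript sketch of that flow-and-lock logic – all names (`createBreeder`, `nextItem` and so on) are my own illustration, not the actual implementation:

```javascript
// Hypothetical sketch of the button-free breeding flow: unlocked slots are
// periodically swapped for fresh media items, locked slots stay put, and a
// cube forms automatically once every slot is locked.
function createBreeder(slotCount, nextItem) {
  const slots = Array.from({ length: slotCount }, () => ({
    item: nextItem(),
    locked: false,
  }));
  return {
    // Called on a timer: replace every unlocked item with a new one.
    flow() {
      for (const slot of slots) {
        if (!slot.locked) slot.item = nextItem();
      }
    },
    // The only required interaction: lock an item you want to keep.
    lock(index) {
      slots[index].locked = true;
    },
    // Once all items are locked, they gather into a cube.
    cubeReady() {
      return slots.every((slot) => slot.locked);
    },
    items() {
      return slots.map((slot) => slot.item);
    },
  };
}
```

The point is that the timer-driven `flow` replaces the fixed Roll button, and `cubeReady` replaces the Create button.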
For rotating the cube insides, I had thought about navigation buttons in a footer toolbar, either one button for each direction or one single button that would cycle through all possible movements. While the exterior view of the cube can be rotated by simply touching its sides, the touch gestures for the surfaces on the insides of the cube might be reserved for the content being displayed, like scroll and map views – hence those thoughts about navigation buttons. In an effort to be rid of fixed buttons altogether, I’ve now implemented grip surfaces of sorts for the insides, represented by a dotted pattern, which can be reserved for navigation inside the cube, while the rest of the surfaces are free for other kinds of touch movements.
State of play
The implementation has reached this navigation of the cube insides, which are still to be decorated with facilities for content entry, like areas for text input and a map for location placement. A screen recording of a walk through the current state of the implementation can be seen in this video:
Update: The above text was written four weeks ago, one sleepless night, then saved as a draft and forgotten. Since then I’ve uploaded a couple more screen recordings, which aren’t entirely new either, but show some other aspects of the project’s progress:
As the implementation stands now, it is possible to click the cubes as they rain down like in the above video – in an area which we like to call the Corral – bringing them into the foreground and entering their insides, to view what message their creator has written.
This behaviour of the cubes falling down is inspired by raindrops sliding sporadically down a window, which is quite common in Iceland, where the rainfall can often be more horizontal than vertical. That sporadic behaviour was also chosen for performance reasons, as older mobile devices can’t smoothly animate multiple objects at once – at least not as I’m currently implementing it using Famo.us, which might well be reviewed with performance in mind – so now there is an animation dispatcher that chooses one cube at a time to animate.
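The dispatcher idea can be sketched roughly like so – a plain JavaScript illustration with made-up names, using round-robin selection where the actual implementation picks cubes more sporadically:

```javascript
// A minimal sketch of a one-cube-at-a-time animation dispatcher; names and
// the round-robin choice are illustrative, not the actual Famo.us code.
function createDispatcher(cubes) {
  let current = -1;
  return {
    // Pick the next cube and hand it the animation slot, so only one
    // cube is ever animating at a time.
    tick() {
      current = (current + 1) % cubes.length;
      for (let i = 0; i < cubes.length; i++) {
        cubes[i].animating = i === current;
      }
      return cubes[current];
    },
  };
}
```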
One aesthetic suggestion our supervisor gave us (in a Skype meeting after the one mentioned above) is that the cubes falling down, as if under the force of gravity, doesn’t seem to make much sense against the star-field background, which suggests the weightlessness of the vacuum of space. I’m thinking about giving that a quick fix by changing to a background image of the Earth seen from the Moon. It might also give the interesting effect of the cubes appearing to be shot up from the Earth, where their creators live.
Currently I’m looking into connecting this to an online database, so users will be able to authenticate and save their cubes, for others to see. For that I’ll begin by using Firebase and their AngularFire library.
Alright, now I’ll publish this post already and get back to coding so there’ll soon be something to try out on likesplay.com !
Fourteen scanned drawings for thirteen works have been added to the website alaska.is. My brother Gingi scanned these drawings, which recently turned up, and sent them to me last November. Only now have I taken the time to get them onto the website. The works are the following:
For the time being, the drawings are not available in the Deep Zoom format, as they were before, since the zoom.it service is down after Microsoft discontinued it. Good people occasionally volunteer to bring it back online – we’ll see how that work goes, but I may end up hosting the Deep Zoom data myself and generating it with an old script I originally intended to use. Until then, the drawings are published on the website in medium (x1200) resolution, with links to versions in full (600dpi) resolution.
The Internet is full of social websites, offering media of various kinds and the ability to show our appreciation in some way or another; liking, hearting, starring and so on. The reason we interact with content in that way may be to show our appreciation to the content provider, to bookmark it for later reference, or generally to serve our compulsive need to collect. But otherwise we may not do much with all those likes and hearts and stars, because we’re constantly exposed to so much new content that we continue responding to, with our stars, likes and hearts.
What if we were given the opportunity to play with the content we’ve liked in the past, interacting with people who like similar things, across our usual social connections, discovering new things we may like and by the way learning more about ourselves?
Game of Likes is an idea that has been brewing for over two years. The initial idea is of a game that offers the ability to connect the various social (media) networks a player may use – Tumblr, We Heart It, Pinterest, Instagram, Flickr, YouTube, SoundCloud, Hype Machine, Spotify, 8tracks, deviantART, 9gag and so on – and play mini games based on the liked or posted media items from those networks, alone or against opponents who are friends or total strangers. The idea has since evolved into an environment where you gradually specify your interests and are assigned to groups of anonymous players that are regularly split up and reformed with progressively like-minded players, where you share your likes, with your likes™, as we like (!) to put it. As a new player, or when the game has only a few players, you may not find yourself among like-minded people, but as you progress through the different groups and deep-dive through the network of players, you may find yourself increasingly exposed to content that you find agreeable, and along the way make connections with people sharing that content.
Later the idea evolved further, from this concept of assigning players into groups, to simply offering each player the ability to propose a particular game to play, where others can jump into whichever of the proposed, open and active games catches their fancy. The concept of assigned groups is still appealing, so the idea’s evolution may take a few more rounds or shift towards something entirely different. I’m going to work on this with my wife, Edda Lára Kaaber, as a Masters Thesis project in the Games track at the IT University of Copenhagen. We have written a thesis proposal, which can be read at:
Phillip Prager has agreed to be our supervisor for this project, which we are very grateful for. He has connected well to the Game of Likes concept and has already provided us with a variety of interesting reading material.
In the coming weeks I’ll focus on the initial implementation steps towards realising this idea, along with a couple of other projects. The most basic feature to implement will be the ability to connect to the various social media networks and incorporate media items from them as objects for play in the game of likes. We envision the media items being arranged in three dimensions, one on each side of a cube. Such cubes we call like-cubes. Initial steps along that path have already been taken as a final project in a Procedural Content Generation (PCG) course at ITU, where, along with Kasper Fryland Appeldorff, I implemented a prototype that collects media items from a given Tumblr account and presents four of them at a time. The mechanics of the interaction are supposed to resemble those of slot machines, or so-called bandits. The user can freeze any of the items and choose to roll the free, non-frozen items, which are then either replaced with a random selection from the items associated with the specified accounts, or replaced with a selection influenced by the tags (if any) of the items that have been held, or frozen. More details can be read in the report we wrote on this project:
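The roll mechanic can be sketched in JavaScript like this – a simplified illustration with invented names, not the prototype’s actual code:

```javascript
// Hedged sketch of the bandit-style roll: frozen items survive the roll,
// free items are re-drawn, preferring pool items that share a tag with
// the frozen ones when any such items exist.
function roll(slots, pool, random = Math.random) {
  // Collect the tags of all frozen items to bias the new selection.
  const heldTags = new Set(
    slots.filter((s) => s.frozen).flatMap((s) => s.item.tags || [])
  );
  const related = pool.filter((item) =>
    (item.tags || []).some((t) => heldTags.has(t))
  );
  return slots.map((s) => {
    if (s.frozen) return s; // frozen items stay untouched
    // Prefer tag-related items when any exist, otherwise draw at random.
    const source = related.length > 0 ? related : pool;
    const item = source[Math.floor(random() * source.length)];
    return { item, frozen: false };
  });
}
```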
This process of collecting media items into sets that will be used to decorate the sides of cubes can be viewed as playful in itself. Beyond that, we have brainstormed various games to play with such cubes. One of the simplest games, and possibly the most relatable, is a quiz where two participants are presented with three like-cubes, one of which has been created by their opponent, and the aim is to guess which of the three it is. Points are given for correct guesses, and as the quiz progresses, the participants could gain a better idea of their opponents’ likes; they could increasingly become experts in each other’s likes.
Another idea is to have the players arrange the cubes into narratives by connecting them with text fragments. When presented with this idea, some have received it with skepticism, but while shopping in Copenhagen before last Christmas, I came across the game Rory’s Story Cubes, in the bookstore Bog & idé, which is based on pretty much the same idea; the main difference being that the story cubes offer a predefined set of images, or icons, while our like-cubes will have dynamically generated sets of images drawn from each user’s social media accounts. In the same bookstore I also found a series of games that involve the discovery and expression of the players’ personalities, which will also be a goal of the Game of Likes, as discussed in more detail in the thesis proposal linked above. Those findings I think further validate that we are on to something with this Game of Likes concept.
Initially the idea has been to have these games playable online, with the players usually in different locations, interacting over the internet. After seeing the possibilities of developing console-like games for the Chromecast device, where players are located in the same place at the same time, in front of a television set and using their smartphones as game controllers, I’ve also become interested in exploring the possibilities of using that technology for some playful activities within the Game of Likes.
In an Artificial Intelligence course at ITU – Modern AI for Games – the Interactive evolutionary computation (IEC) project Picbreeder, and its predecessors, inspired me to explore the feasibility of utilising the powerful properties of Compositional pattern-producing networks, evolved through NeuroEvolution of Augmenting Topologies (CPPN-NEAT), in the formation of unique waveforms to construct oscillators producing interesting timbres. Those oscillators could then be used as novel building blocks in complex synthesizers. Furthermore, I pondered the possibility of using the evolved CPPNs to construct those synthesizers.
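To give a flavour of that idea, here is a toy JavaScript sketch: a tiny fixed “network” of composed activation functions, sampled over one period to yield a single-cycle waveform that a wavetable oscillator could loop. The two-node structure here is my own example, standing in for a network actually evolved with CPPN-NEAT:

```javascript
// Gaussian activation, one of the function types typically found in a CPPN.
const gaussian = (x) => Math.exp(-x * x);

// Sample a fixed function composition over one period to get a waveform.
function sampleWaveform(length) {
  const wave = new Float32Array(length);
  for (let i = 0; i < length; i++) {
    const t = (i / length) * 2 - 1; // input in [-1, 1), one period
    // Hidden node: a sine; output node: gaussian of the hidden activation.
    const hidden = Math.sin(Math.PI * t);
    wave[i] = gaussian(2 * hidden) * 2 - 1; // rescale into [-1, 1]
  }
  return wave;
}
```

Evolving the composition (adding nodes and connections, mutating weights) is what CPPN-NEAT contributes; this sketch only shows how a given network maps to a waveform.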
To facilitate the beginning of this exploration, I developed a prototype called Breedesizer, for evolving populations of waveforms with the CPPN-NEAT technique, which is accessible at:
It proved to be an interesting and enjoyable process to guide the evolution of waveforms with this technique, and it is an exciting prospect to see how the variety of generated waveforms can be used with different sound synthesis techniques, as can be read about in the Discussion section of the report.
Here are two quick screen recordings of the Breedesizer prototype in use:
It was great fun playing with this, and occasionally during the past years I’ve thought about doing something interactive based on procedurally generated mountains like this. One idea was to port the implementation to WebGL, when that standard was introduced. A few years ago, when looking at this project again, I changed the code so it would compile on Ubuntu Linux, but originally it was compiled for a then-current Windows version (’95 or ’98, I guess).
Today I read that the visual artist who created a fake game for Her, a movie I highly recommend, has revealed a game created with Unity called Mountain, with “questions designed to be far more psychologically invasive than anything Facebook wants to know about you”, where you play as a mountain and answer the questions “by creating a drawing, which is then used to help create a procedurally-generated mountain”. This resonates a bit with ideas Edda Lára and I have been kicking around, that we might use for our masters thesis project, though they have nothing to do with mountains.
That game will definitely go on my to-play list. In the past years I haven’t really played games actively, but there are now a few titles on that list that I find very interesting. The not-so-much-of-a-game Dear Esther and Flower (on PS4 – I do seem to like exploratory games), along with the platformer 140, are the only ones I’ve recently given a proper try, and I really enjoy those. Then there are The Stanley Parable, Portal (2), FEZ and Flow (on PS4), and Antichamber (which I just bought in the current Steam sale – the thesis ideas mentioned currently have something to do with chambers), and I might go for the platformer Super Meat Boy – yes, we were just watching Indie Game: The Movie, and I just purchased a copy of Us and the Game Industry, which is still on the to-watch list… I really hope Journey will make it to the PS4; everything thatgamecompany does seems to be very interesting.
During a Data Mining course at ITU I participated in a group project that aimed at identifying places of interest by clustering photographs from Flickr by their location and discovering frequent patterns in the places individual photographers visit.
Initially the idea was to cluster the photographers themselves by their common tags of interest, but later on in the process we moved on to this concept of automatically finding spots people flock to, by looking at where they commonly use their camera, as suggested by my wife, Edda Lára, when I was discussing this project with her. This could be useful for tourist offices in recommending places to visit or avoid, or for city planners to identify load points where special emphasis should be placed on installing public facilities, etc.
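As a simplified illustration of that concept – not the method we actually used in the project – photo coordinates could be binned into a grid, with densely populated cells treated as places of interest:

```javascript
// Toy sketch: snap each photo to a grid cell and keep the cells where
// enough people used their camera. Cell size and threshold are made up.
function findHotspots(photos, cellSize, minPhotos) {
  const cells = new Map();
  for (const { lat, lon } of photos) {
    // Snap each photo's coordinates to its grid cell.
    const key = `${Math.floor(lat / cellSize)},${Math.floor(lon / cellSize)}`;
    cells.set(key, (cells.get(key) || 0) + 1);
  }
  // Keep only cells that gathered at least minPhotos photos.
  return [...cells.entries()]
    .filter(([, count]) => count >= minPhotos)
    .map(([cell, count]) => ({ cell, count }));
}
```

A real clustering approach avoids the arbitrary grid boundaries, but the principle – people flock to where cameras cluster – is the same.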
In this project I had the task of implementing the photo clustering and writing about it in the report, while teammates Julian Stengaard and Kasper Appeldorff implemented the data gathering, frequency analysis and visualization, along with writing the rest of the report, which can be read here:
- a quick presentation for an oral exam based on this project.
Before proceeding on to the group project, we were to complete an individual assignment that involved using “the data set … created during the first lecture, through … filling in a questionnaire”, and we were instructed to “formulate one or several questions to be answered based on this data, write the code and carry out the analysis necessary to answer them. … apply at least two different preprocessing methods, one frequent pattern mining method, one clustering method and one supervised learning method. (For example, normalization+missing value replacement+Apriori+k-means+ID3, though many other combinations are possible.)” – according to the project description.
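As an illustration of two of those required steps, here is a small JavaScript sketch of min-max normalization followed by a tiny one-dimensional k-means pass – example code of my own, not the assignment’s actual solution:

```javascript
// Preprocessing: rescale each value into [0, 1] (min-max normalization).
function normalize(values) {
  const min = Math.min(...values);
  const max = Math.max(...values);
  return values.map((v) => (v - min) / (max - min));
}

// Clustering: a minimal 1-D k-means, starting from given centroids.
function kmeans1d(values, centroids, iterations = 10) {
  for (let iter = 0; iter < iterations; iter++) {
    // Assignment step: each value joins its nearest centroid's cluster.
    const clusters = centroids.map(() => []);
    for (const v of values) {
      let best = 0;
      for (let c = 1; c < centroids.length; c++) {
        if (Math.abs(v - centroids[c]) < Math.abs(v - centroids[best])) best = c;
      }
      clusters[best].push(v);
    }
    // Update step: move each centroid to the mean of its cluster.
    centroids = clusters.map((cl, c) =>
      cl.length ? cl.reduce((a, b) => a + b, 0) / cl.length : centroids[c]
    );
  }
  return centroids;
}
```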
In this report I will reflect upon the process of coming up with an idea for a game to be developed with the Unity game engine, as a programmer in a group of five people attending the Game Development course at the IT University of Copenhagen. Brainstorming was a large part of the process, and that part alone had several phases, where ideas shifted shape many times over and prototypes were created and thrown aside. A large part of this report will accordingly be devoted to that process, along with a briefer coverage of the technical implementation, where I had a delightful immersion in a technology new to me, opening doors to new paths to explore.
With darlings set aside rather than killed – as I prefer to see it – the group settled on a concept to follow through, and the development of a deliverable product proceeded without hiccups, consisting mostly of the linear completion of tasks we commonly agreed upon and prioritized. The outcome is a fairly polished game offering rather basic gameplay, though with a few twists, that may appeal to a broad audience of casual players.
The development environment and division of roles
Even though this was an initial surprise, I was easily convinced of the benefits this requirement would bring: all team members were most likely to be familiar with the Unity game engine and so would have a level playing field, gaining experience with one of the most widely used tools and languages in the field of game development. In a previous Game Design course I led the team into developing a custom server-side solution and a client app, where no game engine was used whatsoever, utilising my forte as a web developer, where admittedly other team members may have had a hard time getting up to the speed I had with the technology. Even though the project from that course may be considered a success, the choice of technology may not have been fair to everyone.
When it came to dividing roles within the group, I was open to working on tasks other than programming. Sound design interested me, for instance, as I’m an amateur iPad composer and a compulsive audio app purchaser. I was also interested in 3D modeling, as I’ve previously done hobby projects in that area and was at the time enrolled in a 3D course at ITU. Taking on those roles would have been a welcome break from programming, even though that is still my main passion, but it turned out to be logical that I took on the role of one of the two programmers, as that is where my strongest skills and experience lie.
The student groups were assigned a limited number of Unity Pro licenses, valid for the duration of the semester, and we utilized them, among other things, to communicate with a Unity Asset Server that we set up on a cloud computing instance. It gave the development process a more integrated feel to have source control management within the development environment, though we probably would have done just fine using, for example, a Git repository and an external tool. The three licenses our group was assigned went to the two programmers and our game designer and graphics artist. A fourth license for our 3D modeler would have been nice, so everyone could have communicated within Unity through the Asset Server, but we solved that in the less fancy, ad hoc manner of delivering 3D models through our private Facebook group. #firstworldproblems
Initially we discussed what kind of play we wanted to create and for what platforms. One style fancied was that of exploratory games, like Dear Esther and Flower. We discussed adhering to dogmas such as having no scoring system, no narrative or textual explanations but rather only visual cues, and we even considered exhibiting permadeath as a game characteristic.
Though we are interested in mobile touch devices as platforms for games, we did not want to exclude other platforms either and so kept the choice open until rather late in the process. Even though we have now settled on the Android and iOS mobile platforms, the game actually works reasonably well also with mouse input in a desktop environment, which we indeed use for development and initial testing iterations.
The first brainstorming sessions were sparked by a photograph of a lighthouse in Iceland that has an unusual design, resembling a space pod of some sort. We toyed with the idea of being able to fly a lighthouse like this, exploring and discovering the environment, collecting objects that might count towards completing a goal.
One concept had the player surrounded by seven platforms, one of which is populated by a lighthouse that can be entered and flown towards flashing lights on distant shores, going through swarms of obstacles and energy units, finding that those distant lights are other lighthouses that can be brought back to one of the home bases. Each of the distant lighthouses flashed one of the seven colours of the rainbow and when all of them had been collected to their bases, their combined lights would form a white boulder or beam of light that would lead to the next level.
Several other concepts were based on that lighthouse photo, involving among other things guiding ships, rising and falling sea levels, and even going through different environments on each level, like underwater, through forests and space.
Those concepts based on the lighthouse may not have been tangible enough, and too abstract to grasp, and we received the encouragement to simplify and focus our ideas. Subsequently the idea sprang up to base a game on the classical elements of fire, earth, water and air. That led to the concept of fixing planets ruined by their inhabitants by applying the missing elements to them. From that we got the idea of a planet trading business: the game takes place in the context of a market for the buying and selling of renovated planets, where planets harbouring elements in high demand are more valuable than others. The player has the role of a planet businessman, taking on contracts to prepare planets according to his clients’ demands and collecting revenue from successfully completed contract work, enabling her or him to sign up for grander contracts offering larger payouts.
As much as I had already invested enthusiasm in the concepts based on the lighthouse, it was a rather easy transition to shift focus to a game based on the classical elements, especially given the environmentalist context of ruined planets and the capitalist context of selling planets with elements in high demand. Still, I was fond of those lighthouse concepts and resolved to explore them further at a later convenience, rather than kill them on the spot, feeling well equipped by the newfound powers of Unity.
The mini games
Having settled on this concept based on the classical elements, we began to define gameplay within that context. We had the businessman / repairman and a vision of his home base as some sort of garage floating in outer space. That was going to be the game’s center and the point of entry into mini games where the player would go on missions to gather elements.
The initial thought was to have four mini games, one for each of the elements, with different challenges and perhaps even different game mechanics for each. We soon realized, though, that creating four mini games plus the central game management in the garage would probably overshoot our capacity for the semester, and we halved that potential overscope by discussing the feasibility of two mini games, where one would offer the collection of the more tangible elements of earth and water, and the other would inhabit the less tangible air and fire.
As I had become fascinated by an art installation design of a constellation done for the LAGI renewable energy initiative, I volunteered to create a prototype of a mini game where the player could collect both air and fire elements in an environment inspired by that design. Another team member created a prototype for collecting the earth element, and possibly also water, where boulders were rolled with game mechanics inspired by the games Dragon, Fly! and Giant Boulder of Death.
The simplifying U-turn
In a meeting where we were presenting the two prototypes and trying them out, we had the rather insistent encouragement from our producer to simplify our concept even further, with the specific suggestion of implementing one mini game with 2D gameplay where spheres of elements would reside in each of the screen’s four corners.
Having seen that meeting as a time to flesh out a list of tasks to carry on the already begun development, I could not help being visibly irritated; an appearance not helped by a lack of sleep from doing prototype work the night before. Those feelings carried on into the next day and left me dazed by how emotionally invested I had become in this project. I felt done with brainstorming and ready to move on with development, having once developed enthusiasm for the lighthouse concept, set that aside, and again built up steam for the classical elements concept with different mini games, on which implementation work had already begun. Gameplay in 2D did not initially appeal to me either, as we were working in a mandatory 3D engine and I was thinking quite literally in the dimensions it offered.
Even though I was still rather annoyed on the second day by those repeated changes to our concepts – having personally transitioned twice from a brainstorming mode to a focus on implementation, building up steam only to extinguish it and go back to the drawing board – I could also see the sense in those suggestions, and feel that they were well received and easily grasped by the other team members. So I wrote a comment in our team’s Facebook group, stating how I could understand the new angle as a feasible option, and other team members saw it the same way. At a later meeting our producer mentioned his concern of having possibly intervened too much in our process, which at the time I indeed felt to be the case, but regardless we went along with the suggestions and steered the development towards that new angle.
Settled on a new course of action, it was time again to implement a prototype, based on the new concept of a single mini game with 2D gameplay. Here our main concern was how to visually represent the reception of elements on the subject planet. To start with something simple to implement, we decided to split the planet’s sphere into four segments, or wedges, each receptive to one element.
That representation indeed resembled slices of a pizza, and having sharply defined quarters of the sphere, each only capable of receiving one element, was regarded as making little conceptual sense. So we discussed a representation in the form of nested rings, one for each element. How to implement those rings was not clear to me; I thought initially about stacked discs, like in the Towers of Hanoi puzzle, and later became fond of the form and mechanics of the gyroscope, which the game uses in its current form, with nested rings rotating randomly.
Development tasks and their division
Programming effort was largely divided between the garage and the one mini game of delivering elements to the subject planet. I devoted most of my time to the latter, focusing on the touch interaction with elements, their reception on the center planet, and managing the progression of play in time and the approach of goals (rendered with progress bars). Highlights of that process include the gyroscopic rings of element reception; a module for spawning asteroids as obstacles, fully configurable in the Unity editor; configuration of obstacles for each level in the editor, by a designer without any programming; handling multiple simultaneous touches of the elements, offering cooperative play; and making victory and failure exit animations with Mecanim, a non-programming task I jumped on to try out some of the movement through 3D space I so much desired in the game.
Later on I joined work on the garage, implementing a HUD in the form of a clipboard, using Unity’s Mecanim system. I also added the ability to look around the environment by touch and to buy elements from the shop before going on missions, among other tasks, like finding planet textures.
The modeling work, graphics and code from other team members flowed smoothly into the project via the Asset Server. During development it was good to have as a reference a game design document created by one team member, which among other things outlined aspects of the economy and defined interactions between elements that are in fact yet to be implemented in the game.
One of the seemingly most difficult tasks of development has been choosing a name for the game. On a cycling tour around Refshaleøen I came across a barrack marked with the sign “Copenhagen Suborbitals”, which got me thinking about the word “orbital” and subsequently suggesting the name “Out of Orbit”. That name is actually used by a small game currently in the Google Play store, but it has stuck as a working title. Other ideas include Elementix, Elexia, Elementum, Elemia, Elemental Architect, Elemixir, Elemade, Elemaden, Eletrade, landMiller, landMaker, landMakia, planetShop, orbitalis, orbitalia, or even just suborbitals.
We were warned beforehand that difficult moments could arise in the process, but I was not really worried about that, as I consider myself open to pretty much all types of games and not tied to any specific genre of play. I’m not much of a game player in the first place, having reached my hardcore peak in the early 90’s when I finished the permadeath game Prince of Persia and all episodes of Commander Keen, so I felt open to follow along wherever the process would take us. Regardless, ideas sprang up early in the process that I quickly clung to and built up enthusiasm for, only to have to suppress them when other ideas surfaced that were possibly more realistic and generally acceptable. Those changes indeed proved to be sources of surprisingly difficult moments for me, though fortunately they did not last long.
Overall I feel that the group worked very well together and no conflicts arose among the members. Only I had to make an effort to stay in line with the level of diplomacy and air of serenity within the group, having invested personal enthusiasm in what I must admit were darlings of the moment. Along with the immersion in Unity’s technical intricacies, it was a great learning experience to go through those development phases, even though I have been around the block once or twice in a previous career as a software developer. It is especially interesting to have taken part in the evolution of ideas through all their phases, and to compare that process to how other known works have changed dramatically from their initial concept to their published form.
Do I think we could have had more interesting gameplay with the multiple mini games previously prototyped? Yes. Do I think we have a more manageable project and a unified experience with the focus on the current mini game? Also yes. And the experience the game offers in its current state is enjoyable enough to keep me and other team members interested in continuing work on the game, to make it presentable for the Google Play and Apple App Stores.
IT University of Copenhagen
Game Development course – instructors:
Henrike Lode and Mark J. Nelson
spring 2014 Björn Þór Jónsson (firstname.lastname@example.org)
 We used a nano compute instance at Greenqloud to host the Unity Asset Server, where I currently host other projects and web sites. The Asset Server is doubly backed up, once to another disk partition and then to CrashPlan Central, and I purchased more storage for the compute instance as our project has reached a size of ~700MB. http://greenqloud.com/
 I had become fascinated by the idea of swarms of obstacles in a game after seeing this video of flocking birds: http://vimeo.com/31158841 – and I actually found implementation examples for Unity of the Boids flocking algorithm; see footnotes in the document Towards the Lighthouse linked below.
Project done in the Introductory 3D course at ITU.dk in spring 2014
The concept behind the project discussed in this report has taken many shapes, as ideas have brewed during the course of this spring semester. Being a Games Technology student, it could have been a good fit to focus on creating some sort of game assets, but I was delighted to learn about the freedom offered in the Introductory 3D course. If the outcome of the project leaves the viewer wondering about what just happened, then it is a success.
Though the first idea was inspired by a game concept I had just started working on, my initial aim was only to create a visual experience rather than a real asset usable in a game. The concept then changed as new inspiration struck, but the aim was still to create visuals only. Later in the brainstorming process, that new concept led to ideas for a possible gameplay, so the intended pure visual was inspiring the creation of an interactive experience and the process had come full circle: starting with an idea based on game development, aiming for an art piece, and evolving it into something that inspires further game development.
Concept evolution through constant inspiration
The evolution of ideas for this project, from a focus on shapes to the components of light, was fueled by objects encountered in everyday life.
These days I’m constantly exposed to children’s toys, and one toy in particular, offering the matching of differently shaped objects to correspondingly shaped holes, has provoked thoughts about basic forms and how they can be the basis of more complex shapes. Everything in the universe has evolved into its current shape from simpler building blocks, which are themselves composed of smaller, more primitive elements. An individual’s abilities likewise develop from simple tasks to more accomplished movements and thoughts.
This evolution and constant change inspired an idea for a visualisation where we would follow a primitive shape travelling from the core of a sphere through a pipe to the surface; during the travel the shape would morph into a more complex shape resembling the outline of a country, which we would then see as a patch on the sphere’s surface.
Alongside this visualisation idea I had taken small steps towards creating a game where the player would have the task of matching increasingly complex shapes to their corresponding slots, with the most complex shapes resembling the outlines of continents. That concept was placed outside a sphere rather than inside it, floating in orbit around the Earth. When the continental shapes were matched, they could be seen falling onto the sphere’s surface, forming a collection of the Earth’s continents.
All the ideas connected to this project were based on shapes and their morphing into others, until I noticed a globe model when walking past an office window at ITU one evening. I became fascinated by this kind of object, which I had in my bedroom as a child, and started thinking about the source of that globe’s light. In the case of the model, it is obviously a regular light bulb on the inside that lights up its surface. A few years ago I saw glowing lava light up the night sky in Iceland, during the notorious “Eyjafjallajökull” eruptions, and there the light source was molten rock, lava, flowing from the Earth’s core. Thus was born the possibly surrealistic concept of light traveling from the core of the Earth, through lava tubes, to the surface.
This concept of light inside and outside the Earth led to ideas for a possible gameplay, connecting the seven components of light that can be seen in the rainbow with the seven continents of the Earth. Each continent would have seven openings, or portals, into tubes leading to the Earth’s core, each in one of the seven rainbow colours. The player would pick a portal that would lead her through a tube to the core, where light particles could be seen floating around, each emitting light in one colour of the rainbow. There the player would have the task of picking a light particle with the same colour as the portal that was entered, and then picking a tube opening leading back to a portal of that same colour. The indication of which tube to pick would be its shape, which could be learned by travelling through one tube of that shape to the core. If the player picked the correct type of tube, the light particle would travel through it and shine at the portal on the surface, and the player could proceed to another portal; if not, the light particle would travel to the surface and back to the core, leaving the player to have another go at completing a portal.
During a presentation of these concepts to students in the Introductory 3D course, the connection between the fitting of shapes and the matching of coloured lights was pointed out to me; both are puzzles that require the player to find the correct fit. I was glad to hear that others would see some coherence in these concepts, where I could just as well have expected them to be perceived as a confusing soup of ideas.
As an extension to this coloured light chase, an array of globe models could be set up, each textured with a paleogeographic map showing how the Earth may have appeared at various increments of time. In this setting, a game could revolve around fitting a light particle to each portal on one globe before proceeding to the next level, in the form of another globe with a map showing the continents’ layout in another time slice; in the process the player would gain some idea of the Earth’s tectonic evolution. Though the game would not be branded as educational, it might have some educational value as a by-product.
Realising the concept discussed above involved modeling, texturing, lighting and animation, detailed below.
The first task in the modeling process was to create the framing for the globe model. To solve that task I was advised to create a circular arc with Maya’s Arc Tools and extrude a box along it. The base and stand were created from cylinder primitives, shaped with soft selections on their vertices.
It was fairly simple to model the globe itself from a sphere primitive, but the twist was that its surface had to be visible from both the inside and the outside. To achieve that, all faces of the sphere were extruded inwards and the normals of that inner layer were reversed to face towards the globe’s core. The sphere was created with an extremely high polygon count (198394 faces for both layers) simply to have flexibility in where to cut holes for the tubes that were to pass through it.
Modeling the tubes started with creating CV Curves to define the path along which to extrude their profile. Four CV Curves were created by adding their control vertices in spiral shapes, in one of Maya’s orthographic views (top). In a perpendicular view, soft selections on the CVs (with falloff mode set to surface) were used to shape the curve path in 3D.
Two options were considered for creating the tube models along the curve paths: polygon extrude and surface extrude. The latter was chosen, as UVs are already created with that method, so texturing requires no further effort. Two concentric circle paths were created with the Arc Tools to define the inner and outer layers of the tube profiles. Each of them was surface-extruded along the curve paths and the resulting meshes were combined by bridging their edges at the ends. The normals of the inner tube layer were reversed to face inside the tube, as was done for the sphere.
All textures of the scene lie on the two sides of the main globe and on the inside of another, encompassing globe. The faces of the main globe’s inner layer were selected and a texture applied to them, different from the one on the outside layer. A color map and a bump map of the Earth were used on the inside; the texture happens to be flipped there, which could easily have been fixed by flipping the texture in an image manipulation program, but this was considered a good surrealistic effect. The outside faces of the globe were textured with a bathymetry map of the Earth, for its nice visual effect of black continents and light blue sea. A cloud map composed of satellite images of the Earth was applied to a sphere encompassing the whole scene, with normals facing inwards.
Different kinds of textures were considered for the tubes, ranging from a custom gradient resembling the layers of the inner Earth or the inside of a hala fruit, to a bump map imitating the ribs of plastic tubing, or even a tree bark texture that could give the illusion of enormous trees growing from the Earth’s surface. Brushed steel and chrome Mental Ray material presets were tried. Even the gray default material was considered, as it was visually pleasing in this context. But the final decision was to leave the tubes untextured – even though textures were easy to apply thanks to the surface extrusion – and instead tint them with the seven colours of the rainbow, using a Mental Ray material (mia_material_x) with a rubber preset. Though this decision would not make sense in the context of a game where tubes must be selected by colour, as it would make that too easy, it makes sense for this project, which is focused on the visual experience; those coloured tubes look delectable.
The lighting for the inside of the main sphere consists of one point light located at the center (core) and another point light attached as a child of the camera (in the Outliner), so that it lights up the inside of the tube as the camera travels through it. As the project uses the Final Gathering technique offered by Mental Ray, a considerable glow saturated the whole scene when rendering inside the globe; this was found to be due to light bouncing off the inside faces of the globe. The glow was mostly removed by disabling the Final Gather Cast Mental Ray option for all the faces of the inside globe layer.
For light emitting from the portals on the Earth’s surface, spotlights were created with a Light Fog effect, to simulate the beams coming from a disco light. To ease the creation of the required 29 spotlights, duplicates were made from the initial spotlight, where each duplicate was rotated into position with the pivot located at the world / Earth’s center. Only when all spotlight duplicates had been positioned was it discovered, by rendering a one-frame sample, that duplicating spotlights with the fog effect does not give good results: all spotlights rendered with a yellowish tint, even though they had been configured with the seven rainbow colours, and further research on the web confirmed that this was to be expected. So the only way forward was to manually create each spotlight from scratch, configure it with the right colour and fog effect, and transform it to the appropriate position. This was one of the most time-consuming tasks of the project.
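Positioning a light on the surface by rotating it around a pivot at the world center amounts to placing it at a latitude and longitude on the sphere. A minimal sketch of that computation in Python, with purely illustrative portal coordinates and radius (not values from the actual scene):

```python
import math

def portal_position(lat_deg, lon_deg, radius):
    # Place a spotlight on the sphere's surface from latitude/longitude
    # (degrees), with the pivot at the world origin -- the same effect
    # as rotating a duplicate around the Earth's center.
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    return (radius * math.cos(lat) * math.cos(lon),
            radius * math.sin(lat),
            radius * math.cos(lat) * math.sin(lon))

# Hypothetical portal coordinates; the actual scene needed 29 of these.
portals = [(10.0, 20.0), (45.0, -30.0), (-33.0, 151.0)]
positions = [portal_position(lat, lon, 10.0) for lat, lon in portals]
```

Each resulting position lies exactly on the sphere’s surface, so only the spotlight’s aim, colour and fog effect remain to be configured by hand.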
The most interesting lighting technique used in the project is Object Based Lighting with the MIA Light Surface Shader, where the material applied to the outside layer of the globe emits light according to the bathymetry texture used with the material – a feature offered by the Final Gathering technique – simulating the glow of a typical globe model with a light bulb inside. The result is a somewhat grainy glow on the surfaces around the globe, probably due to the low resolution of the texture used; a less grainy result was obtained by increasing the Accuracy value (from 100 to 600 in one case) for Final Gathering in the Render Settings. Higher accuracy values resulted in much longer rendering times, so the default of 100 was used for the animation batch render.
The animation was to take the viewer from the inside of the globe, through one of the tubes, to a view of the globe’s exterior. One of the first decisions was to animate the camera along the same curve as was used to create the tube it would travel through. Manual keyframe animation was initially used to move the camera in the first part, before entering the tube. Then there was the question of how to transition smoothly from the keyframe animation to the guided animation along the tube path. One option was to have two cameras and switch between them at the transition point, but I preferred to have one curve for the whole path and tweak it visually to my requirements. To that end, two other curves were created for the animation segments before and after the tube travel. The Bezier Curve Tool was used to create them, as it is more familiar to work with that kind of curve.
When the curves had been attached, the camera movement along the newly created parts was shaky. Applying smoothing to the vertices where movement was rough made the situation a little better, but when vertices were moved they acquired sharp corners again, with shakiness reappearing in the animation. Part of the problem may have been that the tube curve was initially a CV Curve while the new curves were Beziers. Converting the combined curve to a NURBS curve and smoothing the whole curve didn’t completely solve the problem; movement was still hard to control around the points of attachment and sharp corners prevailed. It wasn’t until I found the Rebuild Curve option on the Edit Curves menu that the problem was solved, with a resulting smooth curve that was easier to control.
Timing along the motion path curve was controlled with position markers. Orientation of the camera along the path was controlled with orientation markers on the curve with keyed values of Up Twist, Side Twist and Front Twist. The possibility of blending keyframe and motion path animation in Maya was considered, but controlling those twist values was sufficient.
Camera movement is wobbly during the first seconds of the animation along the rebuilt curve, but instead of ironing that out, which could now easily have been done, it was kept for its nice effect of a chaotic introduction to this surrealistic world.
Several music pieces were considered for the animation soundtrack, including Wave Dub by Dope on Plastic, Halcyon (On and On) by Orbital, Down Down by Nils Frahm, Justice One by Drokk, and S.A.D. by Mind in Motion. The last minute of S.A.D. was finally chosen, as the change in mood midway through that part is a perfect fit for the transition from the inside of the globe to its exterior, and the rave music of the first part goes somehow well with the colorful, psychedelic pipes. Importing the audio file into Maya helped synchronize the animation with the soundtrack.
Rendering commenced only a little over two days before this project was to be handed in. As I have a five-year-old desktop computer at home running Ubuntu Linux, and I discovered that Maya is offered for the Linux operating system, I decided to try using it for the batch rendering of the animation. As only a 64-bit version of Maya for Linux is offered, and that home computer was running a 32-bit version of the operating system, I decided it was time to install a 64-bit version of Ubuntu on the machine. Having done that, I had to jump through a few hoops to get Maya running in that environment.
After initiating the batch render process, it became apparent within the first few frames that this one machine wouldn’t finish rendering the animation in time at HD 720 resolution. So it was clear that I would have to utilise more machines for the task, and the lab computers at ITU were certainly an option. After rendering 500 frames in around 18 hours, the Maya installation on the Ubuntu machine became corrupt for some unknown reason (logging into another account, which resulted in a frozen login prompt and a subsequent reboot of the computer, was the start of the trouble).
So the ITU lab computers were now the only option for finishing the rendering. A few hours later I had manually created a rendering cluster by starting batch rendering processes on seven computers at ITU, each set to render a range of 200 frames. The next morning I saw that the lab machines had successfully finished the rendering, and I was able to fetch the files from my file space at ITU over the Internet (via SSH).
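The manual split of the animation into per-machine batches can be sketched with a small helper (my own illustrative code; the actual ranges were simply typed into each batch render by hand):

```python
def frame_ranges(total_frames, chunk=200, start=1):
    # Split an animation into consecutive, non-overlapping frame ranges,
    # one per batch-render process (e.g. one per lab machine).
    ranges = []
    first = start
    while first <= total_frames:
        last = min(first + chunk - 1, total_frames)
        ranges.append((first, last))
        first = last + 1
    return ranges

# e.g. 1400 frames in chunks of 200 give seven machine-sized ranges
machine_ranges = frame_ranges(1400)
```

Each tuple is a start/end frame pair to pass to one batch render invocation, so together the ranges cover every frame exactly once.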
To assemble the rendered sequence frames, in the TIFF image format, into a movie file with the audio track, I used the avconv command line tool (an ffmpeg fork). To add opening and ending titles, I imported the assembled file into iMovie, from which the final result was exported.
It’s been delightful to be introduced to the many facets of 3D rendering and get to know some of the many parts of Maya. Back in 1996 I did some 3D graphics in 3D Studio for DOS and have fond memories from that period, so it is especially interesting to become acquainted with a mature, modern tool like Maya.
Having a basic knowledge of 3D modeling, texturing, lighting, animation and even character rigging, will be a valuable tool for realising ideas for games or other interactive experiences. This project turned out to be a visual experience with the possibility of becoming a basis for something interactive: Being able to take ideas and concepts in whatever direction feasible gives a great sense of freedom.
The website alaska.is provides access to the drawing collection of Jón H. Björnsson, along with biographical material. Jón – born 19 December 1922, died 15 July 2009 – founded the Alaska nurseries and flower shops, and he was Iceland’s first landscape architect. Today, on what would have been his 91st birthday, is a fitting occasion to present this website.
Over the years there has been considerable demand for Jón’s drawing collection – which in recent years was housed in the storage of the Reykjavík Art Museum and is now in the National Archives of Iceland – including from students at LBHÍ and LHÍ. Jón appreciated this interest but also worried about the condition of the drawings, as most of them are drawn on brittle tracing paper that becomes more fragile with the years. To provide full access to the drawing collection while protecting the originals, Jón proposed that copies be made of the drawings for interested parties to consult. Photocopies were the obvious option, but I, his son and an Internet enthusiast, suggested that the drawings be scanned and made accessible on the web.
Registration and preservation
The scanning of the drawings and the implementation of the website have been some ten years in the making, though with pauses. The project began in 2004, when it was decided to move the drawing collection from my father’s drawing office to Hafnarhúsið, at the initiative of the landscape architects Einar E. Sæmundsen and Samson B. Harðarson, in collaboration with the architects Gunnlaugur Björn Jónsson and Pétur Ármannsson. I then built a simple registration system, and my father entered into it information about each and every work he handed over. That database, which was for a long time accessible at jhb.bthj.is, is the foundation of alaska.is.
In the spring of 2009 I implemented a new system on top of the same database, which manages the scanned drawings and links them to the original text records. In the autumn of 2009 I demonstrated the system at a meeting of the Association of Icelandic Landscape Architects, where my brother Gingi and I gave a presentation; by then I had made good progress on the administration module, where the content registration is done, but had yet to implement the web interface facing ordinary visitors. Since then, work on the project mostly lay dormant, apart from scanning, until late in the summer of 2013, when I picked up the thread again, reprogrammed the registration interface to keep track of the location of each work, and implemented the public web interface that can now be seen at http://alaska.is/
The design is as minimalistic as can be and built around the ALASKA logo. Personally I find it more or less passable, though it could perhaps be polished a little. Under http://alaska.is/Jon-Hallgrimur-Bjornsson
is assorted material I have found about my father, which could be better organized and added to. Recently a welcome addition arrived there: in an article by Ásta Camilla Gylfadóttir I saw a mention of a thesis about Jón by Arnar Birgir Ólafsson, so I sent him an inquiry about a digital copy. Arnar said that unfortunately he no longer had the thesis in digital form, but that he would be in touch if he could scan it or type it in anew. A few weeks later, this autumn, Arnar contacted me again to say he had scanned a copy that had turned up, and the thesis is now accessible on the website.
Of the 597 registered works, scans are still missing for 102 of them. On the map they can be recognized by their “half-dashed” markers. The scans nevertheless number 807, and I believe I have gone through all the folders that were handed over from the Reykjavík Art Museum at the time and are now in the National Archives – I returned the last folder there on the Friday before moving to Denmark in July 2013. Recently more drawings have turned up which it would be good to scan and get onto the web as soon as possible, since among them are probably some of the most important works, for example of Hallargarðurinn, of which alaska.is currently has only a photocopy and a sketch.
The original registration system, on which the database is built, was implemented in the TCL programming language for the AOLserver web server, with the data stored in the PostgreSQL database system.
When it came to linking the scanned drawings to the drawing records, a new system was implemented based on the Django web framework. One of Django’s strengths is that an administration interface is generated automatically from the defined database schema, and that interface can be tailored to whatever special needs a given website may have. This ability to customize the auto-generated administration interface was used to implement a selection dialog for scanned drawings, showing whether each one has already been linked to a drawing record. Initially the drawing scans were hosted on the same machine as the web system (hosted at home) and the information for the selection dialog was retrieved with direct file system access. This implementation of the back end was finished in 2009, as mentioned earlier, while the front-end interface of the website remained to be implemented.
When the thread was picked up again this summer, the decision was made to host the drawings in cloud storage and run the web system on a virtual machine at the GreenQloud cloud computing service, which offers APIs compatible with the popular Amazon web services. This called for a new way for the registration interface to fetch information about the drawing files; the boto library was used for that, along with Django’s caching support.
To show the location of each drawing project on a map, the database schema was modified to accommodate that information. Django’s dedicated support for geographic information was considered, but the conclusion was to simply use plain numeric fields for longitude and latitude and otherwise rely on the Google Maps API. The drawing records contain addresses for most of the works, and coordinates now had to be found from that information.
One way to do this is to use an API from Google to convert addresses into coordinates, but the precision of that service is limited to whole streets rather than individual houses. Borgarvefsjá, the Reykjavík city map viewer, locates houses precisely from an address, and behind the scenes a web service can be found that provides this information. That service, however, returns coordinates in the ISN93 reference system, so for use with Google Maps they must be converted to the global WGS84 system. Four years ago I sent an inquiry to the National Land Survey of Iceland about a way to project coordinates between these systems and was pointed to the Cocodati conversion tool. So in the updated administration module I implemented a registration interface that sends an address to the Borgarvefsjá web service and then, with the ISN93 coordinates obtained from it, queries the Cocodati interface to project them into GPS coordinates. The route taken here to obtain precise coordinates for the drawings thus relies on an unofficial Borgarvefsjá web service and on scraping the web results of the Land Survey’s conversion tool – a back road to this information, so to speak.
See screen recordings of the registration interface in use:
Information about the locations of the drawings is delivered to the interface in KML format, which the web system generates dynamically from the database (again with caching support), and the interface processes it with the help of the geoxml3 library.
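A sketch of such dynamic KML generation using only the Python standard library; the record fields and sample coordinates are hypothetical, not taken from the actual database:

```python
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"

def records_to_kml(records):
    # Build a KML document with one Placemark per (name, lat, lon) record,
    # similar in spirit to what the web system generates from the database.
    ET.register_namespace("", KML_NS)
    kml = ET.Element("{%s}kml" % KML_NS)
    doc = ET.SubElement(kml, "{%s}Document" % KML_NS)
    for name, lat, lon in records:
        pm = ET.SubElement(doc, "{%s}Placemark" % KML_NS)
        ET.SubElement(pm, "{%s}name" % KML_NS).text = name
        point = ET.SubElement(pm, "{%s}Point" % KML_NS)
        # KML coordinates are written in "longitude,latitude" order
        ET.SubElement(point, "{%s}coordinates" % KML_NS).text = "%f,%f" % (lon, lat)
    return ET.tostring(kml, encoding="unicode")

# Hypothetical sample record
sample = [("Hallargarðurinn", 64.146, -21.936)]
kml_doc = records_to_kml(sample)
```

A client-side library such as geoxml3 can then consume the resulting document directly.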
The drawings are scanned at a high resolution, 600 dpi, and are therefore stored in large image files that are unwieldy to handle. On the web they are displayed in the Deep Zoom format, which makes the display of large images responsive and diving into details smooth, in a manner similar to what map interfaces like Google Maps are built on. Initially the drawings were converted to this format with a Python script, and the intention was to host the resulting Deep Zoom data in the same place as the original scans, alongside the web system. Later it was decided to use the zoom.it service and the API it offers, with the corresponding adaptation of the web system that alaska.is is built on.
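Deep Zoom stores an image as a pyramid of levels, each half the size of the one above and cut into fixed-size tiles. A sketch of the pyramid bookkeeping, assuming the common 254-pixel tile size and an illustrative scan size (roughly an A1 sheet at 600 dpi; not the dimensions of any actual drawing):

```python
import math

def deep_zoom_levels(width, height, tile_size=254):
    # Compute the Deep Zoom pyramid for an image: level 0 is a single
    # pixel and the top level is the full-resolution image; each level
    # doubles the dimensions of the one below and is cut into tiles.
    max_level = int(math.ceil(math.log(max(width, height), 2)))
    levels = []
    for level in range(max_level + 1):
        scale = 2 ** (max_level - level)
        w = max(1, int(math.ceil(width / scale)))
        h = max(1, int(math.ceil(height / scale)))
        cols = int(math.ceil(w / tile_size))
        rows = int(math.ceil(h / tile_size))
        levels.append((level, w, h, cols * rows))
    return levels

pyramid = deep_zoom_levels(19843, 14031)
```

The viewer only fetches the tiles covering the visible region at the current zoom level, which is what keeps panning and zooming in a 600 dpi scan smooth.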
Once the web system was up at alaska.is, it became clear that the response time was by no means acceptable, as it took on average three seconds to display each page. The web server setup was based on Apache and the mod_wsgi add-on, but that proved too resource-hungry for the small nano virtual machine that powers the site, since it effectively loaded the web system into a new process for each request. This setup could have been tuned better, but instead the nginx web server with FastCGI was tried for this web system (and indeed others running on the same machine, for example the site hosting this post). This change yielded a dramatic speed difference, with each page now displayed in a fraction of a second.
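A minimal sketch of the kind of nginx + FastCGI setup described here, with hypothetical socket and file paths; the actual site configuration is not reproduced:

```nginx
server {
    listen 80;
    server_name alaska.is;

    # Static files served directly by nginx (hypothetical path)
    location /static/ {
        alias /srv/alaska/static/;
    }

    # Everything else handed to a long-running Django FastCGI process,
    # avoiding mod_wsgi's per-request process cost on a tiny instance
    location / {
        include fastcgi_params;
        fastcgi_param PATH_INFO $fastcgi_script_name;
        fastcgi_pass unix:/tmp/alaska.sock;
    }
}
```

The key difference from the Apache setup is that the application stays resident in one FastCGI process, so each request only costs a socket round trip.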
For assistance with this project, thanks go to Björn Axelsson, landscape architect at the City of Reykjavík’s Department of Environment and Planning, for providing access to a scanner; Samskipti and Pixel for scanning part of the drawing collection; Einar E. Sæmundsen, Samson B. Harðarson, Pétur Ármannsson and Gunnlaugur Björn Jónsson (my brother Gingi) for taking the initiative to place the drawing collection with the Reykjavík Art Museum; Elín S. Kristinsdóttir, archivist at the National Archives of Iceland, for receiving me kindly when I returned part of the drawing collection at the last minute before the move to Copenhagen; Guja Dögg Hauksdóttir for providing access to the drawing collection while it was stored at the Reykjavík Art Museum; and Arnar Birgir Ólafsson for providing a digital copy of his thesis.
Here are some photographs from the collection of Einar E. Sæmundsen: