Stafaskálin (The Letter Bowl)

While shopping for Christmas gifts I came across a bowl that reminded me of a recent school project:

Bowl with lettering by Arne Jacobsen.

And Santa Claus seems to have noticed, since I received another one just like it as a Christmas present:

[Photo: the bowl received as a Christmas present]

ALASKA.is

The website alaska.is provides access to the drawing collection of Jón H. Björnsson, along with biographical material.  Jón – born 19 December 1922, died 15 July 2009 – founded the Alaska nurseries and flower shops, and he was Iceland's first landscape architect.  Today, on what would have been his 91st birthday, is a fitting occasion to write about this website.

Over the years there has been considerable demand for access to Jón's drawing collection – which in recent years was stored at the Reykjavík Art Museum and is now at the National Archives of Iceland – among others from students at the Agricultural University of Iceland (LBHÍ) and the Iceland Academy of the Arts (LHÍ).  Jón appreciated this interest but was also concerned about the condition of the drawings, as most of them are drawn on fragile tracing paper that becomes more brittle with the years.  To provide full access to the collection while protecting the originals, Jón suggested that copies be made of the drawings for interested parties to consult.  Photocopies were the obvious option, but I, his son and an Internet enthusiast, suggested that the drawings be scanned and made accessible on the web.

Registration and preservation

The scanning of the drawings and the implementation of the website have, one could say, been ten years in the making, though with interruptions.  The project began in 2004, when it was decided to move the drawing collection from Dad's drawing office down to Hafnarhúsið, at the initiative of the landscape architects Einar E. Sæmundsen and Samson B. Harðarson, in collaboration with the architects Gunnlaugur Björn Jónsson and Pétur Ármannsson.  I then built a simple registration system, and Dad entered into it information about each and every work he handed over.  That database, which for a long time was accessible at jhb.bthj.is, is the foundation of alaska.is.

In the spring of 2009 I implemented a new system on top of the same database, which manages the scanned drawings and links them to the original text records.  In the autumn of 2009 I demonstrated the system at a meeting of the Association of Icelandic Landscape Architects, where my brother Gingi and I gave a presentation; by then I had made good progress on the administration part, where the registration of material takes place, while the web interface facing ordinary visitors remained to be implemented.  Since then, work on the project mostly lay dormant, apart from scanning, until late in the summer of 2013, when I picked up the thread again, reprogrammed the registration interface to keep track of the locations of the works, and implemented the public web interface that can now be seen at http://alaska.is/

The look is minimalist in the extreme and is built around the ALASKA logo.  Personally I think it just about works, though it could perhaps be polished a bit.  Under http://alaska.is/Jon-Hallgrimur-Bjornsson is assorted material I have found about Dad, which could be better organised and added to.  Recently a welcome addition arrived: in an article by Ásta Camilla Gylfadóttir I saw a mention of a thesis about Jón by Arnar Birgir Ólafsson, so I sent him an inquiry about a digital copy.  Arnar said that he unfortunately no longer had the thesis in digital form, but that he would be in touch if he could scan it or type it in anew.  A few weeks later, this autumn, Arnar contacted me again, saying he had scanned a copy that had turned up, and the thesis is now available on the website.

Of the 597 registered works, scans are still missing for 102 in the system.  On the map they can be recognised by their “half-striped” markers.  The scans nevertheless now number 807, and I believe I have gone through all the folders that were handed over by the Reykjavík Art Museum at the time and are now at the National Archives – I returned the last folder there on the Friday before moving to Denmark in July 2013.  Recently more drawings have turned up that should be scanned and put on the web as soon as possible, since they probably include some of the most important works, such as drawings of Hallargarðurinn, of which alaska.is currently only has a photocopy and a sketch.

The technology

The original registration system, on which the database is built, was implemented in the Tcl programming language for the AOLserver web server, with the data stored in the PostgreSQL database system.

When it came to linking the scanned drawings to the drawing records, a new system was implemented, based on the Django web framework.  One of Django's strengths is that an administration interface is generated automatically from the defined data model, and that interface can be tailored to whatever special needs a given website may have.  This ability to customise the auto-generated admin interface was used to implement a selection dialog for scanned drawings, showing whether each one had already been linked to a drawing record.  Initially the drawing scans were hosted on the same machine as the web system (hosted at home), and the information for the selection dialog was fetched with direct file system access.  This implementation of the backend was finished in 2009, as mentioned earlier, while the frontend interface for general visitors remained to be implemented.
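As an illustration, a customisation along those lines might look like the following minimal sketch of Django admin code; the Drawing and Scan models and their fields are hypothetical stand-ins, not the project's actual code:

    # sketch: a selection field that flags scans already linked to a record
    from django import forms
    from django.contrib import admin

    from .models import Drawing, Scan


    class ScanChoiceField(forms.ModelChoiceField):
        def label_from_instance(self, scan):
            # mark scans that already belong to a drawing record
            return scan.filename + (" (linked)" if scan.drawing_id else "")


    class DrawingAdminForm(forms.ModelForm):
        scan = ScanChoiceField(queryset=Scan.objects.all(), required=False)

        class Meta:
            model = Drawing
            fields = "__all__"


    class DrawingAdmin(admin.ModelAdmin):
        form = DrawingAdminForm


    admin.site.register(Drawing, DrawingAdmin)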

Further implementation

When the thread was picked up again this summer, the decision was made to host the drawings in cloud storage and run the web system on a virtual machine in the cloud at GreenQloud, which offers programming interfaces compatible with the popular Amazon web services.  This called for a new way for the registration interface to fetch information about the drawing files; the boto library was used for that, along with Django's caching support.
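A minimal sketch of how such a listing might be fetched with boto and cached with Django; the endpoint host, credentials and bucket name are placeholders:

    from boto.s3.connection import OrdinaryCallingFormat, S3Connection
    from django.core.cache import cache


    def get_scan_keys():
        # cache the key listing so the storage API is not hit on every request
        keys = cache.get("scan-keys")
        if keys is None:
            conn = S3Connection(
                "ACCESS_KEY", "SECRET_KEY",
                host="storage.example.com",  # S3-compatible endpoint
                calling_format=OrdinaryCallingFormat(),
            )
            bucket = conn.get_bucket("drawing-scans")
            keys = [key.name for key in bucket.list()]
            cache.set("scan-keys", keys, 60 * 15)  # refresh every 15 minutes
        return keys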

To show the location of each drawing project on a map, the database schema was changed to accommodate that information.  Django's dedicated support for geographic data was considered, but the conclusion was simply to use plain numeric fields for longitude and latitude and otherwise rely on the Google Maps API.  The drawing records include addresses for most of the works, and coordinates now had to be found from that information.
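In Django terms this amounts to little more than two extra fields on the model; a sketch with a hypothetical Project model:

    from django.db import models


    class Project(models.Model):
        name = models.CharField(max_length=200)
        address = models.CharField(max_length=200, blank=True)
        # plain numeric fields instead of GeoDjango's geometry types
        latitude = models.FloatField(null=True, blank=True)
        longitude = models.FloatField(null=True, blank=True)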

One way to do that is to use Google's geocoding API to turn addresses into coordinates, but the accuracy of that service is limited to whole streets rather than individual houses.  Borgarvefsjá, the Reykjavík city map viewer, locates houses precisely by address, and behind the scenes there is a web service providing that information.  That service, however, returns coordinates in the ISN93 reference system, so for use with Google Maps they have to be converted to the global WGS84 system.  Four years ago I sent an inquiry to the National Land Survey of Iceland about a way to project coordinates between these systems and was pointed to the Cocodati conversion tool.  So in the updated administration interface I implemented a registration form that sends an address to the Borgarvefsjá web service, and with the ISN93 coordinates obtained there, a subsequent request is made to the Cocodati interface to project them into GPS coordinates.  The route taken here to get precise coordinates for the drawings thus relies on an unofficial Borgarvefsjá web service and on scraping the web output of the Land Survey's conversion tool; a back road to this information, in other words.
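For reference, the ISN93 to WGS84 projection itself can also be done locally with a library such as pyproj, since ISN93 / Lambert 1993 has the EPSG code 3057; a minimal sketch of that alternative to the scraping step:

    from pyproj import Transformer

    # ISN93 / Lambert 1993 is EPSG:3057; WGS84 is EPSG:4326
    transformer = Transformer.from_crs("EPSG:3057", "EPSG:4326", always_xy=True)


    def isn93_to_wgs84(x, y):
        lon, lat = transformer.transform(x, y)
        return lat, lon


    # example with illustrative (not real) coordinates near Reykjavík
    print(isn93_to_wgs84(356000.0, 408000.0))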

See screen recordings of the registration interface in use:

Information about the locations of the drawings is delivered to the interface in KML format, which the web system generates dynamically from the database (again with caching support), and the interface processes it with the help of the geoxml3 library.
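A sketch of how such a dynamically generated, cached KML view might look in Django; the Project model and its fields are hypothetical, and real code should XML-escape the names:

    from django.core.cache import cache
    from django.http import HttpResponse

    from .models import Project

    KML_TEMPLATE = ('<?xml version="1.0" encoding="UTF-8"?>'
                    '<kml xmlns="http://www.opengis.net/kml/2.2">'
                    '<Document>%s</Document></kml>')

    PLACEMARK = ('<Placemark><name>%s</name>'
                 '<Point><coordinates>%f,%f</coordinates></Point></Placemark>')


    def projects_kml(request):
        kml = cache.get("projects-kml")
        if kml is None:
            placemarks = "".join(
                PLACEMARK % (p.name, p.longitude, p.latitude)  # KML wants lon,lat
                for p in Project.objects.exclude(longitude=None)
            )
            kml = KML_TEMPLATE % placemarks
            cache.set("projects-kml", kml, 60 * 60)
        return HttpResponse(kml, content_type="application/vnd.google-earth.kml+xml")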

The drawings are scanned at a high resolution, 600 dpi, and are therefore stored in large image files that are heavy to handle.  On the website they are displayed in the Deep Zoom format, which makes the display of large images responsive and diving into details smooth, much in the same way that map interfaces like Google Maps work.  Initially the drawings were converted to this format with a Python script, and the plan was to host the resulting Deep Zoom data in the same place as the original scans, with the web system.  Later it was decided to use the zoom.it service and the API it offers, with a corresponding adaptation of the web system that alaska.is is built on.
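The post does not include that script, but the conversion it performed can be sketched with the open-source deepzoom.py library; file names and parameters here are illustrative:

    import deepzoom

    # slice a large scan into a Deep Zoom pyramid (.dzi plus tile folders)
    creator = deepzoom.ImageCreator(
        tile_size=254, tile_overlap=1, tile_format="jpg", image_quality=0.9
    )
    creator.create("scans/drawing-0001.tif", "deepzoom/drawing-0001.dzi")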

Release

The project's code was originally kept at Google Code, but with the changes described above it was moved to hosting at GitHub.  The system is programmed in Eclipse with Python support.

When the web system went live at alaska.is, it became clear that the response time was nowhere near acceptable, as it took three seconds on average to render each page.  The web server setup was based on Apache and the mod_wsgi extension, but that proved too resource-hungry for the small nano virtual machine powering the site, as it effectively loaded the web system into a new process for every request.  That setup could have been tuned further, but instead the nginx web server was tried, together with FastCGI, for this web system (and in fact others running on the same machine, for example the site hosting this post).  The change produced a dramatic difference in speed, with each page now rendering in a fraction of a second.
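A minimal sketch of such an nginx FastCGI setup, assuming a Django FastCGI process listening on a local Unix socket; the socket path and server name are placeholders:

    server {
        listen 80;
        server_name alaska.is;

        location / {
            # Django FastCGI backend on a local Unix socket
            fastcgi_pass unix:/tmp/alaska-fcgi.sock;
            fastcgi_param PATH_INFO $fastcgi_script_name;
            fastcgi_param REQUEST_METHOD $request_method;
            fastcgi_param QUERY_STRING $query_string;
            fastcgi_param CONTENT_TYPE $content_type;
            fastcgi_param CONTENT_LENGTH $content_length;
            fastcgi_param SERVER_NAME $server_name;
            fastcgi_param SERVER_PORT $server_port;
            fastcgi_param SERVER_PROTOCOL $server_protocol;
        }
    }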

A solution tailored to the cloud

The result is a web system for large image files with map-based registration, tailored to the popular cloud solutions S3 and EC2, which can be hosted inexpensively at, for example, GreenQloud or Amazon.  In addition, it would be relatively simple to adapt the system to cloud platforms such as Google App Engine, with Google Cloud SQL and their Django support.

Acknowledgements

For help with this project, thanks go to Björn Axelsson, landscape architect at the City of Reykjavík's Department of Environment and Planning, for providing access to a scanner; Samskipti and Pixel for scanning part of the drawing collection; Einar E. Sæmundsen, Samson B. Harðarson, Pétur Ármannsson and Gunnlaugur Björn Jónsson (my brother Gingi) for the initiative to house the drawing collection at the Reykjavík Art Museum; Elín S. Kristinsdóttir, archivist at the National Archives of Iceland, for the warm welcome when I returned part of the drawing collection at the last minute before moving to Copenhagen; Guja Dögg Hauksdóttir for providing access to the drawing collection while it was stored at the Reykjavík Art Museum; and Arnar Birgir Ólafsson for providing a digital copy of his thesis.

Here are a few photos from the files of Einar E. Sæmundsen:

Phosom

Project report written for a Game Design class at ITU.
Other versions of the report can be downloaded from the
Google Drive source document.

Phosom is a game, or a toy, based on image similarity: players receive challenges in the form of a photograph, to which they respond by either taking a picture or finding one on the web, and receive a score based on how visually similar the images are.  Creativity and visual memory are good skills to possess when coming up with an imitation of the given original, which need not depict the same motif as the original but rather be visually similar overall.  This kind of play offers the opportunity to perform the popular activities of creating or finding photographs, with a defined goal of similarity and rewards given according to performance within the frame defined by that goal.  It also leads to thoughts about the originality of visual creations: is the original image the player is to imitate really original, or is it itself an imitation of something else, and can the imitation created by the player be considered an original for imitation in some other context?

Motivation

Today people commonly carry a camera in their pocket at all times, embedded in their smartphone.  Photography in general is a very popular hobby.  Playing digital games on mobile devices is a fast-growing form of entertainment that is often interwoven with everyday life, where people pick up a casual game during short and maybe random moments in the course of the day.  Combining those elements – readily available cameras, interest in photography and casual gaming – in a software toy-game is the goal of Phosom.

The game allows players to connect with others – people they know or random (anonymous) players – to challenge them with photographs and receive challenges back.  Responding to those challenges offers much room for creativity when searching your everyday environment for motifs that may give a good score against the given challenge.  Play with this toy-game can then be “…viewed as…[a] potentially artistic [enterprise] capable of stimulating and releasing the creative components of the participant … that gives satisfaction to [her] creative imagination, nurtures the emotions, excites the soul, and satisfies the senses.”[1]  Phosom may thus provide social value and develop personal skills.  It also encourages people to explore their environment in a new and exciting manner, where they may learn more about their surroundings in the process.

Design iterations

The design of Phosom, as a toy to play with or even as a game, has gone through a few iterations, and here is an account of what I consider the highlights of that process.

Initial idea

While taking a nap with my nine-month-old son last September (2013), the unrequested idea sprang up to create a mobile application for a group of people, all present at the same (possibly large) location and each holding a mobile device running that app.  Together they would form an Internet-connected game in which the group is divided into two teams and each member is assigned an opponent from the other team.  Everyone is given the task of photographing a motif of their own choosing.  Having taken a picture, a team member sends it to his or her opponent, and then waits for a picture to be delivered back in the same manner.  Having received a picture, a team member has to create a photograph, with his mobile device camera, that resembles the received picture as closely as possible, either by finding the motif his opponent photographed or by taking a visually similar picture in some creative way.  Each team member's effort towards image similarity is graded, and the total grade earned within each team determines which team won the game.

Although I was not actively seeking ideas while taking that nap, I did know that a game would have to be implemented as a final project in the Game Design course.  During the first days of the semester, we students were guided into group games like Ninja[2], to mingle, break the ice and get us thinking about games.  That probably influenced the game setting described in the previous paragraph.  The required Internet connectivity of the game is also probably influenced by my fondness for the Internet as a technology and for how it enables communication.

What about playful communication with photographs?  The act of comparing images within a game is an obvious consequence of the frequent use of image search engines and apps like Google Goggles[3].  Indeed, mobile photo toys like Instagram and Snapchat already exist, but competing at finding the most similar motif to the one given may be considered something novel.

Taking that nap probably tuned the brain into an open mode[4] where it could be flooded with ideas[5] sourced from those influences and inspirations.

Initial user interface sketches. Name ideas other than Phosom include PhotoKO and Phosimi.

Group brainstorming

All team members were enthusiastic about the basic idea of creating play involving photography, visual memory and interpretation.  Everyone also saw from their own angle what could be done with that core game mechanic[6] of playing with image comparison, and so in our discussions about what kinds of gameplay could be conceived on that foundation, several different ideas were collected into a Google Drive document and onto our Trello board.

Trello.com was used as an electronic Scrum wall to organise the tasks to do.

The most discussed gameplay scenarios were one-on-one challenges and turn-based group challenges, where players can either get automatically assigned photos or create their own challenges by taking a picture or uploading one.  Other possible types of gameplay we discussed include a memory drawing game, where a drawing is shown for a limited time and the player has to draw it from memory and take a picture of it; an art quiz, where the player is shown a well-known image and has to track it down on the net to submit as a response; and a tourist guide in the form of photos from sites of interest, which players find and photograph to have the next location revealed (which could be offered as a white-label product[7]).  We also discussed offering different categories to play within, where challenge pictures would come from the chosen category, and bribery was even considered, where players could in some way bribe the system to get a better result, which led to some discussion about ethics.

One of the more fascinating elements within the gameplay scenarios we discussed is the possibility of wordless communication between geographically distant players, as they go about their everyday lives and may at random moments spot interesting visual motifs to respond to a challenge with, or to create a new one[8].  That element of remote challenges is not present in the initial idea, which is basically about a toy (or a tool enabling play) for a group of people present at the same place at the same moment.  So here the idea was already evolving into something different.

We used a closed Google+ group for team communication.

Prototyping

From that collection of ideas, we settled on a bare minimum of features to implement initially, to get a first hands-on feel for the act of coming up with an image that is supposed to be as similar as possible to another image given as a challenge.  This minimum consisted of enabling the player to ask for a challenge, delivered as a random photograph from the Flickr image hosting service, and then to perform a web search, within the game, to find the most similar picture.

Bare minimum of features to implement initially, marked in red on a flow diagram.

As the initial idea is about taking pictures to create something similar to a given image, the ability to search the web for images instead was only considered a quick means to have something working, to be thrown away at later stages of development when a camera would be accessible from the game running on a mobile device.  Much to our surprise, casual playtesting with people we met in passing indicated that the option to search the web for images similar to the one given was regarded as a pleasing game mechanic in itself, and that the game could be based on that alone in a desktop environment, where a mobile camera is not an option[9].  More formal playtesting later on confirmed this, where players liked having the option either to search for images on the web or to find a motif in their environment with the camera.

Results shown in the first prototype, with an image from a web search as a response to a random challenge photo from Flickr.

Even though this initial prototype was well received, with its limited offering of automatic challenges and web searches, we were still eager to try playing with the mechanic of taking pictures with a camera when given a photo to imitate.  When that ability became available in a further iterated prototype, it mostly added a new dimension to the existing gameplay rather than changing it completely.  Indeed, it was more interesting to explore the environment for a similar motif and to ask someone to pose in this or that way, resembling what the picture being imitated showed.  This allowed for more social interaction within the present environment than endless web searches would have, and that local interaction could be a good addition to the remote interaction discussed previously.

Trying out the camera to respond to challenges in Phosom.

Technical limitations masked by narrative

What became apparent in the first prototype with web searches, and in later versions offering photography interaction, is the inferiority of the applied method of image comparison and the perceived uncertainty about how it works.  Players were confused about how their results were being evaluated, and as a result they were not sure what they were looking for[10].  At least this was often the case the first few times a player took a challenge, after which he or she got a better feel for what worked and what did not.

Was this something to worry about, or was it part of learning how the game works?  We discussed this question quite a lot.  Playability became a concern when players found the results they were given unfair.  A player might find the same object as in the given challenge photograph, but still get an aggravatingly low score because the overall tone and brightness of the image she produced was different.  In those cases she would be likely to throw the game away immediately and never play it again.

Two of the worst examples of unfair scores, where scores of 589 and 632 out of 1000 were given.

Before we commenced with formal playtests, we decided to use those technical limitations to our advantage and conceived a narrative that introduced a fictional character whose opinions would represent the evaluation of similarity.  Any peculiarity of the underlying image comparison method could then be attributed to this character's quirkiness.  We were quite happy with this solution; still, playtesters who in some cases had not taken the time to familiarize themselves with our fictional character and her role in the mechanics of the game were confused nonetheless.  From the playtests we learned that participants would have appreciated some kind of introduction to how the images were being evaluated, so they would have a better idea of what exactly they were about to do.

We called the fictional character who evaluates image similarity within the game Phosie.  Graphics by Anders Wiedemann.

Player passion

Apart from getting the highest score when comparing your image with another, what is your goal while playing with this toy-game?  This question led to discussion about the possible metagaming[11] players could engage in while interacting with the basic play offered by Phosom.  Given the positive feedback we received on the prototypes, where playtesters said they would like to play this kind of game, there seemed to be little doubt about the potential of the idea for a playable game.  But the question remained how engaging the game could be: what would keep players coming back to it?

With that in mind, we thought about possible in-game values that players would compete to win.  Typical representations of such values are coins or points, but I was most fond of using photo prints to represent them, which players would collect to be able to take pictures.  Those prints I liked to call pholoroids; players could win them by coming up with an image more similar to a challenge than their competitors could produce.  To take a picture, a player would have to possess a pholoroid, and each day he would be given a handful of them for a good start.  That could lead him to win a whole pile of pholoroids, which could then enable him to send a picture as a challenge to a group of other players, where, in a sense, he would be putting them all at stake: possibly winning pholoroids from all the players in that group after all rounds had been taken, or possibly losing them all to another player within the group who performed better – a high-risk, high-reward scenario.

Further development

After reflecting on the design process discussed above and the collection of ideas, I have come to believe that all this is overly complex, adding layers of narrative and virtual game values, while the values gained directly from the core mechanic could be interesting enough, and how the game works could be self-explanatory without words.  It could be more interesting to steer the development towards a minimalist game design, where “…self-imposed, deliberate constraints on both the design process and game are an important component to exploring new types of game and play” and “…these point to choosing a few powerful, evocative elements, and then exploring the design space they constitute”, where “the goal is not just to strip away the unnecessary parts but to highlight and perfect the necessary elements”[12].

Flow sketch of two possible ways to start the game: first choosing what or whom to play with, followed by a rather complex set of options. This can be simplified to the single option of taking a photograph that imitates another, while still leaving a rich set of goals to explore in a minimalist game design that allows for deep gameplay.

Collection of core game values as a metagame

Instead of virtual in-game values in the form of pholoroids, players could see how many photos they hold the most similar imitation of, and decide for themselves whether this count of similarities is something they care about and whether they want to compare it with what other players have gained.  All images put into the game would be open to imitation by any player.

At one point in time you could have the most similar imitation of a photo, and thus in some sense own it, but later on someone else could do a better imitation of that photo and in the same sense win it from you.  Notifications could be delivered about those events, which might ignite competitive fires in a player, who may decide to try to do a better imitation of that photo to win it back, or look at what other photos the competing player “owns” and try to win some of them from him.  And so on, back and forth.

As a player, you could care about collecting an increasing number of best imitations, comparing your gains with those of all other players in the game, rising and falling through the ranks.  Or you could narrow the view of whom to compare with, seeing how your performance stacks up against that of in-game friends, who could be defined manually, by adding them in a “traditional” manner, or who could manifest organically as they decide to compete against you and vice versa.  Spontaneous social connections could form, as everyone can compete against anyone, possibly imitating a photograph that was itself created to imitate another photograph, in an endless recursive spiral: a snowball of imitations rolled out of the very first photograph added to the game, in a postmodern world[13] of endless imitation to explore, where players create the games they like with the mechanics offered by this photo toy-game.  Here we would have metagaming directly based on the core game loop and the values it creates[14].

The interface would be minimalistic, initially only showing the basic element around which the play turns: a photo, one at a time.  The photos could be navigated sequentially by popularity or by some other metric, such as location.  The photos with the most imitations would be considered the most popular – imitations as likes.  In the same way, players whose pictures have received the most imitations would be considered the most popular.  Would the game then be about collecting the highest count of best imitations, or about becoming the most imitated photographer?  Players decide, since “being playful is an activity of people, not of rules”[15].

No introduction would be provided, just that basic element covering the whole screen, with possibly non-obvious interactions for the player to discover as he taps, touches and swipes[16].  This type of interface is inspired by recent mobile apps such as Snapchat[17], Vine[18], Mindie[19] and Rando[20].  The player will see images with given scores, attached to the images they are imitating, and see the ability to take a picture.  Within that context a player should soon realise what she is looking for when taking a picture; the attached images with the highest scores should make it instantly visible what works within the game.

Graphic design can make a game or a toy look beautiful, but it can be argued that it is not the reason anyone likes to play with it; rather it is the affordance[21] of play it offers, and maybe that should then be the most, or the only, visible element.  That is at least one way to approach the design, one that suits a graphically challenged developer well and now seems fashionable, as the example apps mentioned here show.

Should the development of Phosom proceed in the spirit described here, I would be taking it closer to a non-game mobile app, ignoring the elements that define games[22], or at least leaving their implementation up to the players, within the facilities provided.  This may be a natural progression, as it aligns well with my background as a traditional software and mobile app developer with little gaming experience (I did finish Prince of Persia and all episodes of Commander Keen back in the days of DOS[23] – and I was the proud owner of an Atari 2600 Video Computer System[24]).  That background of skimpy gaming experience can indeed be considered an asset[25], and that is a perspective I will try to embrace.

Technology

Phosom is a service dependent on a backend, implemented in Java, running on Google App Engine, which offers an API using Google Cloud Endpoints.  The interface is implemented in HTML5 / JavaScript / CSS using the jQuery Mobile framework, for both the web and smartphone versions.  The mobile app versions are compiled with Apache Cordova, the open source engine behind PhoneGap, using the tooling support of NetBeans.  Images are stored in, and uploaded directly to, Google Cloud Storage, and their similarity is computed by a servlet, running on a Google Compute Engine instance, that currently uses the OpenIMAJ toolkit.

Image comparison methods

When first contemplating the feasibility of a game based on image similarity analysis, I searched for readily available libraries to handle the image comparison and settled on two to try out: LIRE[26] and OpenIMAJ[27].  Preliminary tests with those libraries indicated that some kind of play could be facilitated with the image comparison features they offer, computing distance vectors between images based on their histograms.
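The histogram-based comparison these libraries offer can be sketched independently of them; a rough Python illustration of the idea (not the project's actual OpenIMAJ code), mapping histogram distance to a 0–1000 score like the one the game shows:

    import numpy as np
    from PIL import Image


    def colour_histogram(path, bins=4):
        # normalised 3D colour histogram, flattened into a feature vector
        pixels = np.asarray(Image.open(path).convert("RGB")).reshape(-1, 3)
        hist, _ = np.histogramdd(pixels, bins=(bins,) * 3, range=((0, 256),) * 3)
        return (hist / hist.sum()).ravel()


    def similarity_score(path_a, path_b):
        # Euclidean distance between histograms, crudely clamped to 0..1
        distance = np.linalg.norm(colour_histogram(path_a) - colour_histogram(path_b))
        return int(round((1.0 - min(distance, 1.0)) * 1000))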

The fact that both libraries are implemented in Java led to Google App Engine for Java[28] being chosen as the environment for the backend.  With development under way it became apparent that those libraries could not be used within the Java Runtime Environment provided by App Engine, due to its sandbox restrictions[29].  The possibility of comparing the images on the (mobile device) client was explored, using JavaScript and the HTML5 canvas element[30], but the browser's same-origin policy[31] made that difficult when communicating between App Engine and Cloud Storage on different domains.

As a quick solution, a simple servlet using the OpenIMAJ library was created and run in the lightweight Winstone servlet container on a nano compute instance at GreenQloud[32].  The resources of that inexpensive instance are very limited, so the image comparison took a long time to run.  After receiving a $2000 starter pack for the Google Cloud Platform[33], the image analysis servlet was moved to a more powerful and expensive Compute Engine instance, with a better response time as a result.  This starter pack, with its six-month expiration time, is one motivation to continue the development of Phosom and see where it goes.

Patterns

Programming the backend, I used a traditional object-oriented approach, with object relationships in hierarchies, probably due to my software development background.  With that approach, the relatively simple model behind the game quickly became confusing, and I have since learned that the Entity component system (ECS) design pattern is often regarded as a better fit for programming computer games[34].
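As a rough illustration of the pattern (a minimal Python sketch, not the project's Java code; all names are hypothetical): entities are bare identities, components are plain data attached to them, and systems hold the logic.

    class Entity:
        def __init__(self, eid):
            self.eid = eid
            self.components = {}


    class Imitation:
        # plain data: which photo this one imitates, and how well
        def __init__(self, original_eid, score):
            self.original_eid = original_eid
            self.score = score


    def ranking_system(entities):
        # systems act on whichever entities carry the relevant component
        scored = [e for e in entities if "imitation" in e.components]
        return sorted(scored, key=lambda e: e.components["imitation"].score,
                      reverse=True)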

For the future development of Phosom outlined above, I would like to refactor the underlying data model from being centered around a game entity to being focused on a photo entity instead, and in that process a move to ECS could be in order.  In the same process I would like to consider using hammer.js[35] instead of the jQuery Mobile UI framework currently in use, for the simplified, minimalistic user interactions I have in mind.

Source control

The client code is hosted in source control at:  https://github.com/bthj/Phosom-client

Code for the image analysis servlet is in source control at: https://github.com/bthj/phosom-image-analysis

The backend server code is at:  https://github.com/bthj/Phosom-server

Conclusion

Given the widespread use of mobile devices and the interest in photography across all demographics, an opportunity to play with a combination of the two may be welcomed.  Phosom would not be the first opportunity to play with photography on mobile devices, but it could provide a novel angle to approach that play from.  It can also ignite philosophical thoughts about originality and authenticity, while players use their visual memory to scout their surroundings for reminiscent images.

The desktop prototype of Phosom can be played at:  http://phosom.nemur.net/

and the mobile prototype for Android can be downloaded at:  http://phosom.nemur.net/PhosomClient-debug.apk

IT University of Copenhagen
Game Design course – instructor:  Miguel Sicart
autumn 2013
Björn Þór Jónsson (bjrr@itu.dk)


[1] Klaus V. Meier:  An Affair of Flutes: An Appreciation of Play, p. 8 & 10.

[2] ITU students play Ninja:  https://www.facebook.com/photo.php?fbid=10151930076531834&l=9cebdcb440

[3] “Google Goggles is a downloadable image recognition application…”  http://en.wikipedia.org/wiki/Google_Goggles

[4] “…the open mode, is relaxed… expansive… less purposeful mode… in which we’re probably more contemplative, more inclined to humor (which always accompanies a wider perspective) and, consequently, more playful.” – John Cleese on Creativity:  http://youtu.be/tmY4-RMB0YY?t=7m40s , transcript: https://github.com/tjluoma/John-Cleese-on-Creativity/blob/master/Transcript.markdown

[5] When my brain decides it’s time for great ideas – The Oatmeal:  http://theoatmeal.com/pl/brain/ideas

[6] “Game mechanics are methods invoked by agents, designed for interaction with the game state.”  - Miguel Sicart: Defining Game Mechanics.  http://gamestudies.org/0802/articles/sicart

[7] “A white-label product…is…produced by one company…that other companies…rebrand to make it appear as if they made it.”  - http://en.wikipedia.org/wiki/White-label_product

[8] “In multiplayer games, other players are typically the primary source of conflict.”  “We like to see how we compare to others, whether it is in terms of skill, intelligence, strength, or just dumb luck.”  - Tracy Fullerton: Game design workshop, 2nd ed., p. 77 & 313.

[9] The playability of the desktop prototype was compared to that of GeoGuessr:  http://geoguessr.com/

[10] “What does the player need to know?:  Where am I?  What are the challenges?  What can I do/what am I doing?  Am I winning or losing?  What can I do next/where can I go next?  - Miguel Sicart: “User Interface and Player Experience”, lecture slide in Game Design-E2013, ITU, Copenhagen, October 21:  http://itu.dk/people/miguel/Design_Lectures/UI.pdf#page=30

[11] “Metagaming is a broad term usually used to define any strategy, action or method used in a game which transcends a prescribed ruleset, uses external factors to affect the game, or goes beyond the supposed limits or environment set by the game.  Another definition refers to the game universe outside of the game itself.”  - http://en.wikipedia.org/wiki/Metagaming

[12] Andy Nealen et al.: Towards Minimalist Game Design, p. 1 & 2.

[13] “Authenticity is invaluable; originality is non-existent. And don’t bother concealing your thievery – celebrate it if you feel like it. In any case, always remember what Jean-Luc Godard said: ‘It’s not where you take things from – it’s where you take them to.’” – Jim Jarmusch  http://www.goodreads.com/quotes/131591-nothing-is-original-steal-from-anywhere-that-resonates-with-inspiration

[14] “The metagame, essentially, refers to what everyone else is playing.” -Jeff Cunningham:  What is the Metagame?  https://www.wizards.com/magic/magazine/Article.aspx?x=mtgcom/academy/19

[15] Linda A. Hughes:  Beyond the Rules of the Game:  Why Are Rooie Rules Nice?  Annual Meetings of The Association for the Anthropological Study of Play (TAASP), Fort Worth, Texas, April, 1981, p. 189.

[16] “What challenges are developers and designers facing creating apps for touch devices after 30 years of ‘mouse and buttons’.”  Teaching Touch – Josh Clark:  http://www.youtube.com/watch?v=US_bznxIQPo

[17] “Snapchat is a photo messaging application.” - http://en.wikipedia.org/wiki/Snapchat

[18] “Vine enables its users to create and post short video clips.”  – http://en.wikipedia.org/wiki/Vine_(software)

[19] “Mindie is a new way to share life through music video…” - http://www.mindie.co/

[20] “Rando is an experimental photo exchange platform for people who like photography.”  http://rando.ustwo.se/

[21] “Affordances provide strong clues to the operations of things” – Donald A. Norman: The design of everyday things.

[22] “…we…identify seven elements in games:  1. Purpose or raison d’être  2. Procedures for action.  3.  Rules governing action.  4. Number of required players.  5. Roles of participant.  6. Participant interaction patterns.  7. Results or pay-off.”  - E. M. Avedon:  The Structural Elements of Games.

[23] “DOS, short for Disk Operating System…dominated the IBM PC compatible market between 1981 and 1995…”  http://en.wikipedia.org/wiki/DOS

[24] Atari 2600  http://en.wikipedia.org/wiki/Atari_2600

[25] ““Students who know every game often have preconceptions about what games are … I have to find ways to make them see that games are an aesthetic form that hasn’t been exhausted. …[it] is sometimes more difficult than starting from scratch with someone who’s maybe a casual gamer or just curious“ – Faye”  - José P. Zagal, Amy Bruckman: Novices, Gamers, and Scholars: Exploring the Challenges of Teaching About Games.  http://gamestudies.org/0802/articles/zagal_bruckman

[26] “Image similarity search with LIRE”  http://blog.mayflower.de/1755-Image-similarity-search-with-LIRE.html

[27] Open Intelligent Multimedia Analysis toolkit for Java:  http://www.openimaj.org

[28] Google App Engine for Java:  https://developers.google.com/appengine/docs/java/

[29] Google App Engine JRE restrictions:  https://developers.google.com/appengine/docs/java/#Java_The_JRE_white_list

[30] A basic image comparison algorithm using average hashes, implemented in JavaScript, was considered:  https://github.com/bitlyfied/js-image-similarity

[31] Same-origin policy:  http://en.wikipedia.org/wiki/Same-origin_policy

[32] Compute Qloud:  http://greenqloud.com/computeqloud/

[33] Google Cloud Platform Starter Pack:  https://cloud.google.com/developers/starterpack/

[34] When designing a game, an object-oriented approach may lead to “deep unnatural object hierarchies with lots of overridden methods” – “Anatomy of a knockout”  http://www.chris-granger.com/2012/12/11/anatomy-of-a-knockout/

[35] Hammer.js – A javascript library for multi-touch gestures:  http://eightmedia.github.io/hammer.js/

rotatengine.js

Project report written for a Game Engines class at ITU.
This site's template is a bit outdated, so the PDF version,
which can be downloaded from the
Google Drive source document,
may look better – though the embedded videos are missing there.

Rotatengine is a JavaScript framework intended to facilitate the creation of games, for mobile touch devices, that require the player to spin around in circles, in different directions, to reach various goals.  As the player turns around, holding the device in front of him or her with arms stretched out, the game’s content moves accordingly as if it is attached to a cylinder or a sphere, within which the player is standing.

Motivation

Young children often spin around as a form of play, performed for the sheer joy of it and maybe for the resulting dizziness as well[1].  Many forms of dance involve spinning around, to various degrees (!) and at different moments in time, and people usually dance for their enjoyment.  Spinning around can also be part of religious acts, whether they be Tibetan[2] or Islamic[3].  The act of spinning in circles has even been promoted as a means towards weight loss[4].

So the impulse to spin around, for different reasons and in various contexts, seems to be quite fundamental in us humans.  Usually the act is related to fun and play, and rotatengine.js is based on the idea of encouraging, and maybe structuring, object play[5] that requires or offers spinning around.

Many types of games can be conceived based on this frame of play and implemented with this kind of game engine.  One kind can include textual elements arranged in a circle, which the player must tap in the correct order to organize them into a coherent goal.  Another could present the player with images that she must tap when certain conditions are met, for example when the image aligns with another fixed image, text or sound.

Here are outlined more specifically a few game ideas:

  • Rotabet:  The player is presented with the letters of the alphabet in a random order, spread out in a circle around him, as if he were standing within a cylinder where the letters are painted on the wall and his view into that world is through the mobile device screen.  By rotating the device in different directions and angles, the player sees different portions of the cylinder wall and has the goal of tapping the letters in correct alphabetical order.  As the player taps the letters, they are arranged sequentially in a fixed position on the screen, so he sees the progression towards the goal: the letters of the alphabet arranged in correct order.  A game like this utilizing rotatengine.js can then offer various gameplay mechanics, such as penalizing incorrect choices (taps) by cropping off the already accomplished result, rewarding so and so many correct choices in a row by allowing the next incorrect choice without penalty, and giving an overall score based on how quickly the goal was reached.  Levels can be made increasingly difficult, for example by varying the required player response time; at first removing the choices (letters) as they are chosen, making the level easier as it progresses, and at later levels leaving all choices in, keeping the level constantly difficult.  Fonts could change with level progression: at first a large and readable sans-serif font, later on less (quickly) readable script fonts.  This could be considered an educational game, where the player gains practice and fluency with the alphabet.  Rotabet is actually the initial idea that inspired the creation of rotatengine.js.
  • Rotaquote, a quotes game:  The player sees the name of a known person fixed on the screen and then a few words scattered around him or her in random order.  Here the goal would be to tap the words in an order that would arrange them into a (famous) quote that is attributed to the named person.
  • Similar to Rotaquote, a game could be created that is based on poetry instead of quotes.  Here the player would have to assemble (known) poems, line by line.  Each part of the poem (stanza) could form one level of the game.  If the player makes a certain number of errors, she might for example have to start again at the previous level.  The goal is to assemble the whole poem.
  • Form grammatically correct sentences:  Similar to the quotes and poetry games, the player is presented with a soup of words she has to arrange into any of possibly several grammatically correct sentences.
  • Anagram:  Player is given the task of assembling a given number of anagrams from a set of letters she can spin around, for example:  emit, item, mite, time.
  • Palindrome:  Given a random sequence of letters, the player has to arrange them into a palindrome, for example:  abba, radar, kayak.
  • Rotatag – match pictures with tags:  Various photo hosting services, like Flickr or Instagram, let users (#)tag pictures with descriptive words.  Those services also offer public programming interfaces (APIs), and a game could use them to pick a few random pictures each time to spread around the player, then pick a tag from one of the pictures to place in a fixed position on the screen.  The player is then to guess which picture the tag came from, by tapping on it.
  • Match a picture to a written word:  Around the player, on the walls of the cylinder, are visual representations of various objects and beings, such as a chair or a duck.  A word appears on the screen in a fixed position and the player is to tap on the visual that matches that word.  So if the word “horse” appears, the player must spin around until he sees a picture of a horse and tap it, with resulting encouragement in the form of a cheering sound or some in-game value, such as a trophy.  This type of game could be suitable for young children learning to read or those learning a new language.
  • Tap a graphic matching the sound you just heard:  Five visuals are spread around the player, and at regular intervals one of five matching sounds is played; the player must rotate to the matching graphic and tap it in time.  So, for example, if the player hears a goat screaming, she must rotate to the position where the drawing of a goat is located and tap it in time.  Progressing levels decrease the required response time, and the goal is to keep playing as long as possible without an error or too long a response time.  This type of game is inspired by the audio game Bop It[6], originally implemented in specialized hardware (which this author has tried) and now apparently available in an iOS version[7], with a clone available for Android[8].  Those touch-device implementations don’t require full-body movement like the game proposed here does, though from their descriptions they do seem to require some device movement, recognising hand gestures (possibly using Dynamic Time Warping (DTW) and Hidden Markov Models[9], which were considered for the implementation of rotatengine.js but deemed not a good fit, as they seem best suited to sensing spontaneous, short-lived movements, while the engine discussed here requires continuous monitoring of movement and position – more on that below).

Implementation

Rotatengine depends on access to the device's sensors in order to position the game world (cylinder or sphere) according to the player's movements.  While the engine could be implemented for one particular platform, for example iOS, in code native to it, thus gaining the highest possible speed, the decision was made to try to target more than one, if not all, platforms by using JavaScript, HTML5 and CSS3, run in each platform's web view.

JavaScript is increasingly gaining access to the graphics acceleration hardware on the platforms where it runs.  Though the performance of applications implemented in JavaScript and HTML is still, and probably always will be, below that of native applications, the graphics requirements of rotatengine.js are quite modest, so it might not gain much from a native implementation – investigating that possibility is currently outside the scope of this project.

Visual rendering and interaction

At the current stage of implementation, the focus has been on rendering elements in the desired manner and on the interaction with those rendered elements through the player's circular movements.

Choice of technology

The game world provided by rotatengine.js must in some way give a three-dimensional illusion, where the player senses that he is the pivot around which game elements rotate as he spins in circles.  Modern desktop browsers provide the WebGL 3D graphics programming interface (API), which uses the HTML5 canvas element and provides code execution on the computer's Graphics Processing Unit (GPU).  That could be an obvious choice for an engine that manages content in 3D.  Mobile devices, however, have lacked support for WebGL[10]; though the most recent versions of the web views on some platforms now support it, iOS is a notable exception[11].  So to reach the widest range of mobile devices, it is worthwhile to consider other options for rendering 3D content.

The CSS3 specification includes a module that defines 3D transforms, which “allows elements styled with CSS to be transformed in two-dimensional or three-dimensional space”[12], and CSS3 3D transforms are supported on most recent mobile platforms[13].  Given that wide support, and the simplicity of arranging various types of HTML elements – letters and images, as in the game ideas listed above – with the same set of CSS rules, basing the visual implementation of rotatengine.js on CSS3 3D transforms seems a good choice.

Interactive animation mechanics

A couple of approaches have been considered to manage the objects in the game world as the player interacts with it.

The first approach considered is inspired by the Cover Flow[14] graphical user interface: off-screen game objects would wait in line, decoupled from the animation mechanism, until they are next on screen.  Then they would be coupled with the animation and flow across the screen, until decoupled again at the other end.  The objects would be incrementally skewed to either side and translated in distance from the viewer as they flow across the screen, to simulate a cylindrical / circular 3D effect.  This approach could have the benefit of allowing virtually infinitely many game objects, waiting in the off-screen queues for the player to pass by them as he rotates; more than one full circle could then be required to display all objects.

The Cover Flow interface[15]

Another approach is to arrange the elements on a circle using the trigonometric functions sine and cosine.  This is the method currently implemented: each item is assigned an amount of radians by dividing the radians in a full circle (2 * π) by the number of game elements.  Then the x and z coordinates are calculated for each element by applying cosine and negative sine, respectively, to its radian value multiplied by the integer value of its sequential order.  The coordinates are multiplied by a radius, which is itself a scaling of the view's width, so the elements are spread evenly around a circle roughly double the width of the scene.  Each item is rotated individually by its radian value minus π / 2 to have it face the inside of the circle.  The radius value is added to each z coordinate to place the viewer inside the circle, close to its perimeter.  A simplified version of the relevant code follows:

       var perspective = viewWidth / 2;
       var radius = viewWidth * 1.2;
       // the CSS perspective() function needs a length unit, hence the "px"
       container.css({"transform": "perspective(" + perspective + "px)"});
       items.each(function(i){
            // let's go clockwise, thus fullCircleRadians - ...
            var thisItemRadians =
               fullCircleRadians - (self.radiansPerItem * i) + viewRotation;
            var x = Math.cos(thisItemRadians) * radius;
            var z = - Math.sin(thisItemRadians) * radius;
            var transform =
                "perspective(" + perspective + "px) " +
                "translateZ(" + (z + radius) + "px) " +
                "translateX(" + x + "px) " +
                "rotateY(" + (thisItemRadians - (Math.PI/2)) + "rad)";
            $(this).css({"transform": transform});
       });

Initial testing of this game element rendition was performed in a desktop browser, where interaction input was read from mouse movements and keyboard presses.  One animation anomaly became apparent during this testing: if the CSS3 transition-duration was set too high and fast movements were made, the elements would spin around themselves and take a shorter path to their new destination, across the circle's area, instead of animating smoothly along its perimeter.  Setting the duration to 0.1 seconds solved this for most animation speeds; zero seconds worked as well, but sacrificed a little animation smoothness.

Here the radius is subtracted from the z value, instead of adding to it, to have the viewer / camera outside the circle with a full view of it.

When most flaws had been ironed out and the elements were animating in the intended manner in a desktop browser with mouse interaction, it was time to proceed to testing on mobile devices with input from their sensors.

Differences between sensors and platforms

Mobile devices commonly offer three types of sensors[16] that provide information on their orientation – magnetometer (compass), accelerometer and gyroscope – and together those sensors can be referred to as an Inertial Measurement Unit (IMU)[17].  Of these, the compass is a straightforward choice for input to the rotatengine.js game world, as it provides information on the device’s heading in degrees.

Magnetometers on mobile devices can be inaccurate when used indoors, due to interference from the building[18].  That would not be problematic for rotatengine.js as it does not need an accurate measurement of heading, but rather a responsive and mostly stable and consistent reading of where the device is pointed.

Apache Cordova[19], the open source engine behind PhoneGap, was used to package rotatengine.js into applications to run on the Android and iOS mobile platforms.  NetBeans, with its recent Cordova support, was used to manage that process[20].

Rotatengine on the iOS simulator, run through the built-in Cordova support in NetBeans.

A sample application of rotatengine.js running on Android and iOS, receiving periodic heading input via the Cordova API[21], performed correctly but not with the desired responsiveness: when moving around in circles, holding the device with arms stretched out, the updates of the game world objects were quite slow and somewhat jittery, equally so on both the Android (Nexus 4) and iOS (iPad 2) devices.  See video recordings of the tests:

Though Cordova's API offers the option to specify the update frequency of the compass, further debugging showed that updates were in fact being delivered about every 200 milliseconds, even though a 50 ms update interval was requested.  This seems to be a hardware limitation on both platforms tested, and a 200 ms interval is unacceptably long for smooth animation and responsive updates.

A simple attempt was made to extrapolate future changes in rotation at shorter intervals, from the previous update interval, until the next update arrived from the compass.  The future changes are calculated by dividing the delta of the previous two compass updates by four, and the scene is rotated by that fraction every 50 milliseconds.  Those intermediate updates from extrapolated values did not bring much improvement when run on a device, and, as expected, an initial halt in the animation is visible when rotation starts or changes direction while two compass readings are being collected (at their 200 ms interval) for extrapolation.  See video recording of the test:

Testing raw compass sensor data as input to rotatengine.js

Gyroscopes have recently become available as sensors in mobile phones, first in the iPhone 4[22] and then in various Android devices.  This kind of sensor gives information on orientation changes[23], so it was the next option as input to rotatengine.js.  Here Cordova's API was bypassed, as HTML5 provides direct access to updates from the gyroscope (via a callback function attached to window.ondeviceorientation)[24].  The device's heading is read from one of the three values returned by the sensor (alpha), to which the scene is rotated.

The gyroscope on the iOS devices tested, iPhone 4s and iPad 2, proved to be quite stable, and its raw data readings provided smooth updates of the game objects, even without any CSS3 transition-duration and its smoothing effect of animation easing[25].  See test video recording:

The same can not be said about the results from the Android device tested (Nexus 4), where orientation updates were very jittery and unstable; even when the device was held still, the game objects would jump around constantly.  See test video recording:

Using the gyroscope with the current implementation would be adequate when targeting iOS devices, but if the aim is a more cross-platform solution, the raw sensor data needs to be handled in some manner that provides both stability and responsiveness.  One approach considered was to implement a basic low-pass filter[26] on the data returned from the sensors, but having something to filter first requires accumulating sensor data to apply the filter on, and that results in an initial delay and a lack of responsiveness.

A viable approach for using the sensors to update game object positions in a responsive and stable manner, on multiple platforms, could be to fuse the information from multiple sensors[27] and apply advanced filtering on the data, with statistical estimates of the real heading.  The gyroscope and accelerometer can be combined to form a 6-axis inertial sensor (roll, pitch, yaw and vertical, lateral, longitudinal acceleration).  Two prominent methods for integrating gyro and accelerometer readings are the Kalman filter[28] and the Complementary filter; the latter is easier to understand and implement[29] and is said to give similar results[30].  Also, the lighter implementation of the Complementary filter requires less processing power, an important consideration for battery consumption on mobile devices.

The next step in developing the rotational interaction with game elements in rotatengine.js would be to apply this fusion of the gyroscope and accelerometer with either the Kalman or Complementary filter[31].  The Complementary filter’s simplicity makes it a tempting first choice.
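As a starting point, a Complementary filter for the heading could be sketched like this (not part of the current implementation; the parameter value is illustrative and wrap-around at 0/360 degrees is ignored for brevity):

    // Integrate the gyroscope rate for responsiveness and let a slow
    // correction from an absolute reading (e.g. the compass) counteract drift.
    var fusedHeading = 0;
    var ALPHA = 0.98;  // weight of the integrated gyro signal

    // dt: seconds since last update, gyroRate: rotation rate in deg/s,
    // absoluteHeading: compass (or accelerometer-derived) heading in degrees
    function updateHeading(dt, gyroRate, absoluteHeading) {
        fusedHeading = ALPHA * (fusedHeading + gyroRate * dt)
                     + (1 - ALPHA) * absoluteHeading;
        return fusedHeading;
    }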

Next steps

Further development of rotatengine.js includes reacting to touch interaction with game elements: events would be fired that a game implementation can register for, and different element behaviours could be defined for those events, like animating to a different place or disappearing.

Also, the game world could be rendered in a spherical fashion, in addition to the cylindrical one, where game elements would be navigated by tilting the device up and down, along with the basic circular motion.  Options for this kind of spherical world could include defining how many layers of spheres surround the player and whether the spheres shrink towards the player, to allow for a more dynamic and engaging environment.

Modularization

In the first phase of the implementation discussed here, attention to code structure has been minimal and has at most consisted of encapsulating functionality in object literals and function closures[32].  For further development, the module pattern[33] has been considered as a way to organise the code, and the mediator pattern[34] is a candidate for inter-module communication.
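To illustrate, a minimal sketch of these two patterns combined might look as follows (the module names are hypothetical, not the actual rotatengine.js structure):

    // A minimal mediator: modules communicate via named channels.
    var mediator = (function () {
        var channels = {};
        return {
            subscribe: function (channel, fn) {
                (channels[channel] = channels[channel] || []).push(fn);
            },
            publish: function (channel, data) {
                (channels[channel] || []).forEach(function (fn) { fn(data); });
            }
        };
    })();

    // An input module publishes heading updates...
    var inputModule = (function () {
        window.ondeviceorientation = function (event) {
            mediator.publish('heading', event.alpha);
        };
    })();

    // ...and a render module reacts to them, without the two modules
    // knowing about each other directly.
    var renderModule = (function () {
        mediator.subscribe('heading', function (heading) {
            // rotate the scene to the new heading (omitted)
        });
    })();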

The module pattern fits well into the Entity component system (ECS)[35] design pattern, which favors composition over inheritance.  When designing a game engine that may handle entities that have many different variations, an object-oriented approach may lead to “deep unnatural object hierarchies with lots of overridden methods”[36].  A game engine based on the ECS pattern, however, “provides the ultimate flexibility in game design” where you “mix and match the pre-built functionality (components) to suit your needs”[37].

With rotatengine.js based on the ECS pattern, a game using it can define behaviour and functionality by referencing different modules, for example specifying how game objects respond when a player taps on them.  Work on the filtered sensor fusion described above should continue within a separate module that can, when implemented, be swapped with the direct, unfiltered single-sensor reading implementation currently in use.
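The composition idea can be sketched briefly (the component names here are purely illustrative):

    // Entities are plain data with components attached; systems act on
    // entities that carry the components they care about.
    var entities = [
        { id: 1, position: { angle: 0 },  tappable: { onTap: 'disappear' } },
        { id: 2, position: { angle: 90 } }  // no tappable component
    ];

    // A tap system only affects entities that have a tappable component.
    function tapSystem(tappedId) {
        entities.forEach(function (entity) {
            if (entity.id === tappedId && entity.tappable) {
                console.log('entity ' + entity.id + ' responds: ' + entity.tappable.onTap);
            }
        });
    }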

For managing modules within rotatengine.js, the Asynchronous Module Definition (AMD) API[38] has been chosen, as it works better in the browser / webview than other JavaScript module APIs like CommonJS, and “the AMD execution model is better aligned with how ECMAScript Harmony modules are being specified. … AMD’s code execution behaviour is more future compatible”[39].  Work has started on modularising the current implementation, which can be seen in the project’s js/rotatengine directory.
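A module under js/rotatengine could then take a shape along these lines (assuming RequireJS or a compatible loader; the module names are illustrative):

    // Define an input module that depends on a mediator module.
    define('rotatengine/input', ['rotatengine/mediator'], function (mediator) {
        return {
            start: function () {
                window.ondeviceorientation = function (event) {
                    mediator.publish('heading', event.alpha);
                };
            }
        };
    });

    // Load and start the module.
    require(['rotatengine/input'], function (input) {
        input.start();
    });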

Data schema and automatic content generation interfaces

Each level instance in rotatengine.js contains game objects created from data defined in a JSON configuration file.  For now, the structure of that data is arbitrary, chosen to have something running as quickly as possible.  But when the engine is to be ready for use by a party not involved in its development, some means is needed to communicate the required structure of the configuration files.

To define the structure of configuration files for rotatengine.js, the JSON Schema specification[40] has been considered.  To ease the creation of a schema definition, tools exist, based on that specification, that generate a definition from an example data file[41].
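As an illustration only – the actual structure is still arbitrary, as noted above – a schema for a level configuration might begin like this:

    {
        "$schema": "http://json-schema.org/draft-04/schema#",
        "title": "rotatengine level (hypothetical structure)",
        "type": "object",
        "properties": {
            "objects": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "image": { "type": "string" },
                        "angle": { "type": "number" }
                    },
                    "required": ["image", "angle"]
                }
            }
        }
    }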

Declaring data files based on the schema could be done manually, by reading the schema and carefully adhering to it in a text editor.  A specialised user interface for entering level data according to the schema could help a level designer get up to speed.  User interfaces can be automatically generated from a schema definition, and the Metawidget object/user interface mapping tool[42] could be a helpful choice in that regard, with its JSON Schema UI Generator[43].  With that tool, the engine could include simple web pages that allow data entry resulting in a JSON string conforming to the given JSON Schema[44], which could then be saved to a configuration file ready for use[45].

 

Conclusion

rotatengine.js is a specialised engine designed for limited game mechanics, where the player is to spin around in circles and interact with game objects as they rotate around them.  But within that limitation it is possible to conceive diverse types of games, as the examples above show.  The engine will hopefully lead to a unique variety of play and games.

As of this writing, the runnable code can be initiated by opening the index.html file in the project’s root, or by compiling it into a mobile application with Apache Cordova or Adobe PhoneGap.  As mentioned previously, work has started on modularizing that same code in the js/rotatengine directory, but it is currently not in a runnable state.

The project can be found on the attached optical disk and online at:

https://github.com/bthj/rotatengine.js

 

IT University of Copenhagen
Game Engines course – instructor:  Mark J. Nelson
autumn 2013
Björn Þór Jónsson (bjrr@itu.dk)


[1] “Ilinx is a kind of play” that “creates a temporary disruption of perception, as with vertigo, dizziness, or disorienting changes in direction of movement.”  http://en.wikipedia.org/wiki/Ilinx

[2] First of Five Tibetan Rites involves spinning around “until you become slightly dizzy”:
http://en.wikipedia.org/wiki/Five_Tibetan_Rites#First_Rite

[3] “Sufi whirling is a form of … physically active meditation … spinning one’s body in repetitive circles, which has been seen as a symbolic imitation of planets in the Solar System orbiting the sun”

http://en.wikipedia.org/wiki/Sufi_whirling

[4] “How Spinning Around in a Circle Like a 4-year-old Child will Skyrocket Your Weight Loss Success”
http://www.weightlossguideforwomen.com/spinreport/SpinReport.pdf

[5] Physical activity with toys in object play:  http://en.wikipedia.org/wiki/Toy#Physical_activity

[6] Bop It audio game / toy:  http://en.wikipedia.org/wiki/Bop_It

[7] Bop It™ for iOS:  https://itunes.apple.com/app/bop-it!/id395307733?mt=8

[8] Tap It! for Android:  https://play.google.com/store/apps/details?id=com.nomnom.tapit&hl=en

[9] A plethora of articles can be found on the subject of using Dynamic Time Warping and Hidden Markov Models for recognising gestures, for example:

“Smartphone-enabled Gestural Interaction with Multi-Modal Smart-Home Systems”

http://tilowestermann.eu/files/Diplomarbeit.pdf

“Online Context Recognition in Multisensor Systems using Dynamic Time Warping”

http://dro.deakin.edu.au/eserv/DU:30044625/venkatesh-onlinecontext-2005.pdf

“Motion-based Gesture Recognition with an Accelerometer”

https://code.google.com/p/accelges/downloads/detail?name=ThesisPaper.pdf

“A Novel Accelerometer-based Gesture Recognition System”

https://tspace.library.utoronto.ca/bitstream/1807/25403/3/Akl_Ahmad_201011_MASc_thesis.pdf

“Gesture Recognition with a 3-D Accelerometer”

http://www.cs.zju.edu.cn/~gpan/publication/2009-UIC-gesture.pdf

“uWave: Accelerometer-based personalized gesture recognition and its applications”

http://sclab.yonsei.ac.kr/courses/10TPR/10TPR.files/uWave_Accelerometer_based%20personalized%20gesture%20recognition%20and%20its%20applications.pdf

“Improving Accuracy and Practicality of Accelerometer-Based Hand Gesture Recognition”

http://www.iuiconf.org/Documents/IUI2013-WS-Smartobjects.pdf

“Using an Accelerometer Sensor to Measure Human Hand Motion”

http://www-mtl.mit.edu/researchgroups/MEngTP/Graham_Thesis.pdf

[10] CocoonJS is an interesting technology that makes up for the lack of WebGL support by bridging into native OpenGL libraries:  https://www.ludei.com/cocoonjs/

[11] Compatibility table for support of WebGL in desktop and mobile browsers:  http://caniuse.com/webgl

[12] CSS Transforms Module Level 1:  http://www.w3.org/TR/css-transforms-1/

[13] CSS3 3D Transforms support:  http://caniuse.com/#feat=transforms3d

[14] Cover Flow GUI:  http://en.wikipedia.org/wiki/Cover_Flow

[15] Cover Flow image source:  http://en.wikipedia.org/wiki/File:Coverflowitunes7mac.png

[16] “A Survey of Mobile Phone Sensing”  http://www.cs.dartmouth.edu/~campbell/papers/survey.pdf

[17] Inertial measurement unit:  http://en.wikipedia.org/wiki/Inertial_Measurement_Unit

[18] Interestingly, disturbances in the Earth’s magnetic field within buildings can be used to advantage when positioning devices within them:

“Startup Uses a Smartphone Compass to Track People Indoors”

http://www.technologyreview.com/news/428494/startup-uses-a-smartphone-compass-to-track-people-indoors/

“Indoor Positioning Using a Mobile Phone with an Integrated Accelerometer and Digital Compass”

http://xldb.fc.ul.pt/xldb/publications/Pombinho.etal:IndoorPositioning:2010_document.pdf

“Making Indoor Maps with Portable Accelerometer and Magnetometer”

http://www.ocf.berkeley.edu/~xuanyg/IndoorMap_UPINLBS2010.pdf

[19] “Apache Cordova is a set of device APIs that allow a mobile app developer to access native device function such as the camera or accelerometer from JavaScript“:  http://cordova.apache.org

[20] NetBeans 7.4 can build an HTML5 project as a native Android or iOS application:  http://wiki.netbeans.org/MobileBrowsers#Cordova_2

[21] Cordova API: At a regular interval, get the compass heading in degrees:

http://cordova.apache.org/docs/en/3.2.0/cordova_compass_compass.md.html#compass.watchHeading

[22] Steve Jobs demonstrates the iPhone 4’s gyroscope capabilities:  http://www.youtube.com/watch?v=ORcu-c-qnjg

[23] “A gyroscope measures either changes in orientation (regular gyro or integrating rate gyro) or changes in rotational velocity (rate gyro)”  - http://electronics.stackexchange.com/questions/36589

[24] DeviceOrientation Event Specification:  http://www.w3.org/TR/orientation-event/

[25] transition-timing-function:  https://developer.mozilla.org/en-US/docs/Web/CSS/transition-timing-function

[26] Example of a low-pass filter for smoothing sensor data:  http://stackoverflow.com/a/5780505/169858

[27] Google Tech Talk:  “Sensor Fusion on Android Devices: A Revolution in Motion Processing”  http://www.youtube.com/watch?v=C7JQ7Rpwn2k

[28] “The Kalman filter, also known as linear quadratic estimation (LQE), is an algorithm that uses a series of measurements observed over time, containing noise (random variations) and other inaccuracies, and produces estimates of unknown variables that tend to be more precise than those based on a single measurement alone.“  http://en.wikipedia.org/wiki/Kalman_filter

[29] “Reading a IMU Without Kalman: The Complementary Filter”  http://www.pieter-jan.com/node/11

“Android Sensor Fusion Tutorial”  http://www.thousand-thoughts.com/2012/03/android-sensor-fusion-tutorial/

[30] “I have used both of them and find little difference between them. The Complimentary filter is much easier to use, tweak and understand. Also it uses much less code…”  – http://www.hobbytronics.co.uk/accelerometer-gyro

“In my opinion the complementary filter can substitue the Kalaman filter. It is more easy, more fast. The Kalman filter is the best filter, also from the theorical point of view, but the its complexity is too much….”  – http://letsmakerobots.com/node/29121

“The Balance Filter – A Simple Solution for Integrating Accelerometer and Gyroscope Measurements for a Balancing Platform”  http://web.mit.edu/scolton/www/filter.pdf

[31] An interesting practical application of the Kalman filter is the Android app Steady compass, which has the description that “sensor data is treated by a Kalman filter in order to obtain superior stability in readings”:

https://play.google.com/store/apps/details?id=com.netpatia.android.filteredcompass

[32] “JavaScript Patterns & Grokking Closures!”  http://www.unicodegirl.com/javascript-patterns-and-closure.html

[33] “JavaScript Module Pattern: In-Depth”  http://www.adequatelygood.com/JavaScript-Module-Pattern-In-Depth.html

The Module Pattern:  http://addyosmani.com/resources/essentialjsdesignpatterns/book/#modulepatternjavascript

[34] The Mediator Pattern:  http://addyosmani.com/resources/essentialjsdesignpatterns/book/#mediatorpatternjavascript

[35] Entity component system:  http://en.wikipedia.org/wiki/Entity_component_system

[36] “Anatomy of a knockout”  http://www.chris-granger.com/2012/12/11/anatomy-of-a-knockout/

[37] “Goo Engine Hello World Tutorial”  http://www.gootechnologies.com/learn/engine/tutorials/hello-world/

[38] AMD:  https://github.com/amdjs/amdjs-api/wiki/AMD

[39] “Why AMD?”  http://requirejs.org/docs/whyamd.html

[40] JSON Schema:  http://json-schema.org

[41] JSON Schema.net - http://www.jsonschema.net

[42] Metawidget:  http://metawidget.org

[43] JSON Schema UI Generator:  http://blog.kennardconsulting.com/2013/04/json-schema-ui-generator-metawidget-v33.html

[44] Generate UI from JSON:  http://blog.kennardconsulting.com/2013/07/generate-ui-from-json.html

[45] An HTML5 saveAs() FileSaver implementation:  https://github.com/eligrey/FileSaver.js

Loading, Editing, and Saving a Text File in HTML5 Using Javascript:  http://thiscouldbebetter.wordpress.com/2012/12/18/loading-editing-and-saving-a-text-file-in-html5-using-javascrip/

 

Appendix

 

Other smaller projects in the Game Engines class:

Project 1:  Platformer game engine

Plafgine

For the first programming assignment in Game Engines I’ve implemented a platformer engine in JavaScript and used the Canvas element in HTML5.

The main components of the engine are four pseudo-classes / functions (JavaScript only has the notion of functions):  Player, Platform, GameLoop and GameOver.  Those classes and supporting variables / data are encapsulated in another function called Plafgine (plat-former-engine) for closure.

At the top of plafgine.js are a few configuration variables to define the level, which could be factored into another file:

  • defaultPlayerPosition:  Defines the size of the main player and where it is positioned initially.

  • enemyPositions:  Positions for the enemies and their sizes.

  • platformDefinitions:  Placement of the platforms.

  • defaultEnemyAutomation:  This is perhaps an interesting experiment: using the dynamic nature of JavaScript to make enemy behaviour configurable by plugging in a function that implements their movement.

There are no real physics in this platformer; it rather implements pseudo-physics by starting a jump at a fixed speed and then decreasing it on each game loop iteration until the jump speed reaches zero, after which a fall speed is incremented until a collision with a platform or the ground happens.  Those are the jump, checkJump, checkFall and fallStop functions within the player object.
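A rough sketch of that pseudo-physics, with illustrative names rather than the actual Plafgine functions:

    var JUMP_START_SPEED = 12;  // illustrative value
    var jumpSpeed = 0, fallSpeed = 0;

    function jump() { jumpSpeed = JUMP_START_SPEED; }

    function checkJump(player) {
        if (jumpSpeed > 0) {
            player.y -= jumpSpeed;  // move up by the current jump speed
            jumpSpeed -= 1;         // decrease on each game loop iteration
        }
    }

    function checkFall(player) {
        // onPlatformOrGround is a hypothetical collision check
        if (jumpSpeed === 0 && !onPlatformOrGround(player)) {
            player.y += fallSpeed;
            fallSpeed += 1;         // fall speed increments until collision
        } else {
            fallSpeed = 0;          // fallStop: landed on platform or ground
        }
    }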

Different instances of the same Player implementation are used for both the main player and the enemies (NPCs); their configurations differ in whether they are automated, and then in the added behaviour function mentioned above.  Some sort of class inheritance could have been a good idea here.

The collision detection is as inefficient as can be: player collisions are checked against all platforms and all other players on each game loop iteration.  Spatial partitioning of what to check against would of course be better for any reasonably sized level.

Control of the main character is handled by registering pressed keys into a state variable, then reading those states on each game loop iteration (in the player implementation) and moving the character accordingly.
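In outline (the key codes and speed property are illustrative):

    // Register pressed keys into a state object...
    var keysDown = {};
    window.addEventListener('keydown', function (e) { keysDown[e.keyCode] = true; });
    window.addEventListener('keyup', function (e) { delete keysDown[e.keyCode]; });

    // ...and read the states on each game loop iteration.
    function handleInput(player) {
        if (keysDown[37]) { player.x -= player.speed; }  // left arrow
        if (keysDown[39]) { player.x += player.speed; }  // right arrow
    }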

Camera movement is implemented by keeping the main character still and moving all other game world objects in the opposite direction once the character has reached a certain threshold on the screen.  That threshold is in fact centred on the screen, so the character is pretty much always in the horizontal centre.  There are glitches in this camera movement that can almost always be reproduced by jumping from the highest platform; the player then doesn’t fall completely to the ground, but that can be fixed by jumping again! – the cause of this should be investigated in another iteration.

Timing is used to determine when to update player sprites based on their activity; when a predefined amount of time has elapsed the portion of the sprite to render is updated.

The same goes for the lives bookkeeping: only when a set interval has elapsed can the count of lives be decreased, so all lives don’t disappear instantly when multiple character collisions fire.  If the main character hits an enemy he loses one life, and loses another if he is still hitting an enemy when the set interval has elapsed.  Unless the main player hits the enemy from the top – jumps on top of him – then the enemy gets killed.

The GameLoop function / class then calls itself repeatedly via setTimeout until all lives have been lost – at which point GameOver is called – and in each round it updates player positions, either from input or automation, checks collisions, applies any pseudo-physics updates and then draws all players and platforms.
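Schematically (the function names here are descriptive stand-ins for the actual implementation):

    function gameLoop() {
        updatePlayers();        // from input or automation
        checkCollisions();
        updatePseudoPhysics();  // jump and fall bookkeeping
        drawAll();              // players and platforms
        if (livesRemaining > 0) {
            setTimeout(gameLoop, 1000 / 30);  // schedule the next round
        } else {
            gameOver();
        }
    }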

The game seems to run more smoothly in WebKit-based browsers like Chrome or Safari than in Firefox, for example.

Code:  https://github.com/bthj/Game-Engines—projects/tree/master/Game%20Engines%20-%20Project%201%20-%20Platformer%20game%20engine

 

Project 2:  Wireframe renderer

Implementing the 3D wireframe renderer was pretty much straightforward after reading through the given overview of the 3D transform process[1] for the second time and realising that what is needed is basically to compute three matrices, multiply them together and use the resulting matrix to multiply each vertex as a column vector, as shown in the given pseudocode.

The implementation consists of an HTML5 file, index.html, which includes jQuery Mobile for easy inclusion of a few UI widgets, and the wireframe rendering code in 3dWireframeRenderer.js.

Three functions map to the sections of the overview (a sketch of the combined pipeline follows below):

  • getCameraLoacationTransformMatrix implements what is described in section 3.1.1, Setting the camera location,

  • getCameraLookTransformMatrix returns the matrix described in section 3.1.2, Pointing and orienting the camera,

  • and the projection matrix from section 3.2 comes from the function getPerspectiveTransformMatrix.
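Put together, the pipeline amounts to something like this sketch (multiply, multiplyVector and drawVertex are stand-ins for the actual helpers, and the parameter lists are assumptions):

    // Combine camera location, camera orientation and projection into one
    // matrix, then transform each vertex as a column vector.
    var transform = multiply(
        getPerspectiveTransformMatrix(near, far, width, height),
        multiply(getCameraLookTransformMatrix(look, up),
                 getCameraLoacationTransformMatrix(cameraPosition)));

    mesh.vertices.forEach(function (vertex) {
        var v = multiplyVector(transform, [vertex.x, vertex.y, vertex.z, 1]);
        var screen = { x: v[0] / v[3], y: v[1] / v[3] };  // perspective divide
        drawVertex(screen);
    });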

The test meshes are in meshdata.js; I got them from http://threejs.org/editor/ and its export function.  One of the biggest difficulties was deciphering the faces array in the JSON export from there, but then I found some documentation[2] on where the vertex indices are located, and the function getTriangleFaces does the job of extracting the vertices for each face.

When I had the function renderWireframe (basically like the given pseudocode) drawing vertices (I have them cover four pixels for clarity) and connecting them with lines, I had some difficulty finding a good combination of near and far values and camera Z position.  Adding sliders for those values in the UI helped, but near clipping seems to happen quite far away from the camera – I haven’t found a combination of near, far and camera Z position that allows the object to come near the camera without clipping.  Except, if I reverse the near and far values, for example set near to -300 and far to -10, and the camera Z position to 150, then the object (cube) renders happily close to the camera; is that a feature of the transformation matrices or a bug in my implementation?  I don’t know…

The camera movement could be connected to the arrow / WASD keys and the mouse, but seeing clearly the interplay between camera XYZ and the near and far planes is of most interest here, so I’ll let those sliders suffice.

I tried deriving the width and height from a given FoV, near-plane position and aspect ratio, as discussed in the overview, but that didn’t play well with my UI so I abandoned it; what I tried can be seen in the function getWidthAndHeightFromFOV.

Code:  https://github.com/bthj/Game-Engines—projects/tree/master/Game%20Engines%20-%20Project%202%20-%203d%20wireframe%20renderer

 

[1] “Overview of transforms used in rendering”  https://blog.itu.dk/MGAE-E2013/files/2013/09/transforms.pdf

[2] three.js JSON Model format 3.1:  https://github.com/mrdoob/three.js/wiki/JSON-Model-format-3.1

 

Project 3:  Pathfinding

For this project I started by reading a web article titled A* Pathfinding for Beginners for a description of how the algorithm proceeds, but then I based the implementation pretty much verbatim on the pseudocode in the Wikipedia entry for the A* algorithm, along with supporting functions.
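The core of that pseudocode can be condensed into a sketch like the following (a compact variant with a linear scan instead of a priority queue, and no closed set, for brevity; not the literal pathfinding.js code):

    function aStar(start, goal, neighbours, heuristic) {
        var open = [start], cameFrom = {};
        var g = {}; g[key(start)] = 0;
        var f = {}; f[key(start)] = heuristic(start, goal);

        while (open.length > 0) {
            // pick the open node with the lowest f score
            open.sort(function (a, b) { return f[key(a)] - f[key(b)]; });
            var current = open.shift();
            if (key(current) === key(goal)) {
                return reconstructPath(cameFrom, current);
            }
            neighbours(current).forEach(function (n) {
                var tentative = g[key(current)] + 1;  // uniform step cost
                if (g[key(n)] === undefined || tentative < g[key(n)]) {
                    cameFrom[key(n)] = current;
                    g[key(n)] = tentative;
                    f[key(n)] = tentative + heuristic(n, goal);
                    if (!open.some(function (o) { return key(o) === key(n); })) {
                        open.push(n);
                    }
                }
            });
        }
        return null;  // open set empty: no path found
    }

    function key(node) { return node.x + ',' + node.y; }

    function reconstructPath(cameFrom, current) {
        var path = [current];
        while (cameFrom[key(current)]) {
            current = cameFrom[key(current)];
            path.unshift(current);
        }
        return path;
    }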

The implementation can be run by opening the index.html file in a recent browser (I’ve tried Chrome and Firefox) and pressing the Start button.  The chasing agent is represented by the letter C and the goal or target is represented by T.

When the target is moving, a run can end without the target being reached, when the open set becomes empty (line 145 in pathfinding.js), but it can also end with the target reached.  It could be worth looking at the D* algorithm for the moving-target case.

The target moves randomly one step at a time, horizontally, vertically or diagonally, except when it is the chasing agent’s (C) immediate neighbour; then it tries to choose a move that does not leave it as the chaser’s neighbour, which is not always possible (when the target is on the grid’s perimeter – see the last condition in the do…while loop at line 198).  The target can be kept fixed in its starting position by unchecking the checkbox (Target (T) wanders and tries to avoid) in the interface.

Code:  https://github.com/bthj/Game-Engines—projects/tree/master/Game%20Engines%20-%20Project%203%20-%20Pathfinding

Nefna

Final project for a Baccalaureus Scientiarum degree in computer science
http://hdl.handle.net/1946/15389

In the spring of 2012, Edda Lára and I found out that we were expecting a baby.  Almost automatically, the idea came up to implement an app for browsing and finding Icelandic personal names, since I am a programmer who solves problems by writing programs and had lately been focusing on app development.  The idea is not exactly original, as apps for Icelandic personal names had appeared before, such as nafn.is and ungi.is, and my friend Bjarni Þór implemented one such app that was never published, while the app stores hold a heap of name apps for names other than Icelandic ones.  But an app with Icelandic personal names was missing, so I took it upon myself to implement one and call it Nefna.

One of the first choices faced when writing an app is which operating system it should target, or whether it should target them all.  All operating systems can be reached by using the web languages HTML/JavaScript/CSS, as was done with Skyldleikur, but I decided to implement this name app specifically for iOS, the operating system that the iPhone, iPad and iPod touch run.  One reason is that I had recently been studying iOS programming in Objective-C and find it enjoyable.  Another is that a survey I conducted in connection with the course From Idea to Reality (Frá hugmynd að veruleika) indicated that the majority of prospective users were most interested in using the app in that environment.  Also, implementing specifically for one operating system yields somewhat more speed and smoothness.  The plan was to also implement the app with web technology and compare the implementations, but that remains undone and may never happen, since usage numbers suggest that nearly the whole target group has already been reached – perhaps people are buying iDevices specifically to be able to use Nefna.

The first task in implementing the app was gathering data for it.  Information about all names approved by the Icelandic Naming Committee is published on the web, and I wrote a script to scrape them into a database.  Statistics Iceland publishes name frequency information on the web, but since I did not want to hammer their search form for the frequency of every name, I sent an inquiry asking whether I could get the information in tabular form, similar to what the US census and social security administration provide; the reply simply pointed me back to the website, so I also wrote a script to fetch the frequency information from there, making it treat the site gently by pausing between requests, as sketched below.
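The pause-between-requests idea can be illustrated with a small node.js sketch (for illustration only; the original scripts and the URLs involved are not shown here):

    // Fetch a list of pages one at a time, pausing between requests so as
    // not to hammer the server.
    var http = require('http');

    function fetchPolitely(urls, delayMs, handle) {
        if (urls.length === 0) { return; }
        http.get(urls[0], function (res) {
            var body = '';
            res.on('data', function (chunk) { body += chunk; });
            res.on('end', function () {
                handle(body);
                // wait before requesting the next page
                setTimeout(function () {
                    fetchPolitely(urls.slice(1), delayMs, handle);
                }, delayMs);
            });
        });
    }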

Simplicity was the guiding principle in the app’s implementation, and one of its main features is easing the composition of double names, implemented with a kind of scanner over the middle of the name list and buttons that determine whether names sliding under the scanner are added as the first or second name on a name card that displays the combinations.  Implementing this feature turned out to be fairly easy in iOS but would probably have been trickier with web technology.  Tapping a name brings up a screen with its meaning and origin; Edda Lára took charge of gathering that information, drawing on many sources on the web, in books and in other apps.

I did the app as my final project in computer science at the University of Iceland and received a ten for it.  After the app went live, Edda Lára took care of publicity, which resulted in good coverage on Pressan and Bleikt.is, in Fréttablaðið and on Vísir.is – on the day Bleikt covered the app, 29 May 2013, just over eight hundred people used Nefna, and the same number when Fréttablaðið and Vísir.is ran their coverage on 11 June.  In total, 3,258 people used the app in June, and 1,986 in July.  About four thousand children are born in Iceland each year, an average of 333 per month, which with great simplification gives just over six hundred interested parents a month searching for names (two parents per child) – of course this is a nine-plus-month process, but the numbers suggest that a large share of the potential target group is already using Nefna, even though it is only available for one operating system.

We took our time finding a name for the boy who came into the world on 11 December 2012, and christened him Kári that spring, on the First Day of Summer, 25 April 2013.  Though the name didn’t exactly come from Nefna, and it isn’t a double name, we at least used the app to explore the possibilities : )

Skyldleikur

This spring, a competition for an Íslendingabók app was held among university students, and though I had little time to spare, entering was quite tempting, since I had resumed my computer science studies aiming for graduation that spring and was therefore enrolled at the University of Iceland.  What I most wanted to implement was a list of the most popular names in a person’s family, with the idea of connecting that feature to Nefna, the app for Icelandic personal names I did as my B.Sc. final project.  I mentioned this idea to the competition’s representatives, and a list of the most popular names has now been implemented in the Íslendingabók web interface.

The technical premises of the competition assumed that solutions would be implemented specifically for the Android operating system.  I had lately been diving into iOS programming, having implemented Síminn’s service app for that operating system as well as Nefna.  So I was not prepared to spend time diving into native Android programming, but with a long background in web programming with JavaScript / HTML / CSS, which can be used to implement apps, I decided to ask the competition committee whether solutions built on such cross-platform web technology could be submitted; the outcome was that such solutions were accepted and added to the competition terms.  I then decided to go for it and registered.  The committee emphasised that each team should include technical people, visual designers and marketers alike.  I registered as an individual and was placed on a team with Einar Jón Kjartansson from the Iceland Academy of the Arts and Hlín Leifsdóttir from the University of Iceland.

After the competition’s opening I sat at the kitchen table at home and considered the possibility of making some kind of game based on Íslendingabók, since Edda Lára and I are aiming for studies in game development.  Though I have my mind set on those studies, I am not much of a game player and have limited experience of game programming, but I find it an enjoyable subject, and among other things Edda Lára and I are interested in combining our backgrounds to create engaging educational material in the form of games – she teaches English at Fjölbrautaskólinn í Ármúla.  So I wondered whether here was an opportunity to try my hand at game making, and it immediately became clear that the generations in the family trees that Íslendingabók is built on could easily serve as levels in some kind of game.  This idea appealed more than those that had come up before, and after mulling it over with Edda Lára I decided to implement a quiz game where each level presents questions about relatives from one generation, with the game Song Pop as a model for the presentation.

Skyldleikur’s interface is built on jQuery Mobile, and the quiz content is produced with calls to the Íslendingabók API in JavaScript, with jQuery easing the work.  Direct calls to the API are not possible when the game is hosted as a website or web app, for security reasons (the same-origin policy), so the web server hosting the game receives the requests and relays them to Íslendingabók.  That web server is implemented on top of node.js and hosted at Nodejitsu.  PhoneGap Build is used to package the game for the app stores – the iTunes App Store and Google Play – and in that environment the security restrictions do not get in the way, so direct AJAX calls to the Íslendingabók API are possible.  Skyldleikur’s source code is kept under version control on GitHub, and it was written in the Brackets editor, which is still in the early stages of its implementation and had its pros and cons; the main advantage was active linting of the code with JSHint (JSLint is too anal), but I did not use the live preview feature the editor mainly advertises, since I used the aforementioned web server for communicating with the web service.
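The relaying idea can be sketched as follows (a minimal node.js sketch, not the actual server; the API host name is a stand-in):

    // Forward incoming requests to the web service host, working around
    // the same-origin policy for the browser-hosted game.
    var http = require('http');

    http.createServer(function (clientReq, clientRes) {
        var proxyReq = http.request({
            hostname: 'api.example.is',  // stand-in for the Íslendingabók API host
            path: clientReq.url,
            method: clientReq.method
        }, function (apiRes) {
            clientRes.writeHead(apiRes.statusCode, apiRes.headers);
            apiRes.pipe(clientRes);
        });
        clientReq.pipe(proxyReq);
    }).listen(8080);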

A few days before the deadline, Einar Jón came up with proposals for the game’s graphics that were very appealing, and I spent the final hours fitting the graphics into the game, which made it much more attractive.  Then it was time to launch the game, and a kind of competition for attention on social media began, as the competition committee placed great weight on marketing, which would be factored into the evaluation of entries.  Though we did reasonably well gathering followers (likes) for Skyldleikur, it was immediately clear that we stood no chance against those who were strongest in this arena of popularity contests.  Among other things, I made a promotional video that got quite a few plays, Einar Jón made a campaign poster, and Hlín posted across the social networks far and wide.  Two media outlets approached us for interviews, for articles that appeared in Séð & Heyrt and The Reykjavik Grapevine.

The result was that Skyldleikur took second place, and when Kári Stefánsson handed us the prize I was pleased to hear him say that he found it the most enjoyable entry in the competition.  The guys who won were impressively diligent in promoting their app in foreign media, which peaked when Jimmy Kimmel released a sketch making fun of their incest-prevention feature; remarkably, this coverage brought a lot of traffic to Skyldleikur, peaking on the day the comedy video made the rounds online, when about seven hundred people visited the game and most played it actively.  Since then there has been steady play activity, for a long while several hundred unique visits per day, which has slowly tapered off; as this is written, the visits are around sixty.  Just over six thousand have downloaded the game from the App Store and just under one thousand from Google Play.  It has been gratifying to hear that people really enjoy it, and it would be fun to add features to the game, for example more varied questions and a dynamic family tree, which was originally in the picture.

There is something special about playing a game that asks questions about yourself.

bthj.is in the clouds

bthj.is is back from summer vacation!  Until now, the site and its neighbours, such as skeytla.bthj.is and alaska.is, have been hosted at home on an ancient machine that has been stuttering a lot lately, probably worn out.  I have now tried setting up a virtual machine of the smallest size on GreenQloud.com, and it is proving sufficient to host this stuff.

I meant to take care of moving the server before relocating to Copenhagen, but there was no time, so I left the machine out of commission.  Once abroad, I could not get into the machine in Iceland, but I was nevertheless able to set up its successor in the cloud, since a backup of the most important material had been made with CrashPlan, from which I could retrieve it.

Using specialised hosting such as WordPress.com was a consideration, but I find it convenient to have access to a Linux machine online, where I can set up any server I like for experimentation and run a script whenever I feel like it, as when I collected eruption photos some years back.  Skeytla runs on Tornado and alaska.is on Django.  I have also dabbled in node.js here and there, and there will probably be more of that.

So a playground with shell access is just the thing, and it is certainly possible to run such a virtual machine in many other clouds, such as Amazon EC2 and Google Compute Engine, but the smallest virtual machine at GreenQloud, Nano, is the cheapest option I have found and will hopefully remain so.  It does not hurt that network traffic within Iceland is free there, and then of course there is the green energy and all that.

Update 2013-08-02:  This shiny new server took to crashing about once a day, and I fancied myself the victim of DDoS attacks, but after finally finding some clues I concluded that a lack of swap space was possibly the problem, and I am hopeful that an attempted fix will solve it.  I had also started thinking about swapping the Apache web server for nginx, which requires far fewer resources, and I may well get around to that at some point, when inclination and necessity call for it.

Update 2013-08-17:  The web server has run without crashing since the swap space was added, but every now and then it has been under load, as a process named check-new-release eats up all the machine’s resources for quite a long time.  Almost a month ago I threw a question out onto the net about this, and now an answer has arrived, which I have followed and which will hopefully suffice to get rid of this unnecessary load.

Update 2013-08-25:  The check-new-release process has made itself known despite the solution above, but the main problem now has been how slow the alaska.is site has been to respond: on average over three seconds for pages that are not cached, and actually a long time too for those coming straight from memcached.  So attention turned to Apache, which is said to be quite the memory hog and thus does not sound like a suitable web server on a virtual machine with 256MB of RAM, nor does its behaviour of spawning a new process for each web request, with a script runtime environment wrapped into each one.  So I had to try nginx, which keeps a predefined number of worker processes running and spins threads off them; I did not believe the difference would be great, but I am astounded to see it after setting nginx up for WordPress and Django with FastCGI – where alaska.is used to answer in over three seconds, the mouseup event has now hardly finished before the web requests have been answered.  This nginx seems to be a terrific piece of kit!

Update 2013-09-07:  Yes, the difference in stability and speed since the switch to nginx is incredible.  According to monitor.us, the average response time went from over three seconds with poor uptime down to around 0.2 seconds with one hundred percent uptime.

Django with the Apache + wsgi setup – 20 August:

Monitoring Location: All

Test name                       Type   Tag      Uptime (%)   Avg resp time (ms)   Failures (#)
alaska.is/teikningar/474_http   http   ALASKA   51.85        3079.28              13
alaska.is_http                  http   ALASKA   25.93        3003.82              20

Django with the nginx + FastCGI setup – 3 September:

Monitoring Location: All

Test name                       Type   Tag      Uptime (%)   Avg resp time (ms)   Failures (#)
alaska.is_http                  http   ALASKA   100          251.22               0
alaska.is/teikningar/474_http   http   ALASKA   100          244.16               0

 

Volcanic time lapse spanning two months

After cleaning up all the still images from the volcano webcam for the eruptions at Fimmvörðuháls and Eyjafjallajökull in 2010, I wanted to assemble a continuous time-lapse video from all that material, and now I have finally done so; the result is a movie-length video covering those two months of eruptions.  Well, actually two videos, from wide and narrow angles, that may be played together:


The source files can be found on archive.org

(I decided not to upload the 18GB DV files generated from the still images with avconv, and converted them to MP4 instead.)

Now to put some popcorn in a bowl…

W3C HTML5 Games course completed

The W3C course Game development in HTML5 has been completed:

Orðasafnasafn

Orðasafnasafn is a search interface for Icelandic terminology banks:

oss.nemur.net

The interface makes it convenient to search, from a single place, the many open terminology banks hosted on institutional websites.  The interface design suits desktop and smartphone screens alike, making it accessible in as many situations as possible.

This pet project sprang first and foremost from my own desire to have such a tool at hand, and hopefully it will prove useful to others and help open up access to the information the terminology banks hold even further.

Having just gotten hold of a so-called smartphone and a tablet, after several years of loyalty to a simple feature phone, I wanted to have such a search aggregator always available.  In fact, it seems I started mulling this over four years ago, according to the change history in the project’s version control; the idea then lay dormant until I picked up the thread again now and used the opportunity to try out the jQuery Mobile framework the interface is built on.

The backend is programmed in Python and built on Google App Engine, where the site is hosted.

I then tried Inkscape for making the logo image.