Developing an Affective Computing AI to assess its applicability to modern computer games — Dissertation Project 2017/2018


The first week of term has started off with the creation of a few concept ideas.
The lecture didn’t fully clarify the expected scope, so further enquiries are being made to lecturers as to whether the initial concept, of recreating top-level athletes’ training machines in VR to see if they can yield the same results as their physical counterparts, would be technologically advanced enough to be used as an investigative piece for the honours project.
A few different lecturers have been contacted to find out who would be interested in the project. Further research into training types is currently being carried out, with the DynaVision D2 board used as the example project type when explaining the project to lecturers.

DynaVision D2 board


A meeting with Euan Demster and subsequent email chains have confirmed what I initially expected: that the project was not technologically advanced enough to be suitable. The initial concept shall therefore be carried on as a side project outwith university, with the aim of releasing the application later on.

Research into types of AI learning is now being undertaken, as well as into the possible applications these could have in the gaming community in both solo and multiplayer environments.
Further study into uses is currently being carried out.


After what has felt like an astoundingly short time, classes for this week have already come and gone. The field of AI is what I have decided my main focus shall be on; however, I currently have three rather different ideas within the subject. These are as follows:

  • Comparing the efficiency of background NPC logic types against the believability of their performance
  • Using an ART 2 network (ART 1 if this proves too difficult) to teach a system, largely unsupervised, how to play a game against another AI that should also learn as it goes
  • Using affective computing to try to influence, or follow, the decisions a person makes when in different emotional states

A meeting with David King is scheduled for tomorrow (Wednesday 13th), to discuss these ideas with him as well as get his take on the feasibility of each.


The meeting with David King was very helpful in establishing a clearer description of the projects I wanted to create, as well as the justification for each idea. After discussing the ideas with him, I was able to decide on two to keep researching and bring forward. The first is the comparison of background NPC logic types, as well as trying to give them a wider range of understanding and responses to a stimulus such as hearing a gunshot. The second is looking into the input types of affective computing, to see exactly how a player can be led to an action, or respond to a choice, depending on the emotion they’re feeling.

David King suggested getting in touch with another lecturer, Robin Sloan, about a project they had previously done involving facial recognition. Sadly this project wasn’t as helpful as I’d hoped for determining emotion; however, Robin did leave me with a large number of helpful resources to keep me going on my path of researching emotion-based facial recognition.

I was also given information by David on a past dissertation that involved using a player’s pulse rate to keep them immersed, by altering the virtual world of a horror game they were playing.

The next step for me is to research input methods for determining emotion, as well as the limitations surrounding them, and to study the different types of AI logic techniques that could give more believable reactions at an efficient rate and be compared.


After a week of research, not only has the final dissertation project type been decided, Affective Computing, but a helpful API has also been found that can be used through Unity to determine emotion using facial recognition. This is a very helpful start, since hand-coding this technique would otherwise take an obscene amount of time.

The name of the dissertation is currently causing problems, since I first need to decide exactly what I want to test with Affective Computing. Another meeting with David King is being considered, in order to discover more applications that could serve as feasible tests on the subject.


After sampling a few names, the title of the project was settled as “To develop an affective computing AI to assess its need in modern day computer games”. Once the title was created, the aims and objectives of the project flowed a lot more easily.

The next week shall be spent in another country, so work on the project shall be slow. However, for the remainder of this week, research into the Affectiva software for facial (and emotional) recognition shall be completed. A test application in the Unity editor, simply taking the emotions in, shall hopefully be created shortly as a prototype and proof of concept that this is viable.

A computer camera has been purchased and should arrive tomorrow; it should be of a high enough calibre to easily aid in the facial recognition requirements. Should this not be the case, further research into possible cameras may be required.


After a rather unproductive week out of the country, a final dash to get back to work has led me to discover a game named “Nevermind”. This game advertises itself as “The first biofeedback adventure game”, which has a nice ring to it. The premise is that it is a horror game using the Affectiva Unity plug-in discussed before, or a heart rate sensor, to tell how scared the player is. The more scared the player, the more difficult the game becomes; but should players calm themselves in the face of their own fear, the game is forgiving, making the level easier to face. I feel this is a game I’ll reference a lot in the coming weeks of further study into the area of Affective Computing.

Additional research into multiple sensory methods has been undertaken as well, with speech recognition being the main backup to facial recognition. However, being Scottish, the idea of speech recognition already gives me some level of fear, given its rather notorious failure rate with the Scottish accent. In the spirit of optimism, a search for a free API to accomplish this is the next step for research.


Sadly, after a week of testing and searching for a speech API, the only ones that seem suitable require pre-recorded audio. This means the likelihood of speech recognition being used is a lot lower than I’d hoped.

On a more positive note, a demo application in Unity using the Affectiva plug-in is now operational, showing the web camera’s feed, as well as the emotions detected and their resulting values. Work is now taking place on deciding the design the level should follow, whether that be using a Skyrim level or creating one from scratch, as well as better defining the ways in which the application could use the intake of emotion to respond more appropriately to a given stimulus.
Although it is early days in the project, this demo application already provides a starting point for building example code as part of the feasibility demo at the end of the semester.


This week has mainly brought work on the project proposal document, to highlight that the project is both feasible and worth investigating. To this end, a lot more research has gone into finding past work that has used affective computing in widespread consumer products/software.

The level designs are now well underway, with the decision made to tailor a self-made level that shall demand interaction with the companion AI, rather than use an existing Skyrim level or equivalent. That concept shall still be kept as a backup plan, however, should modelling a generic AI take too long and take time away from implementing an affective agent.

The next step is to refine and polish the current proposal document, then begin pseudocode of the generic AI and the additional operations the affective computing principles shall add on top of it. Level designs should also begin to be grey-boxed in the Unity game engine.


This week the main emphasis of work was on the project’s proposal document. It saw the completion of a first draft, as well as revisions later in the week to match feedback given by my personal supervisor, David King. A complete second draft has now been finished, with light polishing of the overall document to occur before its submission this coming Tuesday (24th).

More level design work is under way; currently only a single puzzle has been fully decided on. This puzzle shall involve the user having to order the AI to collect and place an object on a button, to keep a door open for both the player and the AI to progress through.

A very generic pseudocode that skims over the generic companion’s operations has been created. However, before implementation, a more thorough breakdown of each step must be created to build the AI correctly.


Greyboxing of the original level design has now taken place in the Unity Engine. This level design is very simple and is intended to get users to understand all of the controls available to them within the forthcoming demo. Simplistic mechanics, including the player movement system, have been implemented into the project.

Further development of the AI’s generic pseudocode is the last task of this week, bar the supervisor meeting with David King tomorrow afternoon. Starting next week, implementation of the generic AI should begin, as well as polishing of the user’s currently implemented movement controls.


I realised that predicting which task should be executed next was taking more time than it should. To reduce this, a simple Gantt chart was created covering the broad topics to be faced within the time scale of the project.

This chart is hoped to increase productivity, although each section shall still require expansion. It should also help in determining whether the project is currently on track, behind in some regard, or ahead of schedule, showing the point at which further development may be something to consider.


The player’s controls have been completed, with a minor initial polish of the core mechanics. New implementation focused on object interaction and AI commands, with matching UI, was also completed.

Raycasting through Unity’s engine allowed for an easy way to gain the exact coordinates of a point on a floor mesh, which the AI shall be able to use through A*. This shall be how the main movement of the AI is calculated. A placeholder function through Unity’s engine may be used to simulate movement for now.

A hectic week of other coursework has led to very minimal work on this project. However, by the end of the week the AI’s implementation should have started, with an emphasis on having the movement system in place by early next week.


The main focus of this week is to implement the movement and very base functionality of the RPG-like companion. Although A* is hoped to be implemented for movement, currently Unity’s NavMesh agent is being used. This is for two reasons. The first is that it allows the project to go ahead without the costly time of implementing A*, arguably one of the bigger sections of the work. The second is that although A* shall be used in the final version, the current movement shows all the requirements needed for the upcoming feasibility demo. This will allow the criteria for the demo to be met, while more time is given to implementing the remainder of the base AI’s features.

The downfall of using Unity’s NavMesh is that objects must be set to static to be included in the mapping system. Since static objects can’t then be moved, objects that are movable, like the interactable cubes in the first level, will ‘break’ the navigation in a sense. A* is hoped to fix this oversight, since it can recalculate the map whenever an object is placed or moved.

The next task to complete for this week is the implementation of the AI’s control system, meaning the instructions it is given by the player.


A lot of time has been taken away from this project, due to other coursework commitments this week. However, a fair amount of progress has been made regardless. 
We’ll start with the search for literature: a couple more academic papers were found for the developing literature review, including one paper by Johansson, A. (2012) named “Affective Decision Making in Artificial Intelligence: Making Virtual Characters With High Believability”. I’d like to highlight this paper since it introduces an interesting concept I had not previously encountered, named emotion maps. The concept of emotion maps simply refers to a different way of pathfinding for an AI, relative to its past experience in areas. For example, should an AI have ‘memories’ of being attacked in one street, it would be less likely to follow that road to a destination again, due to the fear experienced there. This form of pathfinding fascinates me, since it gives the AI a much more human appearance.
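To make the emotion map idea concrete for myself, here is a small sketch of how it could work. This is purely my own illustration, a weighted Dijkstra search over a made-up street graph, not code from Johansson’s paper, and the fear weighting is an assumption:

```python
import heapq

def find_path(graph, fear, start, goal, fear_weight=5.0):
    """Dijkstra search where a node's remembered fear inflates the cost of entering it."""
    frontier = [(0.0, start, [start])]
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for neighbour, base_cost in graph[node].items():
            # An area with bad 'memories' becomes more expensive to enter.
            step = base_cost + fear_weight * fear.get(neighbour, 0.0)
            heapq.heappush(frontier, (cost + step, neighbour, path + [neighbour]))
    return None

# Two routes from home to shop: a short street and a longer detour.
graph = {
    "home":   {"street": 1, "alley": 2},
    "street": {"home": 1, "shop": 1},
    "alley":  {"home": 2, "shop": 2},
    "shop":   {"street": 1, "alley": 2},
}
print(find_path(graph, {}, "home", "shop"))               # → ['home', 'street', 'shop']
print(find_path(graph, {"street": 1.0}, "home", "shop"))  # → ['home', 'alley', 'shop']
```

The effect is that the agent takes the objectively shorter street until a bad memory is recorded there, at which point the longer detour becomes the ‘preferred’ route, which is exactly the human-like behaviour the paper describes.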

Next I’ll quickly cover the application’s improvements. Since last week, the basic RPG-like AI has been almost fully implemented, at least from a core point of view. The AI can now interact with objects and move according to the player’s commands, meaning it can pick up objects and place them down wherever commanded. The test level is almost fully complete as well; it is a very simplistic map following the below level design:

Very simplistic level design for first iteration

The wall in the centre of the design should be partly transparent and appear almost like glass, so the user can issue the AI commands without being able to interact physically with the objects themselves.

A final push on other coursework for the remainder of this week shall hopefully allow for a decent amount of time to work on the project before the upcoming feasibility demo on the 7th of December.


With another module deadline yesterday, work on this project was a secondary objective, so at the midpoint of the week not much has progressed. A brief polish of the example code has, however, made the emotion-displaying code ready for the feasibility demo.

The objective for the next two days is to find academic papers that could be useful to refer back to during future work on this project. Some background research into implementing A* in the Unity Engine shall also occur; since this is not a necessity for the upcoming feasibility demo deadline, it shall only be looked at briefly for the remainder of the week, unless a breakthrough suddenly occurs.


As the week comes to an end, so does the majority of preparation for the feasibility demo, with the developing literature review completed with two more academic papers. One of these is named “Affective Videogames and Modes of Affective Gaming: Assist Me, Challenge Me, Emote Me” by Kiel Mark Gilleade, Alan Dix and Jen Allanson. This paper discusses a number of different ways in which Affective Computing can change a game. The ‘Assist Me’ concept had already been chosen for use; however, the study finds that offering aid only really becomes helpful to players who play videogames rarely. For this reason it may be worth looking into the possibility of using or adapting one of the other two mentioned methods.

Since the last post, a flowchart detailing slight changes in the affective agent has been created. This shall be another artefact discussed at the upcoming demo. A presentation has been created with screenshots of the four artefacts to work as discussion points, smoothing the transition between each showcased artefact.

The remainder of the time before the feasibility demo shall be used to polish delivery of the presentation, as well as making sure each choice is justifiable for use in the project. Looking forward to the Christmas break, A* shall hopefully be implemented, along with a polish of the generic AI code, as proposed by the Gantt chart, with some work on the dissertation document itself to be undertaken as well.


Since the feasibility demo concluded, a very light amount of work has been completed. The main area of focus has been redoing the movement system, which initially used Unity’s own navigation mesh. This has now been fully removed from the application and has begun to be replaced by a personally written A* script.

Currently, work on the A* script has been limited to building the map with the starting objects in place. Now that this is done, the next step is to complete the A* algorithm and allow it to find a path the AI can then use (if one is available). Movement shall now be achieved through the Rigidbody attached to the AI, where velocity shall be applied to create the direction in which the agent moves.
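As a reference for what the finished script should do, here is a minimal sketch of grid-based A* in Python. This is illustrative only; the real version shall be C# operating on the map built from the level, and the 4-connected grid with a Manhattan heuristic is an assumption for the sketch:

```python
import heapq

def astar(grid, start, goal):
    """A* over a 4-connected grid; 1 marks a blocked cell. Returns a path or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]
    best_g = {start: 0}  # cheapest known cost to each visited cell
    while open_set:
        f, g, pos, path = heapq.heappop(open_set)
        if pos == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = pos[0] + dr, pos[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                ng = g + 1
                if ng < best_g.get((r, c), float("inf")):
                    best_g[(r, c)] = ng
                    heapq.heappush(open_set, (ng + h((r, c)), ng, (r, c), path + [(r, c)]))
    return None  # no path available

grid = [
    [0, 0, 0],
    [1, 1, 0],   # a wall the agent must route around
    [0, 0, 0],
]
path = astar(grid, (0, 0), (2, 0))
```

The returned list of cells would then be converted into world positions, with the Rigidbody velocity pointed at each cell in turn.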

Once the A* algorithm is complete, work shall swiftly move on to designing levels in which the player must interact with the AI. On the side, a first draft of the introduction to the dissertation shall be written.


Due to an unforeseeable personal issue, work over the festive period has been very slow; until the last three days, there would have been no improvements to mention within this journal.

However, in the past three days of work it has been discovered that the previously mentioned fault of non-moving navigation mesh features has a solution. By changing the moving objects from static, allowing them free movement, a “Nav Mesh Obstacle” component may be added. This obstacle allows certain parts of the navigation area to be updated in real time, for any appropriate changes required in the map’s data.

Thanks to the Obstacle feature, the time that would have gone into implementing A* can be saved, leaving the work not as far behind as previously thought. In order to reach the current goal in the Gantt chart previously shown, it is important that the previously made pseudocode is expanded for the affective AI’s use. It is also essential that writing of the dissertation’s introduction begins in the next few days.


The first week of semester two has now come to an end. The main focus for this week was to get back on track and update David King (returning from medical leave) on the progress of the project. A meeting with him on Wednesday brought him up to speed on the current development level of the project.

Since that meeting, a document has been created that lists which emotions the Affectiva plug-in is able to look for, and what these could be used for by the AI. It is hoped that, by gaining a rough idea of each emotion’s outcome, this can be implemented efficiently using a fuzzy rule set over a block of emotions that have similar outcomes; i.e. not all shall be assessed at once, just those with similar feedback, which could then be used overall to determine the action followed by the AI.
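As a rough sketch of the blocking idea, something like the following is envisioned. The groupings, block names and example reading here are hypothetical placeholders, not the final documented outcomes:

```python
# Hypothetical grouping of Affectiva-style emotion readings into blocks with
# similar outcomes; only the block with the strongest combined evidence drives
# the AI's next action, rather than assessing every emotion at once.
BLOCKS = {
    "comfort":  ["joy", "surprise"],   # upbeat reactions (e.g. the AI jokes)
    "distress": ["fear", "sadness"],   # reassuring reactions
    "hostile":  ["anger", "disgust"],  # placating reactions
}

def choose_block(emotions):
    """Pick the block whose member emotions show the strongest combined reading."""
    scores = {name: sum(emotions.get(e, 0.0) for e in members)
              for name, members in BLOCKS.items()}
    return max(scores, key=scores.get)

reading = {"joy": 0.1, "fear": 0.7, "sadness": 0.4, "anger": 0.2}
print(choose_block(reading))  # → distress
```

Within the winning block, a small rule set could then resolve the exact reaction, keeping the number of rules evaluated per frame low.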

A start has also been made on the first draft of the dissertation’s introduction, which is hoped to be completed by Thursday at the latest. However, the main focus for next week shall be to begin giving the AI the ability to understand emotion, as well as a final document explaining all possible outcomes of the emotions a user could be expressing.


In a final push for this week before the Global Game Jam begins, the project has been updated. The project is now split into two scenes: one with the same basic capabilities as the example level, and another that now includes the Affectiva plugin for emotion recognition, with a text overlay displaying emotion values for a face found by an attached camera.

With this split, the application has started to show a clear division between the generic AI and its affective counterpart. This week also brought the completion of the first draft of dissertation work, with a completed introduction section sent to my supervisor for review.

Work for next week shall consist of adjusting the dissertation according to feedback and starting the literature review section of the dissertation. For the application, the rule set for fuzzy logic should be set as appropriate, from what has previously been documented as the emotions’ possible outputs.


This week has brought a completed second draft of the dissertation introduction, with a large amount of work focused on the literature review.

A brief look at fuzzy logic rule sets has produced a number of rules. Currently these scale quite massively due to the number of emotions being handled. Work on finding a better solution is taking place; should a better alternative not be discovered or created, a key emphasis on efficiency shall be essential.

The supervisor meeting with David King this week has helped me to set out the goals I seek to achieve from the next two weeks before the graded progress meeting. The primary focus of the upcoming week will be to complete the first draft of the literature review, followed by the implementation of the fuzzy logic, as well as the speech system the AI should use. Work on two more level designs should also be considered in order to allow grey-boxing of the levels to take place.


Looking back on the fuzzy rules, it has been decided to scrap this approach. Fuzzy logic shall still be used; however, it shall now be driven by the valence value gained from the reading of the player’s face. This greatly decreases the number of rules required, from around 20+ to a more manageable 9.

The graph the rules give also seems more logical, since it now appears smoother, offering the successful transition stages that fuzzy logic should.

Surface viewer of the fuzzy rules around valence

However, this is now raising the question of whether fuzzy logic is even truly needed, since the output (although fuzzy) would end up giving similar results dependent mostly on the valence input. For this reason it is currently being considered to scrap fuzzy logic from the project entirely, since it yields no real benefit at this point and would just make the project less efficient. The only argument for keeping it is to hold the complexity level up, but this seems against the point of the application itself. A rule-based system, or even a simplistic switch statement, may be a better fit.
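To weigh the two options up, here is a quick sketch of both over a single valence input, assuming a valence range of roughly -100 to 100; the membership points, thresholds and mood labels are placeholder values, not the final rules:

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: rises from a to a peak at b, falls to c."""
    if x == b:
        return 1.0
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_mood(valence):
    """Degree of membership in each mood set, blending smoothly between them."""
    return {
        "negative": triangular(valence, -100.0, -100.0, 0.0),
        "neutral":  triangular(valence, -50.0, 0.0, 50.0),
        "positive": triangular(valence, 0.0, 100.0, 100.0),
    }

def crisp_mood(valence):
    """The simple rule-based alternative: fixed thresholds, no blending."""
    if valence < -25:
        return "negative"
    if valence > 25:
        return "positive"
    return "neutral"
```

The fuzzy version blends between moods near the boundaries, while the crisp version flips at fixed thresholds; since the companion ultimately picks one discrete reaction anyway, both end up driving similar behaviour, which is exactly the doubt raised above.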

If it is indeed scrapped, at least the algorithm and the argument behind its use can make for some discussion in the later sections of the written dissertation.
Speaking of the dissertation itself, the past week has brought with it a first draft of the literature review, currently awaiting feedback before a second draft is created. The speech section of the application has been implemented and successfully tested, so the AI can now display text to the player while playing audio for each appropriate interaction. The prompt for when this shall be called is still awaiting work.

Before the progress meeting, it is planned to begin writing the methodology section of the dissertation, as well as completing one additional level design (non-greyboxed) and finishing the implementation of at least one reaction to the player’s emotions (such as telling a joke or reacting in surprise).


With the progress meeting taking place tomorrow, final work for this deadline is being polished today. The removal of fuzzy logic has sped up development, and all speech reaction code is now in place. Upon a surprise reaction, the AI shall also jump backwards slightly to show shock. This is more than was previously expected to be completed by this point, so going into the meeting at this stage of the application gives confidence.

A first draft of the questionnaire for the users who shall later test the project has been created, but it feels like this requires more iteration. Helpful feedback shall hopefully be given in the planned mock test runs, allowing it to be altered. These mock trials are hoped to begin next week, while final work on the affective AI should start taking place.

Level designs are currently being worked on, however levels that stress the use of the AI are more difficult than expected to create. For this reason, work on this has been slower than anticipated.

Finally, the methodology section of the dissertation has been researched by reading over past (highly graded) dissertations. From this, a generalised format of the topic headings has been generated, which should allow for easier writing with the breakdown already in place. A first draft of the methodology is hoped to be created by next Wednesday (21/02/2018), but shall take a back seat if the application requires more work than currently expected.


As week five comes to a close, a summary of the events shall be detailed here. The progress meeting went exceptionally well, with positive feedback on the work done. Helpful advice for the literature review’s second draft was also given. A discussion took place about the use of fuzzy logic, where it was eventually determined that leaving it out seemed the optimal choice; the only real push for including it, from both of us, was simply that we both liked fuzzy logic, which was less than optimal a reason to include it. Overall the grade for this meeting was an A+, indicating the progress was up to standard and at the point I’d planned to be.

Going forward, next week shall be focused on completing a first draft of the methodology section, as well as a second draft of the literature review. The application should also have the vast majority of planned reactions put in place for the affective NPC.


With the date pushing past the halfway mark of week six, a few updates feel worth mentioning. An upcoming coursework has divided attention, since prioritising it early allows for any problems or questions to be fixed and answered with enough time before the deadline. For this reason, work on the dissertation project has been slower than usual, with the main updates to the project focused on the written section. The introduction’s third draft is now complete, along with a second literature review draft, followed by a near-finished first draft of the methodology section. Completing the methodology section shall be the last written work of the week.

An email from David King in reply to my first draft questionnaire has led to questions being changed to appear more open-ended to users. This should give the Likert scale questions better results than the very “yes/no” approach some were taking.

The application itself has gone through no changes this week. Within the engine the possible dialogue options have been increased, but the source code itself remains intact. It is hoped to polish the current NPC interactions and include at least one more option before the week is done.

Finally, with mock trials hoped to take place soon, an overlooked task must be completed: at least one of the additionally created level designs must be implemented and tested before the week is out.


This week another supervisor meeting was held with David King and the rest of his fourth year students. Sadly, due to weather conditions it was cut short, but a lot of useful advice on the now completed methodology first draft was given. Work to update this shall take place in the next few days, primarily after Wednesday, since an approaching deadline has priority over the dissertation redraft. Thankfully the dissertation is fully on schedule, with further sections not possible to complete until testing is finished.

Since testing externally is the next big section to tackle, the following week shall mainly be focused on completing the application and level designs. This is to ensure that enough time for the rest of the project is in place to work on the future dissertation sections.

Updates of the progress shall be posted next week as they arise.


Redrafts of all dissertation sections have taken place and have been emailed to David King for review. This marks the fourth completed draft of the dissertation so far. An email was received informing students about a graded assignment, which involved giving supervisors either a full draft of one section, or parts of two or three sections. Since such a high amount of work has already been completed, that assignment is fully covered, allowing full focus on completion of the AI polishing and the final level implementations.

Once again, a coursework outwith the dissertation modules has resulted in split focus, so work on the application has been subpar this week. However, it is hoped that the outwith coursework will be completed on Monday at the latest, allowing some time to add to the application before the supervisor meeting on Friday (17/03).


The application has had a number of updates this week, the most important of which is most likely the running order and the generation of an output code. The output code records the order in which the companions are played. At the end of both level completions, this code is displayed to the user, who is asked to input it into the questionnaire. This is simply for book-keeping, to make comparing results a lot easier without explicitly telling the user which AI companion was which.
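The idea can be sketched as follows; the encoding scheme, labels and code format here are hypothetical placeholders rather than the exact format the application generates:

```python
import random

ORDERS = ["generic-first", "affective-first"]

def make_session_code(order, session_id):
    """Pack the running order into an opaque-looking code for the questionnaire."""
    # A random prefix stops participants guessing that the code encodes anything.
    return f"{random.randint(100, 999)}-{session_id:03d}-{ORDERS.index(order)}"

def decode_order(code):
    """Recover which companion the participant played first when collating results."""
    return ORDERS[int(code.split("-")[-1])]

code = make_session_code("affective-first", 7)
assert decode_order(code) == "affective-first"
```

To the participant the code is just a string to copy into the questionnaire, while the researcher can recover the running order from it when pairing responses with conditions.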

While having random testers play the levels, a number of small bugs were noticed. These included:
1. A movement bug for both companions;
2. A speech bug for the affectively enabled companion;
3. An interaction bug with moving boxes for both companions.
Work to fix the above bugs shall take place over the coming weekend.

A final level shall also be implemented into the running order of the application. After this is successfully done and debugged, users shall begin being asked to attend testing sessions to begin evaluating results.


All bugs mentioned in the last post have now been dealt with.
The movement bug was simply down to a strange occurrence with a non-static object. The speech bug was handled by adding a ‘reset’ section for when forced dialogue needed to be used. Finally, the interaction bug surrounding the movement of boxes was fixed by temporarily disabling the NavMesh Obstacle component on the interactable boxes.

A level design was polished and then implemented into the application as well. It involves numerous previously seen problems, but combined in a way that may not appear apparent to a first-time user straight away. The level is also quite long ‘physically’, to ensure the user needs to spend a decent amount of time with the AI even if they already know the level.

A level reset function has also been added, to make sure that should players accidentally trap themselves in a level, it can be restarted instantly, as opposed to closing and restarting the full programme.

Final tests and considerations are now being made to the application, while a test pool of users is being formed to begin testing early next week (Tuesday the 27th onward).


A few overlooked bugs surfaced when getting an outsider to the project to test the application. Thankfully I was able to rearrange time slots with most of the decided testers for the following week, allowing me time to fix the noticed bugs. The tester who broke the application has also kindly offered to test a few more times before final testing begins, in order to find any bugs that I, as the developer, would probably never think to check for, since the application was very much designed to do what I expected it to.

The supervisor meeting with David King this week has put a push on final project completion, with David asking for a completed first full draft of the dissertation to be presented to him by the end of the following week. Whether this is feasible shall most likely come down to how quickly the testing goes.

It is aimed to have all testing completed by the end of Thursday at the latest, and benchmarking data from Unity’s profiler recorded by the same date.


Testing is finally completed. Numerous graphs have been created and are now in place within a fully completed results section of my dissertation. It seems unlikely that a first full draft shall be created by Monday to hand to David King; however, it is aimed to be completed by Monday night. Hopefully this shall give him time to look over the document and give feedback the following day at the supervisor meeting, though this may be highly optimistic.

The application should now be fully complete and ready for submission, which is due 17 days from now (24th April). Before then, however, a few questions shall be asked of David King to make sure the correct content is submitted, as opposed to throwing in everything, which may just be disregarded anyway.

A discussion and conclusion section are clearly the next pieces of work to take place within the project. A breakdown of the discussion points is hoped to be achieved within the hour, after some brief research of previous dissertation structures.


A hectic weekend followed the last update post, but a completed first draft of the full dissertation was still finished on time late Sunday night, allowing a copy to be with my supervisor early Monday morning. Work since then has been limited, since feedback on the draft could, needless to say, not be immediate, given the large word count and the multiple students sending similar work to him. Upon collecting the feedback later in the week, it became apparent that some sections required more polishing than others, so a lot of time has been spent polishing the earlier sections. This leaves almost a full week available to polish the later (more rushed) sections of the dissertation before a draft must be submitted to David King for review.

The deadline for submitting the application is now 10 days away. Research into the final requirements and how to submit the application is now being carried out.