

Assignments – 35% Individual and 40% Group

1: Ethics – Individual (5%) – due at the start of class September 22 2021 on Canvas

You will complete the TCPS2 online tutorial on the ethics of conducting studies with people.

2: Controlled Experiment – Individual (15%) – due at start of class October 13 2021 on Canvas

You will conduct a controlled experiment assessing one or more factors within a design. You will create an experimental plan including hypotheses and variables.  You will then run the study, analyze the data using inferential and descriptive statistics, and write a report outlining the findings.

3: Affective Evaluation – Individual (15%) –  due at start of class November 3 2021 on Canvas

You will conduct an affective evaluation of a video game (e.g., NHL 2019) using the cued debrief recall method and the intrinsic motivation inventory (IMI) questionnaire.  You will assess emotion and affect during gameplay.  You’ll analyze the study data and produce a report outlining the findings.

4: Evaluation Project – Group (40%) due midnight December 8 2021 on Canvas

You’ll choose a real-world design evaluation problem (e.g., Company X wants to evaluate Product Y), create a study plan that selects an evaluation method based on the company’s needs, critically assesses the method and provides a rationale for why it would be used rather than other methods, and documents all stages of the study.  You will then conduct the study and produce a report for the company.

Midterm – Individual (25%) —  November 10 2021 in Class

Bonus — Individual (2%) due midnight December 1 2021, by email to Instructor Hanieh Shakeri [].

Assignment 1 - Ethics - Individual (5%)


When conducting design evaluation studies with people as participants, it is very important to follow ethical practices.  The Government of Canada has created guidance for conducting studies with humans that you must follow while working or studying at Simon Fraser University.  It is called the Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans, or TCPS2 for short.  Even once you have finished your time at SFU, you are advised to continue following approved ethical standards when conducting studies.

For this assignment you must individually complete the TCPS2 online course at this address:

The tutorial will likely take you two to three hours to complete if you go through all of the modules properly and read the content. Upload your certificate on Canvas before the assignment due date.

Assignment 2 - Controlled Experiment - Individual (15%)

This assignment is a hands-on exercise in quantitative evaluation. Its immediate purpose is to give you experience conducting a controlled experiment, performing a simple statistical analysis, interpreting the results, and considering their implications for design decisions. Its other purpose is to provide you with enough knowledge of the experimental process to help you understand and appreciate the interaction design evaluation and HCI literature that uses this methodology.

Imagine a company has asked you to evaluate their product: they want to know whether Interface Style A is better than Interface Style B.  You will plan and perform a controlled study and create a study report detailing your findings and your recommendations for system design.

You will complete the assignment individually.


1. Plan the Study:

  • Decide on a Design to Evaluate: Choose a design that you can evaluate and compare to a different design. For example, imagine driving a telepresence robot using one of two techniques: a mouse and keyboard vs. a gaming controller.

  • Hypotheses and Variables: You need to decide on hypotheses and null hypotheses, as well as independent, dependent, and control variables. If you choose not to run the study in person, you will need to ensure you can measure your dependent variables remotely. For example, you could run the experiment with a screen share to measure task time and error, or you could have your participants record data related to dependent variables and report it back to you.

  • Pre-Test Questionnaire: include questions that ask the user about their general demographics, computer skills, how often they have used user interfaces like those in question, and what they think of the user interfaces in question.

  • Post-Test Questionnaire: include questions that ask the user how they felt about the user interfaces, what was easiest to do, what was hardest, and what suggestions they have for improving the interface.

  • Representative Tasks: Participants will use both interfaces. You need to design a series of tasks for them to do using the system. Try to construct tasks such that they will take about 10-20 minutes per participant.

2. Perform the Study: Normally, you’d run around 20 people in a study like this but we will reduce that number because it is a class assignment. You must run the study with at least 6 people.  In total, this shouldn’t take more than a couple of hours to do if you plan your tasks and data collection appropriately.

3. Analysis: Analyze your findings using known statistical methods that are taught in class.

4. Report: Create a report that details your findings.

Deliverables — Report

Your report should be approximately 4-8 pages of 1.5 spaced text, font size 12 in Times New Roman or equivalent, 2 cm top/left/right margins with page numbers at the bottom. This count excludes tables, figures, appendices, and any references you cite. It should include the following sections:

1. Introduction: describe the situation you are studying and why


2. Study Methods:

  • Participants: describe your participants’ demographics briefly.

  • Hypotheses: describe your hypotheses and rationale for them.

  • Method: explain that you performed a controlled study and describe your study steps including the tasks. Use figures to illustrate the tasks.

  • Environment: describe where you conducted the study and how you controlled the environment.

  • Data collection: explain how you collected your data.

  • Validity: describe any potential concerns with validity; talk about how you ensured validity.

  • Questionnaires: place your completed questionnaires in an appendix.

3. Results: Describe your data and statistical tests. Include any graphs to show your descriptive statistics.

For your statistical testing, report:

  • What data you are running the t-test on.

  • The result (p-value).

  • Whether you can reject the null hypothesis and what this means.

For example (M is mean, SD is standard deviation):

“An independent-samples t-test was conducted to compare memory for words in sugar and no sugar conditions. There was a significant difference in the scores for the sugar (M=4.2, SD=1.3) and no sugar (M=2.2, SD=0.84) conditions; p = 0.02. Thus, we can reject the null hypothesis at the 95% confidence level (p < 0.05). These results suggest that sugar really does have an effect on memory for words. Specifically, our results suggest that when humans consume sugar, their memory for words increases.” (Source)
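If you want to check the numbers behind a report like this outside of Excel, the pooled-variance independent-samples t statistic is straightforward to compute by hand. The sketch below uses made-up scores and a critical value read from a standard t-table; it is an illustration of the arithmetic, not a replacement for the analysis procedure taught in class.

```python
import math
from statistics import mean, variance  # variance() is the sample (n-1) variance

def independent_t(a, b):
    """Pooled-variance independent-samples t statistic and degrees of freedom."""
    n1, n2 = len(a), len(b)
    sp2 = ((n1 - 1) * variance(a) + (n2 - 1) * variance(b)) / (n1 + n2 - 2)
    t = (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2

# Hypothetical scores from two interface conditions (illustrative only)
group_a = [4, 6, 5, 5]
group_b = [2, 3, 2, 3]

t, df = independent_t(group_a, group_b)
print(f"t({df}) = {t:.2f}")  # t(6) = 5.00

# Two-tailed critical value for alpha = .05, df = 6, from a standard t-table
T_CRIT = 2.447
print("Reject H0" if abs(t) > T_CRIT else "Fail to reject H0")  # Reject H0
```

Note this pooled-variance form assumes roughly equal variances in the two groups; Excel's T.TEST function offers both the equal- and unequal-variance versions.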

4. Discussion and Conclusions: describe the implications of your results. What should be done moving forward based on the results of your study? What design implications do they present?

5. Appendix 1: include all raw data from your study. You need to include your Excel spreadsheet twice. This might mean printing multiple data sheets/tabs within Excel, depending on how you structure your spreadsheet:

– Show the Excel spreadsheet with the data output (e.g., the mean, standard deviation, p values)

– Show the Excel spreadsheet with formulas visible (to show formulas, see the instructions on this page)


6. Appendix 2: include your pre- and post-test questionnaires

Upload the assignment on Canvas before the beginning of class on the due date in the course calendar. Assignments not uploaded on time will be considered late.


Example Instructions To Participants

Example Pre and Post-Study Questionnaire (yours should be more detailed)

Post-Study Questionnaire (System Usability Scale)

Example Data Analysis

Marking Guide

Assignment 3 - Affective Evaluation - Individual (15%)

You are going to evaluate a video game’s user interface for emotional and affective experiences. Your goal is to find out how participants felt during and after game play. You will use this information to suggest improvements to the design of the game.

The assignment must be done individually; however, you will need to recruit a player (who may or may not be a classmate), and there is one portion where you must pair up with a classmate to analyze the data reliably. You should be able to do all of this work remotely if needed.


The Location

Ideally you will run this as a (potentially remote) field study. This means that you should evaluate the player in any environment where you might naturally find them playing these kinds of games. For example, you could host a video game party with several friends playing and then conduct the evaluation of one player in their living room or party room. If this is not possible due to COVID-19, you can run the study remotely with your single participant playing a video game on their laptop in their own home space. If you will not be there, you will need to ask them to set up their camera (for video) to record themselves playing. Be explicit in your report about whether you ran the study in person, remotely as a field study with several players (one of whom you will capture data for), or remotely with one player.


The Evaluation Methods

You will be evaluating the participant’s affective experiences using two methods:

1. A simple post-play questionnaire called the Intrinsic Motivation Inventory (IMI) which measures the overall emotional experience of enjoyment of the game or activity.

2. The cued-recall debrief method, a first-person, video-based method in which you review the video of the play session with the participant after the session in order to elicit information about the short-term affective states the player experienced during the game or activity.


Study Protocol

A study session should look like this for each participant.  You will run the study with just one participant.

1. Play: One participant will play a game for about 15 minutes. Their experience will be video recorded in first person: the video should capture what the participant is seeing. That is, you (or they) will need to place a camera directly behind them, or mounted on their head, to capture what they see while they play.

2. IMI Questionnaire: After they are done playing, the participant is to fill out the IMI questionnaire using the enjoyment, perceived competence, perceived choice and pressure/tension subscales.

3. Cued Debrief Recall: Immediately after the questionnaire part is the review or debriefing part. The evaluator (you) and the participant will sit together and review the video of the play session and “debrief” or talk about what the participant was feeling during the play session. The participant will talk about this as they watch the video of the session. You may need to use prompts (like in think aloud) to encourage them to talk about what they were feeling. The debrief part should also be video or audio recorded. This means you will need to find a quiet place where you can replay your video and discuss the session. Note that you need one camera to play the video (or alternative play back display) and a second camera to video record the interview (focusing on audio).

When the session is done, you will have a video of the participant playing (shot in first person) and a video of the participant talking about their play (shot in third person).


Data Analysis

1. IMI Enjoyment Questionnaire

The Intrinsic Motivation Inventory (IMI) is a multidimensional measurement device intended to assess participants’ subjective experience related to a target activity in laboratory (or contrived field) experiments. The questionnaire assesses participants’ interest/enjoyment, perceived competence, perceived choice and felt pressure and tension while performing a given activity, thus yielding four subscale scores for our purposes.

The interest/enjoyment subscale is considered the self-report measure of intrinsic motivation. Although the overall questionnaire is called the Intrinsic Motivation Inventory, it is only this one subscale that assesses intrinsic motivation per se. As a result, the interest/enjoyment subscale often has more items on it than do the other subscales. The perceived choice and perceived competence concepts are theorized to be positive predictors of both self-report and behavioral measures of intrinsic motivation, while pressure/tension is theorized to be a negative predictor of intrinsic motivation.  You will score the IMI questionnaire using the procedure described in class.
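The published IMI scoring rule is simple enough to express in a few lines: reverse-scored items become (8 − response) on the 1–7 scale, and each subscale score is the mean of its items. The sketch below uses hypothetical item IDs and ratings; confirm the item assignments against the scoring procedure described in class.

```python
def score_subscale(responses, reversed_items=()):
    """responses: dict of {item_id: rating on a 1-7 scale};
    reversed_items: ids whose rating is reversed as (8 - rating)."""
    adjusted = [8 - r if item in reversed_items else r
                for item, r in responses.items()]
    return sum(adjusted) / len(adjusted)

# Hypothetical pressure/tension items, one of them reverse-scored
pressure = {"p1": 2, "p2": 3, "p3_rev": 6}
score = score_subscale(pressure, reversed_items={"p3_rev"})
print(round(score, 2))  # (2 + 3 + (8 - 6)) / 3 -> 2.33
```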

2. Affective Content of Cued Recall Debriefing Sessions

After the play-debrief sessions are finished, you will analyze the debrief recordings (i.e., not the video of playing). To do this you need two evaluators; thus, you must pair up with someone else from the class for this part of the assignment.

Have the two evaluators individually (i.e., separately) review each participant’s debriefing session, listening for affect comments and recording them on a data sheet. Focus on what the participant said (i.e., audio only). Your goal is to write down any comments that contain affective words or expressions. Then count the total number of positive, neutral, and negative comments.

3. Inter-rater Reliability on Cued Recall Debrief Affect Comment Coding

For each session/participant, compare the results of the two evaluators/raters. Did you find the same kinds of affective statements? Did you find the same number of each kind? Did you find the same overall number of positive, neutral and negative affect comments?

You also need to calculate the inter-rater reliability value for each session. The inter-rater reliability, or R value, indicates how similar the results from the two evaluators are. For each session your R should be better than .90. If it is not, have the two evaluators work together to reach a better consensus.
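One common, simple way to compute inter-rater reliability for categorical codes is percent agreement between the two raters' per-comment labels; check the inter-rater reliability example provided with the assignment for the exact formula expected in this course. The codes below are hypothetical.

```python
def percent_agreement(rater1, rater2):
    """Fraction of comments that the two raters coded identically."""
    if len(rater1) != len(rater2):
        raise ValueError("both raters must code the same list of comments")
    matches = sum(a == b for a, b in zip(rater1, rater2))
    return matches / len(rater1)

# Hypothetical affect codes for ten comments from one debrief session
r1 = ["pos", "pos", "neg", "neu", "pos", "neg", "neu", "pos", "pos", "neg"]
r2 = ["pos", "pos", "neg", "neu", "pos", "neg", "pos", "pos", "pos", "neg"]
print(percent_agreement(r1, r2))  # 9 of 10 codes match -> 0.9
```

Note that percent agreement does not correct for chance agreement; measures such as Cohen's kappa do, at the cost of a slightly more involved formula.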


Deliverables — Report 

Your report should be approximately 4-8 pages of 1.5 spaced text, font size 12 in Times New Roman or equivalent, 2 cm top/left/right margins with page numbers at the bottom. This count excludes tables, figures, appendices, and any references you cite. It should include the following sections:

1. Introduction: describe the situation you are studying.

2. Methodology: describe your study methods.

  • Participants: describe your participants’ demographics.

  • Method: explain that you performed a field study and describe your study steps.

  • Data Collection: explain what data was collected and how.

3. Results: describe your data and the generalized results.

  • What does your analysis of the IMI/enjoyment questionnaire data tell you about each player’s experience with game play?

  • What does the affective content analysis of each participant’s cued recall debriefing session tell you about their experience with game play?

  • For each participant, were there any parallels or inconsistencies between the questionnaire and the cued recall debrief results?

4. Discussion and Implications: describe the implications of your results. What should be done moving forward based on the results of your study? What design implications do they present?

5. Conclusion: conclude your report by summarizing the study and its findings.

6. Appendices: include all raw data from your study.

Submit on Canvas before the due date listed in the course timeline. Submissions after that time will be considered late.



Cued Recall Coding Sheet

IMI Scoring

IMI Scoring Instructions

Interrater Reliability Example

Marking Guide

Assignment 4 - Evaluation Project - Group (40%)

You will work with an industry or community partner to conduct a real world evaluation of one of their products.  This will involve you meeting with the partner and exploring their needs for design evaluation.  You will think carefully about the design challenge posed to you by the company or organization and decide on the best method for evaluating the design.  The evaluation methods that you can consider are:


  • Heuristic Evaluation

  • Usability Study (think aloud, constructive interaction, general observational study)

  • Controlled Experimental Study (IV can be device, UI, group or task)

  • Affective Evaluation

You will be graded based on the quality of your work as well as the complexity of the design evaluation you conduct.  For example, conducting a fairly straightforward heuristic evaluation of a single user application will likely not generate a strong grade.  On the other hand, a field deployment of a new user application that requires multiple site visits and interviews will likely generate a stronger grade given the complex nature of the study.

You will work in a group of four people.



The following steps are presented in the course calendar based on when they should occur:

1. Industry or community partner:  You will first need to find an industry or community partner that you can work with.  You will have to meet with them to explore their needs and what design you can evaluate.

2. Study Plan:  You will draft a study plan that documents how you will conduct the study.  This should include all components: questionnaires, study tasks, interview questions (if needed), etc.  You should also write one page explaining why you would not choose other methods to conduct the study.

3. Perform the Study: Conduct the study with an agreed-upon number of participants, as detailed in class.

4. Analysis: Analyze your findings using known methods that are taught in class.

5. Report: Create a report that details your findings.



You will provide a detailed report in PDF format that is 8-10 pages (not including appendices, tables, and figures), 1.5 spacing, 12-point Times New Roman. The audience is your client, but also keep in mind the designers the client may show the report to in order to make the changes. The report should include the following sections:

1. Introduction: describe the product you are evaluating and goals for your evaluation

2. Study Methods

  • Evaluation Method: explain the evaluation method you used and why you used it to address the evaluation goals

  • Participants: describe recruiting, number and key characteristics of your participants

  • Procedure: describe exactly what participants did in the study, including a description of and justification for your tasks.

  • Constructs & Data Collection: list 3-5 constructs, define each, and explain how and when in the study procedure you collected data for each (consider including a figure of your study session timeline to illustrate the timing of tasks and data collection points).

  • Questionnaires (if you used them): briefly describe them and place them in an appendix. Add references to the sources for your measures as needed to help the client understand them (e.g., SUS, IMI).

  • Reliability & Validity: describe any potential concerns with reliability & validity; talk about how you ensured reliability and/or validity.

  • Data Analysis: describe how you analyzed your data for each construct.

3. Results: describe the main themes from your results.  Bring together your findings across constructs as relevant to make a case for your results (what’s working, the main problems).  Link problems to data (e.g., use a quote, a description of behavior, or a heuristic to support your description of each problem).  What did you learn by conducting the design evaluation that the client needs to know?

4. Discussion and Conclusions: describe the implications from your results. What should be done moving forward based on the results of your study? How should the user interface be redesigned as a result of your study?  Show mockups of potential redesign ideas explaining your solutions. 

5. Appendix 1: include all raw data from your study, de-identified (e.g., remove participant names). For example, include Excel sheets with task times, step counts, or error counts, or heuristic evaluation data sheets, etc. You can provide a working link to the data (e.g., an Excel sheet or Miro board).

6. Appendix 2: include any pre- and post-test questionnaire data for all participants, de-identified (e.g., remove participant names)

7. Appendix 3: include consent forms. Do NOT include these in the report to the client because they cannot be anonymized.

Bonus - Individual (2%)

You can earn up to 2% bonus by participating in designated research studies within SIAT as a learning experience to broaden your understanding of research in interactive arts and technology. You can earn a 2% bonus for participating in a study that is 30 minutes or longer, or 1% for a shorter study (for a max of 2%). All studies during the pandemic (social distancing) are online.

The studies are posted in the study management system:

You need to create an account. In your profile, you can select a default course to apply credits to, or you can decide later.

Be aware that there are a limited number of slots for each study, and it is quite unpredictable when new studies will be ready. If you see a study that interests you, go and do it.

Once you participate in a study, complete a User Study Reflection form and email it to the course instructor and TA.
