(CPSC444-BLOG-4) The One-stop Marketplace Platform You Need
- amogh sinha
- Mar 11, 2024
- 6 min read
Updated: Mar 13
Revised goal(s) of experiment
The objective of our initial study was to delve into the dynamics of the student seller marketplace, with a particular emphasis on the types of items being sold, the duration of listings, and the preferred communication methods among student sellers. Having previously focused on qualitative research, we are now pivoting towards enhancing the usability of potential interfaces. This shift marks a transition from merely understanding marketplace behaviors to exploring features and usability aspects of potential interfaces.
In our previous phase, we interviewed 6-8 participants who had prior experience selling items on a secondary marketplace, striving for thorough data collection via audio recording and note-taking. These interviews explored several key aspects of the selling process, allowing us to uncover insights about habits, tendencies, and general trends. Additionally, we employed a pre-interview questionnaire to deepen our understanding.
Armed with this data, we developed Lo-Fi prototypes of potential interfaces that embody the insights gleaned from the initial study. The aim of our subsequent study is to identify and refine key design features and concepts that will enhance user navigation within a potential application interface. Feedback on our prototype, coupled with a deeper understanding of the project, prompted us to focus more on the interface itself, leading to the development of medium-fidelity prototypes that incorporate these changes.
Moreover, we have zeroed in on two critical functionalities: the ability for a user to add a new listing and to find a specific listing. These are fundamental tasks that any effective interface should facilitate for its users. We also aim to incorporate several other side tasks that are important to the usability of potential solutions, while keeping the focus on the critical functions.
Experiment method
Participants:
We aim to recruit 5-8 participants for this study. Potential participants will be screened before the experiment is conducted. With this number of participants we can also recruit people with differing levels of technical experience with interfaces. As university students, we will draw on our network of peers around campus and formally invite people to participate in our study. We want people of varying skill levels to ensure the platform can meet the needs and requirements of most basic digital device users, which will help us understand which design choices lead to an intuitive interface. Consistent with our previous experiment, we will still aim to find people with item-selling experience, although that experience is less crucial to understanding the interface this time around.
Conditions:
Each participant will be presented with two main interface designs: Design 1, “Quicklist”, and Design 2, “Boxed”. Design 1 takes inspiration from stock trading platforms and other live marketplace applications: the home interface has two main sections, a scrolling menu in a narrower left tab and an enlarged view of the selected item in the right tab. This design relies heavily on a vertical menu through which a large amount of information can be scrolled quickly. Design 2, on the other hand, draws more on traditional marketplace platforms, with a more segmented layout that leads people to other pages of the interface and has more click-away and click-to-expand sections than Design 1. For the tasks we assign during experimentation, we will record how quickly users complete each task as well as the number of errors they make. At the end of the experiment we will also ask participants to complete a post-experiment questionnaire for further data collection on the interfaces’ ease of use and enjoyability.
Experimental Tasks:
Participants will select from 4 different task sets, each centred on the core tasks of adding a new listing and finding a listing. The remaining tasks differ across sets, requiring participants to carry out a variety of actions, from deleting listings to updating existing ones. The task sets are split as follows (a short sketch of how these sets could be represented and drawn at random appears after the list):
Task set 1:
Find a sports equipment related listing
Edit one of the features of the listing
Create a new sports equipment listing
Task set 2:
Find an electronic related listing
Delete the listing
Create a new electronic related listing
Task set 3:
Create a furniture related listing
Find the price of another furniture related listing
Change the price of the listing created to match the other existing listing's price
Task set 4:
Find a car related listing
Locate the current best offer price
Add additional platforms through the edit listing feature and update listing
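To make the task-set split concrete, here is a minimal sketch of how the four sets could be represented and drawn at random without repetition. The data structure, set names, and helper function are illustrative assumptions, not part of the study materials.

```python
import random

# Hypothetical representation of the four task sets described above.
TASK_SETS = {
    "set_1": ["Find a sports equipment listing",
              "Edit one of the features of the listing",
              "Create a new sports equipment listing"],
    "set_2": ["Find an electronics listing",
              "Delete the listing",
              "Create a new electronics listing"],
    "set_3": ["Create a furniture listing",
              "Find the price of another furniture listing",
              "Change the created listing's price to match the existing one"],
    "set_4": ["Find a car listing",
              "Locate the current best offer price",
              "Add additional platforms via the edit-listing feature and update"],
}

def draw_task_sets(rng: random.Random) -> list[str]:
    """Draw two distinct task sets at random: one per interface, so the
    second interface never reuses the first set's instructions."""
    return rng.sample(list(TASK_SETS), k=2)

if __name__ == "__main__":
    rng = random.Random()  # could be seeded per participant for reproducibility
    first_set, second_set = draw_task_sets(rng)
    print("First interface tasks:", TASK_SETS[first_set])
    print("Second interface tasks:", TASK_SETS[second_set])
```

Drawing both sets without replacement up front mirrors the procedure below, where the second set is chosen from the remaining sets so instructions never repeat across interfaces.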
Formal Experiment Design:
The experiment follows a two-factor mixed design: interface (Design 1 vs. Design 2) is explored within subjects, while participants are split into two skill groups, experienced and inexperienced, which are examined between subjects.
Procedure:
1. This experiment should be conducted by two experimenters and one participant.
2. One of the experimenters should have a stopwatch along with some form of note-taking apparatus.
3. Experimenters will follow a standardized script throughout the process to ensure the experiment remains the same for all participants.
4. Participants will be walked through the experiment and then asked to sign a consent form.
5. A mobile phone with the first interface is then presented to the participant. Audio, and potentially video, data will be recorded.
6. Participants will be asked to choose randomly from the four task sets to ensure no tampering or prior expectations.
7. The first interface is displayed and the participant has 30 seconds to familiarize themselves with it.
8. Every time the participant completes a task in the set, the observer will record the time taken to complete the task and the number of errors.
9. Once they have completed the set, participants will select another task set from the remaining sets, so that they cannot learn the instruction pattern and apply it to the second interface.
10. Complete steps 7-8 using the second design interface.
11. To keep the ordering balanced, half the participants will get Design 1 as the first interface and the other half will get Design 2 first (a small counterbalancing sketch follows this procedure).
12. After the task sets have been completed using both interfaces, the participant will complete a follow-up questionnaire to understand user experience and satisfaction.
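As noted in step 11, a minimal counterbalancing sketch in Python, purely illustrative, shows how half the participants could receive Design 1 first and the other half Design 2 first:

```python
# Counterbalancing sketch (names are illustrative, not from the study materials).
# Even-indexed participants see Design 1 ("Quicklist") first, odd-indexed
# participants see Design 2 ("Boxed") first, so each ordering covers half the sample.

DESIGNS = ("Design 1: Quicklist", "Design 2: Boxed")

def interface_order(participant_index: int) -> tuple[str, str]:
    """Return the presentation order for the participant at this index."""
    first = DESIGNS[participant_index % 2]
    second = DESIGNS[(participant_index + 1) % 2]
    return first, second

if __name__ == "__main__":
    for i in range(6):  # e.g. six recruited participants
        print(f"P{i + 1}:", " -> ".join(interface_order(i)))
```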
Questionnaire:
Overall, which design did you find preferable to use?
Based on your experience with both the designs, how would you rate your satisfaction (based on ability to complete tasks) for DESIGN: QUICK LIST?
Based on your experience with both the designs, how would you rate your satisfaction (based on ability to complete tasks) for DESIGN: BOXED?
The experimenters will ask for any further comments about the interfaces.
Apparatus:
Following a “Wizard of Oz” black-box approach, either a slide deck or a live Figma prototype of both interfaces would need to be created. We would also need recording equipment, note-taking equipment, a stopwatch, and a device on which participants complete the questionnaire.
Independent Variables:
Interface (Design 1 or 2)
Participant technology experience level (experienced or inexperienced)
Dependent Variables:
Time in seconds to complete each given task on each interface
The number of errors made while using said interface
Interface preference and user satisfaction
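To make these measures concrete, the sketch below shows one possible per-task record an observer could fill in; the field names are our assumptions rather than a fixed logging format.

```python
from dataclasses import dataclass

@dataclass
class TaskObservation:
    """One row of observer data per task, per interface (illustrative fields)."""
    participant_id: str       # e.g. "P03"
    experience_level: str     # "experienced" or "inexperienced" (between-subjects)
    interface: str            # "Quicklist" or "Boxed" (within-subjects)
    task_set: str             # which of the four task sets was drawn
    task: str                 # the individual task description
    completion_time_s: float  # stopwatch time in seconds
    error_count: int          # errors observed during the task

# Example record for one observed task:
example = TaskObservation("P03", "inexperienced", "Quicklist",
                          "set_2", "Delete the listing", 42.7, 1)
```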
Hypotheses:
H1 → Using Design 1 would take less time to complete the task set than Design 2
H2 → Using Design 2 would take more time to complete the task set than Design 1
H3 → Design 1 would be preferred over Design 2 for a large number of listings
Planned Statistical Analysis:
We will employ a combination of statistical tests to analyze the experiment's results.
A repeated-measures ANOVA will assess the effect of interface design (1 vs. 2) on task completion time (seconds), number of errors, interface preference score, and user satisfaction score. This analysis will treat all participants as a single group and focus on the impact of encountering both interfaces within the same session.
A separate between-subjects ANOVA will be conducted to explore the influence of participant technology experience level (experienced vs. inexperienced) on the same dependent variables. This analysis will compare the performance of experienced and inexperienced participants regardless of the interface design they used.
This two-pronged approach will provide a comprehensive understanding of how interface design and experience level affect user performance and satisfaction in our experiment.
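A minimal analysis sketch of this two-pronged approach, assuming the observations are exported to a long-format CSV with one row per participant, interface, and task (the file and column names are assumptions): we use statsmodels' AnovaRM for the within-subjects interface factor and scipy's f_oneway for the between-subjects experience comparison.

```python
import pandas as pd
from scipy.stats import f_oneway
from statsmodels.stats.anova import AnovaRM

# Assumed long-format data: one row per participant x interface x task,
# with columns: participant, experience, interface, time_s, errors.
df = pd.read_csv("observations.csv")

# Within-subjects: repeated-measures ANOVA on interface design (Design 1 vs. 2).
# aggregate_func="mean" averages the repeated task rows within each condition.
rm = AnovaRM(df, depvar="time_s", subject="participant",
             within=["interface"], aggregate_func="mean").fit()
print(rm)

# Between-subjects: one-way ANOVA on technology experience level,
# collapsing each participant's times across both interfaces.
per_participant = df.groupby(["participant", "experience"],
                             as_index=False)["time_s"].mean()
experienced = per_participant.loc[per_participant["experience"] == "experienced", "time_s"]
inexperienced = per_participant.loc[per_participant["experience"] == "inexperienced", "time_s"]
print(f_oneway(experienced, inexperienced))
```

The same two steps could be rerun with the error counts or satisfaction scores as the dependent variable by swapping the column name passed to `depvar` and the groupby.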
Expected Limitations:
Having only a medium-fidelity prototype to use for experimentation means our functionality would be limited. Due to time constraints we would only be able to implement functionality for key features, and we would need a background controller to advance the slides (or a set order based on where the user clicks). Further, any information users are required to input will be pre-written for them to copy directly, as the prototype cannot currently accommodate all types of input.