Visualizing Purchase Processes


During a 12-week summer internship, I conducted design research for the buyer experience team at Atlassian. This included 25 stakeholder interviews as well as 3 iterative approaches to communicate customer perspectives in novel, actionable ways.



Contributions


Interviewing
Observation
Synthesis & Ideation
Task Analysis
Illustration

Collaborators


Nicole Tollefson
Shona Reed
Panna Cherukuri



Background


Atlassian is an enterprise software company with a unique self-service sales model. This means pricing is transparent, and customers are able to make purchases directly on the website, without having to haggle with a sales person. This approach helps to lower product prices, as the company does not bear the cost of employing a traditional sales force.

Despite these advantages, some customers could benefit from the guidance of a contact person while making the decisions required to purchase Atlassian’s software. The purchasing process is complex, dependent on company size (or number of licenses), geographic market, and whether the customer is new or returning – not to mention the various product bundles and deployment options.



Overview


Given Atlassian’s growth and success, there had been a longstanding sense of “tribal knowledge” that the self-service model was sufficiently easy to use. Recent work to better understand buyer journeys had revealed major pain points, however, and was informing strategic decisions for the upcoming fiscal year. I was asked to deliver an artifact for internal stakeholders that would help keep the buyer’s perspective top of mind while making everyday decisions. Thus, the primary research questions were:

  1. How might we generate customer empathy while considering stakeholder needs?
  2. How might we communicate the buyer experience in a clear and compelling way?



How might we generate customer empathy while considering stakeholder needs?


Reviewing secondary research
To gain a better grasp of organizational context, I began by reviewing prior visualizations that had been well socialized, as well as existing buyer journey work done by internal product teams. Though all teams had started with the same template, the resulting artifacts took divergent forms, spanning three slide decks, nine Trello boards, and too many intranet pages to count.

These existing journeys were difficult to digest for those not already “deep in the weeds.” Furthermore, they were based on various types of data sources – including surveys, experiments, customer interviews, and internal workshops – and did not consider the overall Atlassian purchasing process. To make sense of all this qualitative data, I created a few Trello boards of my own, looking for overarching – and when possible, product-agnostic – themes.



︎Synthesizing common pain points in the purchasing process. Each colored bar indicates a product label; cards with multiple colors thus indicate a common theme across products.


Cross-functional stakeholder interviews
As I was diving into these buyer journeys, I also conducted a listening tour with 25 stakeholders across design, research, development, product management, marketing, and customer advocate roles. During this process, I came to see that while people are motivated to improve the buyer experience, their priorities center on their own workflows or products rather than the overall customer experience.

These conversations helped to identify four major requirements for any type of customer-centric messaging or artifact: brevity, flexibility, actionability, and credibility. Furthermore, the interviews led to an epiphany: while Atlassian’s mission is to “unleash the potential of every team,” many stakeholders didn’t realize that purchasing – even before product use – is also a team effort, with input from legal, security, and procurement departments.



︎Requirements identified from interviews with 25 cross-functional stakeholders.

︎A key takeaway from conducting interviews.



How might we communicate the buyer experience in a clear and compelling way?


Pain points as comics
In an attention-scarce environment, brevity seemed both a novel and necessary contrast to the documentation that already existed. One approach I pursued was to create comics, inspired by webcomicname’s three-panel comics with a blob as the protagonist, always ending with “Oh no.” These succinct visual reminders of customer pain would ideally be propagated in relevant everyday communication channels such as intranet pages and chat rooms.

Furthermore, using humor to communicate pain points would help to defuse any blame or shame that people might feel when pain points are constantly emphasized, promoting empathy over negativity. While not specifically actionable (nor well illustrated, for that matter), the format offers a simple, visual reminder of customer perspectives that can be easily shared – and ultimately aims to shift culture at a grassroots level.


︎Illustrating how purchasers often face internal resistance to adopt new software.



︎Customers often struggle to evaluate Atlassian’s complex products, within the short timeframe of the allotted trial period.



︎At the time of this project, Atlassian only accepted a limited number of currencies, creating friction for its international customers.


Sports field visualization kit
Another, more flexible approach I took was inspired by board games and the World Cup, both of which I observed employees engaging with at the office. To highlight the complexity of the purchasing process and the many players involved, I wanted to leverage Atlassian’s existing emphasis on teamwork to reframe customer-centric messaging in a way that would be relatable to internal stakeholders.

I imagined movable components on a sports field background to show the buyer experience in a quick and engaging way, during meetings or onboarding sessions. A digital version could be stitched together into a GIF, then – like the comics – saved and shared in common communication channels.

︎An early outline of the journey on a sports field, with buyers crossing from left to right, across the purchasing phases: Identify Need, Research, Trial, and Convert.



︎An example GIF of a visualization kit prototype in use on a whiteboard.


Top tasks benchmarking survey
Revisiting the identified requirements, however, I realized that both of these approaches were brief or flexible, but not necessarily actionable or credible. Additionally, I had reservations about whether either format would live on after my summer internship, once I had returned to my graduate program.

At this point, I fortunately became aware of the Research & Insights team’s new benchmarking initiative, which used Gerry McGovern’s top tasks method. This approach would be credible, given that it is customer-led, and it would identify actionable priorities. It would also ensure longevity beyond my internship, as the Research & Insights team intended to continue the work over the next few years.

I thus redirected my efforts to developing a top tasks survey, revisiting existing documentation to generate a master list of tasks; I also returned to the internal stakeholders I had previously interviewed to gather feedback on the language and granularity of the tasks.
 

︎Evaluating buyer tasks with relevant internal stakeholders, both in person and remotely.



Outcomes


I was unfortunately not able to see the top tasks survey through to completion before my internship ended. However, my work in the final few weeks did generate a master list of tasks that the Research & Insights team would use to administer the top tasks survey; this would ultimately meet the goal of identifying a brief, actionable, and credible list of customer priorities. After my departure, I learned that my comics had been discovered by the Head of the Design Studio, who found them resonant and had them printed on posters for every office.



︎Master list of top tasks generated, rendered on Mural.

︎Printed poster of comics in the San Francisco office.




Takeaways


This project was valuable in learning not to be afraid to question a prompt, as well as to exercise rigor in validating any existing data. Given that the project was largely self-directed, I also gained experience in managing expectations as well as seeking assistance when necessary. Furthermore, despite the desire to demonstrate the value of my work as an individual, I came to understand the importance of folding efforts into larger-scale initiatives to ensure longevity of impact.
Mark




Auditory Data Design


Informed by an interdisciplinary literature review, we developed an analytical framework of data sonification practices as well as a voice user interface (VUI) representing data from the U.S. Census and the American Community Survey. We then conducted usability test sessions with this VUI to evaluate the potential of data exploration via conversational interfaces, as well as to present recommendations for future work.



Contributions


Task Analysis
UX Writing (Voice Design)
Usability Testing
Literature Review
Framework Development

Collaborators


Michelle Carney
Peter Rowland

Mentors
Marti Hearst
Steve Fadden



Background


According to National Public Media, one in six Americans own a smart speaker as of January 2018, and this population has more than doubled in the course of one year. With the rise of both virtual assistants and software-embedded devices, audio-first interactions are becoming more prevalent in daily life. However, there are not yet industry standards for sharing data via sound experiences – particularly through emergent smart speaker interfaces.



Overview


This project was motivated by open data initiatives, particularly those run by government agencies. In anticipation of the 2020 U.S. Census, we began evaluating existing web-based data exploration tools, conducting cognitive walkthroughs of the U.S. Census’s 2010 website, interactive maps, and “Profile America” audio stories. While this informed a general understanding of data exploration, we imagined a future in which one would be able to make sense of data exclusively by ear. With support from the Berkeley Center for New Media, we focused on the following research question: how might audio enable us to understand complex datasets?



︎Experience map of existing census data exploration tools.



How might audio enable us to understand complex datasets?


Literature review
We first conducted an in-depth literature review of prior work related to “data sonification,” or the use of non-speech audio to convey information. Our analysis was initially based on “auditory graphs,” shaped by visual analogs (e.g., histograms, scatter plots, and pie charts). Spanning various disciplines – including human-computer interaction, accessibility, music, and art – the papers showed great variety: some pursued more creative intent with memorable, musical designs, while others focused on accurate, scientific representations. Papers also differed in whether the researchers mapped sounds to real or simulated data.

Observed differences in the data sonification literature manifested in two primary areas:

  1. the rigor – or absence – of experimental procedures, and
  2. the quality of stimuli used – whether researchers used abstract MIDI sounds or audio mapped to semantic meaning (e.g. using the sound of rain to communicate precipitation).


Framework development
For our framework, we thus plotted selected papers along two dimensions: Objective vs. Subjective Sonification Approach on the vertical axis, and Abstract vs. Functional Data on the horizontal axis. Key papers were grouped by the following representational characteristics: Trends, Clusters, Spatial Relationships, and Distributions.



︎Conceptual framework for existing auditory data representations.


Voice prototype
While reviewing the literature, however, we discovered that prior research often lacked associated audio files, since much of it is decades old. We therefore reproduced three different sonification methods using the software Ableton Live and Audacity:

  1. an audio choropleth map proposed by Zhao et al, using pitch to represent population data by state and region from the 2010 Census,
  2. an audio line graph inspired by Brown and Brewster, using timbre to represent employment data by age group from the 2015 American Community Survey (ACS), and
  3. an audio pie chart designed by Franklin and Roberts, using rhythm to represent employment data from the ACS again, but by education level.

Using the prototyping software Invocable (formerly Storyline), we used these auditory data designs to develop our own voice user interface, an Alexa skill called “Tally Ho.”
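As a rough illustration of the first method’s pitch mapping, a value-to-pitch function can be sketched as follows. This is a minimal sketch under stated assumptions: the state figures are hypothetical placeholders rather than actual Census data, and the MIDI note range is an assumption, not the exact mapping used in the prototype.

```python
def value_to_midi(value, lo, hi, note_lo=48, note_hi=84):
    """Linearly map a data value onto a MIDI note range (here, C3 to C6)."""
    frac = (value - lo) / (hi - lo)
    return round(note_lo + frac * (note_hi - note_lo))

def midi_to_hz(note):
    """Convert a MIDI note number to a frequency in hertz (A4 = note 69 = 440 Hz)."""
    return 440.0 * 2 ** ((note - 69) / 12)

# Hypothetical state populations in millions (placeholders, not Census figures).
populations = {"Wyoming": 0.6, "Ohio": 11.5, "California": 37.3}
lo, hi = min(populations.values()), max(populations.values())
pitches = {state: value_to_midi(p, lo, hi) for state, p in populations.items()}
```

Larger values land on higher notes, so a listener sweeping across states hears relative magnitude as rising or falling pitch.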

︎Summary of recreated data sonification methods.



︎Summary of corresponding visual representations.



︎Overview of the conversational prototype.


Usability testing
With this VUI, we conducted in-person usability testing to evaluate the potential of auditory data exploration via a contemporary, conversational interface. We recruited a five-person sample with varying degrees of familiarity with smart speakers, music, and census data. During each moderated session, the participant was presented with each of the three audio representations in randomized order, then asked follow-up questions about initial impressions, perceived difficulty, and expectations.


︎ Conducting usability testing on the Alexa-enabled Amazon Tap.




Outcomes


Our prototype used pitch, timbre, and rhythm to represent data points, category differences, and overall trends: users were able to hear these distinctions and interpret them mostly correctly after hearing a scripted explanation from the VUI. (More documentation, including the full report and audio files, can be found on the Berkeley School of Information project page.)

Our results suggest that users generally enjoyed the experience of hearing data – finding it “cool,” “fun,” and even “powerful” – but also had difficulty remembering insights as passive listeners. Even with repetition, participants lacked precise recall; four of the five participants noted that it was “confusing” to remember what they had just heard.




Takeaways


Given that humans rely primarily on sight and only secondarily on sound, it is unsurprising that VUIs tend to impose a greater cognitive load and require more training than traditional visual interfaces. Among the participants who already owned smart speakers, the devices were mainly used to perform simple tasks like playing music and setting alarms. Understanding sonified data was an entirely new type of experience. One way to overcome this novelty factor would be to conduct a longitudinal study to assess changes in both performance and enjoyment of the experience over repeated interactions.

With the growth of conversational interfaces – in tandem with the rise of ubiquitous computing – we remain optimistic about the way forward in designing auditory data representations. As Hermann and Ritter suggest, humans are “capable [of] detect[ing] very subtle patterns in acoustic sounds, which is exemplified to an impressive degree in the field of music, or in medicine, where the stethoscope still provides very valuable guidance to the physician.” Continuing to develop new interaction patterns for auditory data exploration will benefit not only those who are visually impaired or limited in numeracy skills, but also anyone who is curious about making sense of data through alternative means.




Life in a Minute


“Life in a Minute” is a critical making installation that facilitates playful, tangible interactions to encourage reflection on personal time allocation. With support from the Berkeley School of Information and the Berkeley Center for New Media, the project was exhibited in the gallery of the 2018 Ethnographic Praxis In Industry Conference (EPIC).



Contributions


Hardware Prototyping
Interaction Design
Observation
Literature Review

Collaborators


Dylan Fox
Varshine Chandrakanthan

Mentors
Kimiko Ryokai
Noura Howell



Background


In an era when both time and attention are increasingly scarce, there is a need for design that encourages active, thoughtful reflection. Concerned about the social implications of technology, our aim with “Life in a Minute” was to make time tangible, prompting people to think critically about the value of time and revealing underlying attitudes.

Financial spending is an act that directly represents people’s priorities: by analogy, we hypothesized that feelings of time scarcity are particularly fertile ground for arriving at cultural knowledge, given that time is inherently finite. This project thus addresses a novel intersection of time as a theme, reflective design, and tangible user interfaces.



Overview


The participant’s lifetime is embodied in pennies, translating the average worldwide 71.5-year lifespan into 715 pennies. Over the course of 60 seconds, participants “spend” these pennies by allocating them into five “life target” jars – Career, Community, Education, Family, and Play – each with a servo-powered lid that opens and closes at random, adding a carnivalesque quality to the experience.

When time is up, players receive a receipt as evidence that embodies the “lifetime” spent and enables reflection after the experience has ended. The receipt is printed by an Arduino-powered thermal printer, with coin allocation weights mapped to years in a lifespan.
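The weight-to-years conversion is simple in principle. Here is a minimal sketch, assuming a standard 2.5 g penny and the 715-pennies-per-71.5-years ratio above; the jar weights shown are hypothetical, and the actual Arduino firmware may differ.

```python
PENNY_GRAMS = 2.5              # a modern US penny weighs roughly 2.5 g (assumption)
YEARS_PER_PENNY = 71.5 / 715   # 715 pennies represent a 71.5-year lifespan

def jar_years(weight_grams):
    """Estimate the 'years of life' allocated to a jar from its measured weight."""
    coins = round(weight_grams / PENNY_GRAMS)
    return coins * YEARS_PER_PENNY

# Hypothetical measured jar weights in grams.
jars = {"Career": 250.0, "Play": 125.0, "Family": 500.0}
receipt = {name: jar_years(grams) for name, grams in jars.items()}
```

Each jar’s weight is rounded to a whole number of coins before conversion, so small scale noise does not change the printed years.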


︎ Overview of the installation setup, including both the participant and an attendant.


︎An attendant deposits coins onto a conveyor belt for the participant. 

 ︎The participant allocates coins picked up into jars of “life target” areas, whose servo-powered lids open and close at random.

︎When a minute has elapsed, each “life target” jar is weighed and recorded.

 ︎ Each participant receives a receipt of how they “spent” their time, with coin allocation weights mapped to years in a lifespan.




Outcomes


Rich feedback from 25 participants suggests the value of “Life in a Minute” as a cultural probe to uncover attitudes toward time scarcity and decision-making processes. For instance, some participants “spent” their coins as soon as they were dispensed; others waited at the end of the conveyor belt to collect as many coins as possible, delaying their time allocation. Another key contrast was in relation to the life target jars, opening and closing at random: some people waited for certain jars to open, while others allocated opportunistically to any jars available. Asking the participant about such behaviors afterward effectively sparked self-reflection and candid dialogue about their motivations, conscious or unconscious.

The game-like design of the interaction also layers in a sense of fun, occasionally inducing a competitive mindset. One participant remarked, “People with bigger hands have an advantage.” Another participant was observed peeking at another’s receipt, asking, “How did you do?” But the most unexpected method of engagement with the system involved pairs playing together, adding in a new layer of observable interaction. Among one duo, we overheard the following question, “What’s our strategy?” To which her partner replied, “To waste as little time as possible.” When asked why she chose not to play solo, another participant remarked, “Life is easier with a partner.”



︎ Exhibiting in the gallery of 2018 Ethnographic Praxis In Industry Conference.

 ︎ One visitor’s response to the project at EPIC.




Takeaways


Given the quality of such data, we propose that this work may be repurposed as a projective interviewing method. A key improvement would be to allow participants to self-identify categories of personally valuable “life targets,” as self-designated categories would downplay any prescriptive intention and permit greater interpretability for the participant. Additionally, the receipt or idealized “life summary” artifact could be compared to actual behavioral evidence, similar to Anderson et al.’s “ethno-mining” approach: for example, participants could be asked to map their real online behavioral data with broader “life targets.”

We offer “Life in a Minute” as a flexible, novel critical probe, enabling participants to reveal subjective perspectives and to make meaning about the value of time. By introducing an interactive installation, we open a space for thinking more broadly about new methods of collecting data which may more accurately mirror real sentiment and lived experiences.



Reimagining Mobility


Over the course of three months, I led three rounds of research to reimagine the future of accessible, autonomous transportation for the Ford Research & Innovation Center. This included 12 interviews to understand various stakeholder perspectives as well as eight generative research sessions to iterate and co-create a service design prototype.



Contributions


Market Research
Interviewing
Synthesis & Ideation
Usability Testing
Project Management

Collaborators


Corten Singer
Nancy Yang
Takara Satone
Reece Clark



Background


Examining extreme users can lead to solutions that are not only more inclusive, but also better for everyone. Consider curb cuts: despite being initially designed to comply with the Americans with Disabilities Act, one study found that nine out of ten “unencumbered pedestrians” go out of their way to use a curb cut. Furthermore, America is aging, and 40 million Americans currently live with a disability, according to the U.S. Census Bureau’s 2015 American Community Survey. Unsurprisingly, Americans aged 65+ make up the majority of this population, which is projected to double by 2050.



Overview


Looking beyond cars, the Ford Research & Innovation Center is taking a user-centered, systems-level approach to designing for mobility. I had the opportunity to conduct design research for Ford through the Jacobs Institute for Design Innovation. Over the course of three months, I worked with an interdisciplinary team to reimagine what the future of mobility might look like in the next 10 to 15 years, focusing on the following challenges:

  1. How might we incorporate accessibility in the design of future transport?
  2. How might we design solutions that can benefit a broader population as well?


︎Observations of wheelchair users on buses in Oakland, California.



How might we incorporate accessibility in the design of future transport?


Market research
We began our process with an analysis of secondary data to better understand the current ecosystem. Although the number of wheelchair users is predicted to grow, the market remains deeply underserved. From crutches and canes to walkers and wheelchairs, mobility-enhancing products exist but are standard issue – there is a lack of innovation in the space. This is despite the fact that cars and wheelchairs can be comparable in cost: the price of a power wheelchair can go as high as $30,000, according to the Archives of Physical Medicine and Rehabilitation.

Public transit services meet accessibility standards, but only because the law requires it. While legislation helps advocacy groups and nonprofit organizations improve access to assistive services, these groups are rarely well funded, and using their services still requires extra work from mobility-impaired individuals. In the private sector, ride-sharing services often lack amenities for wheelchairs; however, large technology companies are pioneering initiatives – like Microsoft NeXT and Apple Accessibility – to improve access for the mobility impaired. Although they have deep pockets and wide reach, going to market remains a slow process due to liability risks.


︎ Our secondary research identified the mobility impaired as a growing but underserved market.

︎Emergent themes from analyzing the current experience ecosystem include time, expenses, regulation, community, and public awareness.


In-depth interviews
Informed by market research, we proceeded to conduct 12 in-depth interviews with relevant stakeholders. We spoke to a group diverse in age, ethnicity, and degree of impairment: this included nine wheelchair users and three able-bodied caregivers for patients in wheelchairs (one of whom was also a patient’s spouse). A few key insights emerged:

  1. Society is inattentive to people in wheelchairs, both in physical space and in regard to resource allocation. “Every day I am yelling at a driver for pulling some boneheaded move,” said one interviewee. Another participant shared a story about waiting for the bus for over an hour: “You can only fit so many chairs in there, at most two at a time,” he explained. “If it’s already full, you just have to wait for the next bus.”
  2. Paradoxically, wheelchairs both enable greater mobility and reinforce limitations in regard to users’ sense of self. Despite the sticker price, hardware is limited in choice, lacking features of empowerment such as safety lights, thicker wheels, and longer battery life. “To not be at the mercy of anyone else would be amazing,” said one wheelchair user. “I haven’t been in a car alone for over 15 years.”


︎A sampling of the stakeholders interviewed, both in person and remotely.



How might we design solutions that can benefit a broader population as well?


Participatory design
To engage relevant stakeholders in participatory design, we asked caregivers of wheelchair patients to create collages responding to the prompt, “What do you hope the future of mobility will look like?” Participants described their collages with words like “helpful,” “comfortable,” “fast,” and “convenient.” We additionally asked wheelchair users to write love letters to idealized mobility products and services. These letters introduced new concepts of flexibility and personalization – and emphasized the emotional weight of mobility. One participant wrote, “Words cannot express how much you mean to me because without you, I’d go nowhere in my life.”


︎Love letters to idealized mobility products and services, written by wheelchair users.


Furthermore, I rode around the Berkeley campus in a wheelchair myself, as a “walk-a-mile” exercise. I was struck by a visceral claustrophobia in halls and doorways; in a wheelchair, it was evident that the world was not designed for me. I had to consult building maps to identify accessible exits and ask for help when accessible door buttons were broken. Anything but the smoothest sidewalks felt terrifying; I was constantly worried about tipping over. I also understood firsthand how interviewees could feel invisible: sitting far below everyone else’s eye level, I often went unacknowledged.


︎Riding a wheelchair around the Berkeley campus as an empathy-building exercise.


Ideation, sketching, and storyboarding
Equipped with research insights, we used the Luma Institute’s Creative Matrix to identify possible solutions. After a democratic discussion, we created storyboards for our 10 most promising ideas. To design solutions with crossover potential – meaning, not just for wheelchair users – we used a “speed dating” approach with non-wheelchair users to get feedback on our storyboards. Even on more wildcard concepts, like a public wheelchair charging station, we heard helpful ways to broaden their appeal, such as, “Maybe add chargers for other devices (like phones),” and “Everybody’s battery dies. It would be good to have this, especially at night when you don’t want a dead phone.”

Based on participants’ top votes and client feedback, we developed a prototype of a hybrid service incorporating a few of the concepts. This prototype was guided by two design principles: community – making transit a time to promote connection with others – and inclusivity – normalizing accessible enhancements that enable mobility for everyone.



︎Evaluating feedback from "speed dating" storyboard concepts.



Outcomes


Our research culminated in Alula, a transit solution named after the bird’s “thumb,” which enables it to fly. In autonomous public transit, there are no drivers to make bus rides accessible for people living with mobility impairments: today, drivers both deploy the ramp and advocate for wheelchair users. With Alula, we consider an end-to-end transportation system that remains accessible, even in the absence of bus drivers. Key features are centered around our design principles:

Community
  1. Parklet bus stops. Waiting is more pleasant when bus stops are community gathering areas.
  2. Passenger notifications. Alula promotes social accountability by notifying passengers to make room for people that need extra space.
  3. Personalized messaging. Based on user input into a mobile application and GPS location, the passenger receives personalized reminders when it’s time to disembark – enabling conversation and peace of mind that they won’t miss their stop.

Inclusivity
  1. Modular bus seats. Flexible seating permits passengers to make room for people who are boarding the bus.
  2. Automatic ramps. Deploying the bus ramp at every stop prevents “other”-ing those with mobility impairments, making accessibility the norm.
  3. Priority seating requests. With a simple photo upload, all passengers can apply for priority seating to accommodate both long-term disabilities and temporary requests. Image detection enables immediate approval to support those needing extra space, while a long-term request goes through a more thorough human review.


︎Once the user inputs a destination, the app suggests nearby parklet bus stops.


︎Existing passengers are notified of priority seating requests and can make room with modular seats.


︎Ramps are automatically deployed at every stop, making accessibility the norm.



︎Personalized messaging and device charging promote passenger peace of mind.



Takeaways


Designing an end-to-end transportation experience was educational, leading me to think of possible improvements for future work. I learned the importance of communicating how the whole system works together: for instance, attempts to get feedback through UserTesting.com led to confusion among participants, as the app alone didn’t fully communicate the service. This process also provided valuable experience working with a client, emphasizing the importance of storytelling to communicate an overall solution.