Q+A W/ Victoria Sosik | Scaling Research Impact @ Google Maps and Beyond
We recently had Victoria Sosik, Director of UX Research at Verizon, on the Learners Recap Podcast. We discussed how she scaled research impact on Google Maps in 2019 through a variety of research programs, as well as how she reported insights to ensure the success of those programs.
Scaling Research Impact: Case Studies from Google Maps
The Q+A below is based on Victoria’s talk, recently released from the Learners Vault, called Scaling UXR Impact: Case Studies from Google Maps. In the original talk, she answered questions like:
- How can lean UXR teams deliver combined impact beyond just the sum of their parts?
- How do you make sure you are reporting insights at the right time to the right people?
Meet Victoria Sosik
Victoria earned her PhD in Human-Computer Interaction from Cornell University and has authored numerous scientific publications. She thinks a lot about what it means to be an impactful researcher and an impactful UX research team. She is currently the Director of UX Research at Verizon, where she leads a team of researchers working across a wide variety of consumer and business products and experiences. Prior to Verizon, she spent six years scaling research impact and growing the team at Google Maps.
How did you deal with the pressure of reporting on immediate impact during that first trip you mentioned where you took the team to Delhi?
I think we very much tried to build it into how we thought about the deliverables and outcomes of the trip. While we were there, we were constantly thinking about what we could deliver in real time and how we could show the insights as they came out of it. We had a command center based in New York that was trying to bridge the time zones, capture what we were learning, and share out some digests.
That was one of the reasons for the experiment sprint as well. One of the most immediate things we could do was try a couple of experiments to address some of the lower-hanging fruit, something quick we could run to see what happened. The experiment sprint allowed us to quickly come up with simple things to try in the experience, so that was one of the more short-term pieces. And of course there was impact that went years and years beyond, because a lot of the research we were gathering on that trip was foundational. We were just trying to be really intentional about creating activities and designing insights that would span both the here and now and down the line.
Can you provide a little more detail on how you were dripping out insights while things were happening?
Yeah, and I’ll be honest: this particular project was from so long ago that I’m trying to remember the exact details. If I recall, the initial goal was to do daily digest-type summaries. I think this actually fell apart in the moment because we were in places where connectivity wasn’t great, so we ran into some logistical hurdles just making sure we could get everything uploaded the way we initially planned. The idea was that we could send out a couple of top highlights across all four of the cities we were in, on the daily.
What it turned into was doing some quick-turn decks on the different slices of what we were looking at after we returned, like a “by city” deck and a “by topic” deck. I think that’s where we ended up sharing more out. But if you’re in a position where you can coordinate across time zones, I think the daily digest is a great idea.
Can you speak a bit to what some of the blockers to building credibility for research within the organization are? This is something you’ve talked about a lot across all of your case studies.
Some of the blockers for building credibility: in my opinion it’s a little less about speed, and more about connecting what we’re finding to real business impact. I think sometimes we stop short of making that connection for people, of really doing the work to show that this insight lays the groundwork for things that affect our bottom line, whatever the bottom line may be or whatever metrics the company cares about. That’s actually more what I think about, because if you can make that point, people understand and they’re willing to invest the time.
And you know, you can start small; you don’t need to jump right into things that take a ton of time. The urban jungle project was far from the first project we did as a research team. We had been building credibility as we went, and that allowed us to get the buy-in for something as massive as the urban jungle project.
When tailoring your insights to different teams and different levels, what did it look like when you were at Google Maps? What were the differences in tailoring insights to executives vs. teams on the ground?
I think it comes down to putting yourself in your audience’s shoes. What kinds of decisions are they making on a daily basis? What’s top of mind for them, either because they’re being held accountable to it, or because they’ve said it’s a key part of their strategy? Then align what you’re sharing directly at that level.
That’s what it comes down to in many ways when you’re thinking about sharing insights at different levels. It can be inspiring to share big-picture, white-space-type work with teams that are in the middle of executing on a specific project. But realistically, if they’re working against roadmaps to deliver on thing X, those insights may not be super actionable for them at the moment. An executive who is in the process of planning for 2022, on the other hand, is thinking exactly in that space, and that kind of work could inform their decisions right now.
When you were describing the optimal window of time to have impact after a large project, you said that “being too close to the planning” would lead to lower impact. How close is too close? When did half-yearly or yearly planning occur, so that you could predict when it was best to plan these large trips?
I mean, trial and error, 100%. You may be hearing this and thinking, “Planning cycle? What planning cycle?” There may not be a defined planning cycle, and you’ll need to do a little digging and feeling out: at what stages do you see new projects popping up, for example?
If you have a planning cycle, then the question is: what are the key activities in it? Is there a point when project leads or directors, or whoever in your organization is part of the process, need to provide input? Generally the input they need to provide is ideas and things they want to do for the next year. So when does that input need to happen? Then back it up a couple of weeks from there and ask them, “How do you plan to go about that? Is there a way the research team can best support you in that process?” Don’t be afraid to ask, and think about the key moments. In reality, it will probably always be a little earlier than you think, but you hone it through trial and error in your organization.
Special thanks to everyone who asked questions!
Want to join our next Q+A live? Check out the Learners calendar!