It’s “Connect Season” again at Microsoft, a biannual-ish period when Microsoft employees are tasked with filling out a document about their core priorities, key deliverables, and accomplishments over the past year, concluding with a look ahead to their goals for the next six months to a year.
The Connect document is then used as a conversation-starter between the employee and their manager. While this process is officially no longer coupled to the “Rewards” process, it’s pretty obviously related.
One of the key tasks in both Connects and Rewards processes is evaluating your impact— that is to say, what changed in the world thanks to your work?
We try to break impact down into three rings: 1) Your own accomplishments, 2) How you built upon the ideas/work of others, and 3) How you contributed to others’ accomplishments. The value of #1 and #3 is pretty obvious, but #2 is just as important– Microsoft now strives to act as “One Microsoft”, a significant improvement over what was once described to prospective employees as “A set of competing fiefdoms, perpetually at war” and drawn eloquently by Manu Cornet:
By explicitly valuing the impact of building atop the work of others, duplicate effort is reduced, and collaboration enhances the final result for everyone.
While these rings of impact can seem quite abstract, they seem to me to be a reasonable framing for a useful conversation, whether you’re a Level 59 new hire PM, or a Level 67 Principal Software Engineer.
The challenge, of course, is that measurement of impact is often not at all easy.
When writing the “Looking back” portion of your Connect, you want to capture the impact you achieved, but what’s the best way to express that?
Obviously, numbers are great, if you can get them. However, even if you can get numbers, there are so many to choose from, and sometimes they’re unintentionally or intentionally misleading. Still, numbers are often treated as the “gold standard” for measuring impact, and you should try to think about how you might get some. Ideally, there will be some numbers which can be readily computed for a given period. For instance, my most recent Connect noted:
While this provides a cheap snapshot of impact, there’s a ton of nuance hiding there. For example, my prior Connect noted:
Does this mean that I was less than half as impactful this period vs. the last? I don’t think so, but you’d have to dig into the details behind the numbers to really know for sure.
Another popular metric is the number of users of your feature or product, because this number, assuming appropriate telemetry, is easy to compute. For example, most teams measure the number of “Daily Active Users” (DAU) or “Monthly Active Users” (MAU).
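As a rough illustration, DAU and MAU are just counts of distinct users seen in a window of telemetry events. The sketch below uses made-up in-memory data (the event tuples and function names are illustrative, not any product’s actual telemetry pipeline):

```python
from datetime import date

# Hypothetical telemetry events: (user_id, event_date) pairs.
events = [
    ("alice", date(2023, 5, 1)),
    ("bob",   date(2023, 5, 1)),
    ("alice", date(2023, 5, 2)),
    ("carol", date(2023, 5, 20)),
]

def daily_active_users(events, day):
    """Count distinct users with at least one event on the given day."""
    return len({user for user, d in events if d == day})

def monthly_active_users(events, year, month):
    """Count distinct users with at least one event in the given month."""
    return len({user for user, d in events
                if d.year == year and d.month == month})

print(daily_active_users(events, date(2023, 5, 1)))  # 2 (alice, bob)
print(monthly_active_users(events, 2023, 5))         # 3 (alice, bob, carol)
```

The key detail is the set comprehension: a user who fires a thousand events in a day still counts once, which is what makes DAU/MAU cheap to compute but easy to game with “first run” nags.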
While I had very limited success in getting Microsoft to recognize the value of my work on my side project (the Fiddler Web Debugger), one thing that helped a bit was when our internal “grassroots innovation” platform (“The Garage”) added a simple web service where you could track usage of any tool you built. I was gobsmacked to discover that Fiddler was used by over 35000 people at Microsoft, back then over one out of every three employees in the entire company.
Hard numbers bolstered anecdotal stories (e.g. the time when Microsoft’s CTO/CSA called me at my desk to help him debug something and I was about to guide him into installing Fiddler only to have him inform me that he “used it all the time.”)
When Fiddler was being scouted for acquisition by devtool companies, I quickly learned that they weren’t particularly interested in the code — they were interested in the numbers: how many downloads (14K/day), how many daily active users, and any numbers that might reveal what users were doing with it (enterprise software developers monetize better than gem-farming web gamers).
A few years prior, my manager walked into my office and noted “As great as you make Fiddler, no matter how many features you add or how great you make them, nothing you do will ever have as much impact as you have on Internet Explorer.” And there’s a truth to that– while Fiddler probably peaked at single-digit millions of users, IE peaked at over a billion. When I rewrote IE’s caching logic, the performance savings were measured in minutes individually and lifetimes in aggregate.
Unfortunately, there’s a significant risk to making “Feature Usage” a measure of impact– it means that there’s a strong incentive for every feature owner to introduce/nag/cajole as many people as possible into using a feature. This often manifests as “First Run” ads, in-product popups, etc. Your product risks suffering a tragedy of the commons effect whereby every feature team is incentivized to maximize user exposure to their feature, regardless of the level of appropriateness or the impact to users’ satisfaction of the product as a whole.
When a measure becomes a target, it ceases to be a good measure.
Goodhart’s Law
When trying to demonstrate business impact, the most powerful metric is your impact on profitability, measured in dollars. Sadly, this metric is often extremely difficult to calculate: distinguishing the revenue impact of a single individual’s work on a massive product is typically either wildly speculative or very imprecise. However, once in a great while there’s a clear measure: My clearest win was nearly twenty years ago, and remains on my resume today:
Saving $156,000 a year in costs (while dramatically improving user-experience– a much harder metric to measure) at a time when I was earning around half of that sum was an incredibly compelling feather in my cap. (As an aside, perhaps my favorite example of this ever was reading the OKRs of the inventor of Brotli compression, who noted the annual bandwidth savings for Google and then converted that dollar figure into the corresponding numbers of engineers based on their yearly cost. “Brotli is worth <x> engineers, every year, forever.”)
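The Brotli anecdote is just a unit conversion: annual dollar savings divided by the fully loaded annual cost of an engineer. A minimal sketch, using placeholder figures rather than the actual OKR numbers:

```python
def engineer_equivalents(annual_savings_usd, cost_per_engineer_usd):
    """Convert an annual dollar saving into 'engineer-years per year'."""
    return annual_savings_usd / cost_per_engineer_usd

# Illustrative placeholders only -- not Google's real figures.
bandwidth_savings = 30_000_000  # hypothetical annual bandwidth savings, USD
cost_per_engineer = 300_000     # hypothetical fully loaded annual cost, USD

print(engineer_equivalents(bandwidth_savings, cost_per_engineer))  # 100.0
```

Under these made-up assumptions, the compression work is “worth 100 engineers, every year, forever” — which is exactly why framing savings this way is so rhetorically effective.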
Encouraging employees to evaluate their Profit Impact is somewhat risky, however– oftentimes, engineers are not interested in the business side of the work they do and consider it somewhat unseemly — “I’m here to make the web safe for my family, not make a quick buck for a MegaCorp.” Even for engineers who accept the deal (“I recognize that we only get to spend hundreds of millions of dollars to give away this great product because it makes the company more money somewhere”) it can be very uncomfortable to try to generate a precise profitability figure– engineers like accuracy and precision, and even with training in business analysis, calculation of profit impact is usually wildly speculative. You usually end up with a SWAG (silly wild-ass guess) and the fervent hope that no one will poke at your methodology too hard.
A significant competitive advantage held by the most successful software companies is that they don’t need to bother their engineers with the business specifics. “Build the best software you can, and the business will take care of itself” is a simple and compelling message for artisans working for wealthy patrons. And it’s a good deal for the leading browser business: when the product at the top of your funnel costs you 9 digits per year and brings in 12 digits worth of revenue, you can afford to not demand the artisans think too deeply about the bottom line.
Of course, numbers aren’t the only way to demonstrate impact. Another way is to tell stories about colleagues you’ve rescued, customers you’ve delighted, problems you’ve solved and disasters you’ve averted.
Stories are powerful, engaging, and usually more interesting to share than dry metrics. Unfortunately, they’re often harder to collect (customers and partners are often busy, and it can feel awkward to ask for quotes/feedback about impact). Over the course of a long review period, they’re also sometimes hard to even remember. Starting in 2016, I got in the habit of writing “Snippets”, a running text log of what I’ve worked on each day. Google had internal tooling for this (mostly for aggregating and publishing snippets to your team), but nowadays I just have a snippets.txt file on my desktop. Both Google and Microsoft have an employee “Kudos” tool that allows other employees to send the employee (and their manager) praise about their work, which is useful for both visibility and record-keeping (since you can look back at your kudos years later). I also keep a Kudos folder in Outlook to save (generally unsolicited) feedback from customers and partners on the impact of my work.
Even when recounting an impact story, you should enhance it with numbers if you can. “I worked late to fix a regression for a Fortune 500 customer” is a story– “…and my fix unblocked deployment of Edge as the default browser to 30000 seats.” is a story with impact.
Another challenge with storytelling as an approach to measuring impact is that our most interesting stories tend to involve frantic, heroic, and extraordinary efforts or demonstrations of uncommon brilliance, but the reality is that oftentimes the impact of our labor is greater when competently performing the workaday tasks that head off the need for such story-worthy events. As I recently commented on Twitter:
We have to take care not to incentivize behaviors that result in great stories of “heroic firefighting” while neglecting the quiet work that obviates the need for firefighting in the first place. But quantifying the impact is hard– how do you measure the damage from a hurricane that didn’t happen?
My most recent Connect praised me as having done “a great job of being our last line of defense” which I found quite frustrating– while I do get a lot of visibility for fixing customer problems that have no clear owners, my most valuable efforts are in helping ensure that we fix problems before customers even experience them.
Related to this is the relationship of speed to impact— the sooner you make an adjustment, the smaller the adjustment needs to be. Flag an issue in the design of a feature and you don’t have to update the code. Catch a bug in the code before it ships and no customer will notice. Find a bug in Canary before it reaches Beta and developers will not need to cherry-pick the fix to another branch. Fix a regression before it ships to Stable and you reduce the potential customer impact by very close to 100%.
Similarly, any investment in tools, systems, and processes to tighten the feedback loop has broad impact across the entire product. Checking in a fix for a customer-reported bug quickly only delights if that customer can benefit from the fix quickly.
Unfortunately, because speed reduces effort (a faster fix is cheaper), it’s too easy to fall into the trap of thinking it had lower impact.
A key point arises here– impact is not a direct function of effort, which is only one input into the equation.
A friend once lamented his promotion from Level 63 to 64 noting “It’s awful. I can’t work any harder.” and while I’ve felt the same way, we also both know that even the highest-levelled employees don’t get more than 24 hours in the day, and most of them retain some semblance of work/life balance.
We’re not evaluated on our effort, but on our impact. Carefully selecting the right problems to attack, having useful ideas/subject matter expertise, working with the right colleagues, and just being lucky all have a role to play.
At junior levels, the expectation is that your manager will assign you appropriate work to allow you to demonstrate impact commensurate with your level. If something out of your control happens (a PM’s developer leaves the team, so their spec is shelved), your “opportunity for impact” is deemed to be lower and taken into account in evaluations.
As you progress into the Senior band and beyond, however, “opportunity for impact” is implicitly unlimited. The higher level you get, the greater the expectation that you will yourself figure out what work will have the highest impact, then go do that work. If there’s a blocker (e.g. a partner team declines to do needed work), you’re responsible for figuring out how to overcome that blocker.
Within the Principal band, I’ve found it challenging to try to predict where the greatest opportunity for impact lies. For the first two years back at Microsoft, I was unexpectedly impactful, as (then) the only person on the team to have ever worked as a Chromium Developer– I was able to help the entire Edge team ramp up on the codebase, tooling, and systems. I then spent a year or so as an Enterprise Fixer, helping identify and fix deployment blockers preventing large companies from adopting Edge. Throughout, I’ve continued to contribute fixes to Chromium, investigate problems, blog extensively, and try to help build a great engineering culture. Many of these investments receive and warrant no immediate recognition– I think of them as seeds I’m optimistically planting in the hopes that one day they’ll bear fruit. Many times I will take on an investigation or fix for a small customer, both in the hope that I’m also solving something for a large customer who just hasn’t noticed yet, and because there’s an immediate satisfaction in helping out an individual even if the process doesn’t seem like it could possibly scale.
As you move up the ranks, one popular way to increase your impact is to become a manager. As a manager, you are, in effect, deemed partly responsible for the output of your team, and naturally the impact of a team is higher than that of an individual.
Unfortunately, measuring your personal contribution to the team’s output remains challenging– if you’re managing a team of star performers, would they continue to be star performers without you overhead? On the other hand, if you’re leading a team of underachievers, the team’s impact will be low, and there are limits to both the speed and scope of a manager’s ability to turn things around.
As a manager, your impact remains very much subject to the macro-environment– your team of high performers might have high attrition because you’re a lousy manager, or in spite of you being a great manager (because your team’s mission isn’t aligned with employees’ values, because your compensation budget isn’t competitive with the industry, etc.).
Beyond measuring your own impact, you’re now responsible for the impact of your employees– assigning or guiding them toward the highest impact opportunities, and evaluating the impact of the outcomes. You’re also responsible for explaining each employee’s impact to the other leaders as a part of calibrating rewards across the team. Perhaps unfortunately for everyone, this process is mostly opaque to individual contributors (who are literally not in the room), leaving your ICs unable to determine how effectively you advocated on their behalf beyond looking at their compensation changes.
One difficult challenge is that, “One Microsoft” aside, employee headcount and budgets are assigned by team. With the exception of some cross-division teams, most of your impact only “counts” for rewards if it’s for your immediate peers, designated partner teams, or customers.
It is very hard to get rewarded for impact outside of that area, even if it’s unquestionably valuable to the organization as a whole.
Around 2009 or so, my manager walked into my office and irreverently noted “You’re an idiot, you know.” I conceded that was probably true, but asked “Sure, but why specifically?” He beckoned me over to the window and pointed down at the parking lot. “See that red Ferrari down there?” I nodded. He concluded “As soon as you thought of Fiddler, you should’ve quit, built it, and had Microsoft buy you out. Then you’d be driving that instead of a Corolla.” I laughed and noted “I’m no Mark Russinovich, and Microsoft clearly doesn’t want Fiddler anyway.” But this was a problem of organizational alignment, not value– Microsoft was using Fiddler extremely broadly and very intensely, but because it was not well-aligned with any particular product team, it received almost no official support. I’d offered it to Visual Studio, who made some vague mention of “investing in this area in some future version” and were never heard from again. I offered to write an article for MSDN Magazine, who rejected me on the grounds that the tool was “Not a Microsoft product” and thus not within their charter, despite its broad use exclusively by developers on Windows. Several leads strongly implied that my work on Fiddler was evidence that I could be working harder at “my day job.”
Ultimately, I won an Engineering Excellence award for Fiddler, for which I got a photo with Bill Gates, an official letter of recognition, and $5000 for a morale event for my “team.” Lacking a team, I went on a Mediterranean cruise with my girlfriend.
Of course, there have been many non-official rewards for years of effort (niche fame, job opportunities, friendships) but because of this lack of alignment with the organization, even broad impact was hard for Microsoft to reward.
Our CEO once famously got in trouble for suggesting that employees passed over for promotion should be patient and “karma” would make things right. While the timing and venue for this response were not ideal, it’s an idea that has been around at the company for decades. Expressed differently, reality has a way of being discovered eventually, and if you’re passed over for a deserved promotion, it’s likely to get fixed in the next cycle. In the other direction, one of the most painful things that can happen is a premature promotion, whereby you go from being a solid performer with level-appropriate impact to underachieving versus expectations.
I spent six long years in the PM2 band before we had new leaders who joined and recognized the technical impact I’d been delivering on the team; I went from 62 to 63 in five months.
In hindsight, I was probably too passive in evaluating and explaining my impact to leaders during those long years, and I probably could have made my case earlier if I’d spent a bit more energy on doing so. I had a pretty dismissive attitude toward “career management,” and while I thought I was making things easier on my managers, the net impact was nearly disastrous– I came close to quitting in disgust because “they just don’t get it.”
How do you [maximize|measure|explain] your impact?
-Eric