Sunday, September 6, 2015

How Deloitte Revamped Its Performance Management System

Sources and Acknowledgements:
Most of the text below is adapted (directly and indirectly) from the articles linked below, so all credit for the following material goes to their authors.

Motivation behind Deloitte's change:
1. Like many other companies, we realize that our current process for evaluating the work of our people—and then training them, promoting them, and paying them accordingly—is increasingly out of step with our objectives.
2. In a public survey Deloitte conducted recently, more than half the executives questioned (58%) believe that their current performance management approach drives neither employee engagement nor high performance. They, and we, are in need of something nimbler, real-time, and more individualized—something squarely focused on fueling performance in the future rather than assessing it in the past.

Shortcomings of the traditional system as observed by Deloitte:
1. Employees thought the existing process was fair, but management saw it as outdated. Internal feedback demonstrates that our people like the predictability of this process and the fact that because each person is assigned a counselor, he or she has a representative at the consensus meetings. The vast majority of our people believe the process is fair. We realize, however, that it’s no longer the best design for Deloitte’s emerging needs: Once-a-year goals are too “batched” for a real-time world, and conversations about year-end ratings are generally less valuable than conversations conducted in the moment about actual performance.

2. But the need for change didn’t crystallize until we decided to count things. Specifically, we tallied the number of hours the organization was spending on performance management—and found that completing the forms, holding the meetings, and creating the ratings consumed close to 2 million hours a year. As we studied how those hours were spent, we realized that many of them were eaten up by leaders’ discussions behind closed doors about the outcomes of the process. We wondered if we could somehow shift our investment of time from talking to ourselves about ratings to talking to our people about their performance and careers—from a focus on the past to a focus on the future.

3. The most comprehensive research on what ratings actually measure was conducted by Michael Mount, Steven Scullen, and Maynard Goff and published in the Journal of Applied Psychology in 2000. Their study—in which 4,492 managers were rated on certain performance dimensions by two bosses, two peers, and two subordinates—revealed that 62% of the variance in the ratings could be accounted for by individual raters’ peculiarities of perception. Actual performance accounted for only 21% of the variance. This led the researchers to conclude (in How People Evaluate Others in Organizations, edited by Manuel London): “Although it is implicitly assumed that the ratings measure the performance of the ratee, most of what is being measured by the ratings is the unique rating tendencies of the rater. Thus ratings reveal more about the rater than they do about the ratee.”
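A quick arithmetic check on the figures above: if 62% of rating variance reflects the individual rater and 21% reflects the ratee's actual performance, the remaining share is attributable to other sources and measurement error. A minimal sketch:

```python
# Variance shares reported by Scullen, Mount, and Goff (2000)
rater_idiosyncrasy = 0.62   # share of rating variance explained by the rater's tendencies
actual_performance = 0.21   # share explained by the ratee's actual performance

# Remaining variance (other factors and measurement error)
residual = 1.0 - rater_idiosyncrasy - actual_performance
print(f"residual share: {residual:.2f}")
```

In other words, roughly three times as much of a rating is about the rater as is about the person being rated, which is the core of the argument against consensus ratings.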

4. We also learned that the defining characteristic of the very best teams at Deloitte is that they are strengths oriented. Their members feel that they are called upon to do their best work every day. We wanted to spend more time helping our people use their strengths—in teams characterized by great clarity of purpose and expectations—and we wanted a quick way to collect reliable and differentiated performance data.

What changes to the performance management system did Deloitte embrace?:
1. What Deloitte’s new system will include and what it won’t. It will have:
   a. no cascading objectives,
   b. no once-a-year reviews, and
   c. no 360-degree-feedback tools.
2. We’ve arrived at a very different and much simpler design for managing people’s performance. Its hallmarks are speed, agility, one-size-fits-one, and constant learning, and it’s underpinned by a new way of collecting reliable performance data.
3. At the end of every project (or once every quarter for long-term projects) we will ask team leaders to respond to four future-focused statements about each team member. We’ve refined the wording of these statements through successive tests, and we know that at Deloitte they clearly highlight differences among individuals and reliably measure performance. Here are the four:
   a. Given what I know of this person’s performance, and if it were my money, I would award this person the highest possible compensation increase and bonus [measures overall performance and unique value to the organization on a five-point scale from “strongly agree” to “strongly disagree”].
   b. Given what I know of this person’s performance, I would always want him or her on my team [measures ability to work well with others on the same five-point scale].
   c. This person is at risk for low performance [identifies problems that might harm the customer or the team on a yes-or-no basis].
   d. This person is ready for promotion today [measures potential on a yes-or-no basis].
4. In effect, we are asking our team leaders what they would do with each team member rather than what they think of that individual.
5. Our design calls for every team leader to check in with each team member once a week. For us, these check-ins are not in addition to the work of a team leader; they are the work of a team leader. If you want people to talk about how to do their best work in the near future, they need to talk often.
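The four snapshot items above amount to a small data schema: two five-point agreement items and two yes-or-no items per team member. A minimal sketch of how such a response might be recorded (the field names and scale encoding are illustrative assumptions, not Deloitte's actual implementation):

```python
from dataclasses import dataclass

@dataclass
class PerformanceSnapshot:
    """One team leader's end-of-project (or quarterly) snapshot of one team member.

    Items (a) and (b) use a five-point agreement scale:
    5 = strongly agree ... 1 = strongly disagree.
    Items (c) and (d) are yes-or-no.
    """
    team_member: str
    comp_and_bonus: int        # (a) would award the highest compensation increase and bonus
    want_on_my_team: int       # (b) would always want him or her on my team
    at_risk: bool              # (c) at risk for low performance
    ready_for_promotion: bool  # (d) ready for promotion today

    def __post_init__(self):
        for score in (self.comp_and_bonus, self.want_on_my_team):
            if not 1 <= score <= 5:
                raise ValueError("Agreement items must be on a 1-5 scale")

# Example: a snapshot for one (hypothetical) team member
snap = PerformanceSnapshot("A. Example", comp_and_bonus=4,
                           want_on_my_team=5, at_risk=False,
                           ready_for_promotion=True)
```

Collecting responses in this per-leader, per-project form is what makes the data "reliable and differentiated": each leader only answers about what he or she would *do*, and the aggregation happens later.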

Additional comments:
1. This is where we are today: We’ve defined three objectives at the root of performance management—to recognize, see, and fuel performance. We have three interlocking rituals to support them—the annual compensation decision, the quarterly or per-project performance snapshot, and the weekly check-in. And we’ve shifted from a batched focus on the past to a continual focus on the future, through regular evaluations and frequent check-ins.
2. Deloitte's previous performance management:
Objectives are set for each of our 65,000-plus people at the beginning of the year; after a project is finished, each person’s manager rates him or her on how well those objectives were met. The manager also comments on where the person did or didn’t excel. These evaluations are factored into a single year-end rating, arrived at in lengthy “consensus meetings” at which groups of “counselors” discuss hundreds of people in light of their peers.

Around what time frame were the changes brought in?:
Likely 2015
