Multisource feedback assessment (MSFA) is a common tool in management development and performance research. It rests on the belief that multisource assessment supports valid inferences about a person's behavior. Yet the typical data analysis strategy is limited, somewhat flawed, and preoccupied with interrater correlations. This article aims to show that differences in perspective and criteria among raters should be an integral element of the assessment, by treating each trait-rater perspective combination as a dyad represented by a latent variable. A more comprehensive analysis requires testing for factorial invariance before making any comparison between self-assessments and other raters' assessments. The strategy is illustrated by applying the proposed approach to the measurement of a specific set of traits: the competencies included in the Emotional and Social Competencies Inventory, a 360° survey method used in competency research. Data come from a sample of MBA students enrolled in a leadership development course at a Spanish business school. Results show that: a) the observed differences in the distributions of the underlying competencies, as measured by external raters and by self-assessment, can be attributed to the different meanings attached to those competencies; b) item reliability depends on the homogeneity of the rater group; c) the loadings within the dyads suggest ways to improve item wording and selection; and d) some competencies can be better assessed by one category of raters than by another. Theoretical and practical implications are discussed.
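Finding (b) — that item reliability depends on the homogeneity of the rater group — can be illustrated with a minimal, hypothetical sketch. This is not the authors' analysis (which relies on multigroup factor models); it simply simulates two rater groups, one rating a competency consistently and one noisily, and computes Cronbach's alpha for each. All names and data below are illustrative.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_ratees, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 50 ratees scored on 4 items of one competency.
rng = np.random.default_rng(0)
true_level = rng.normal(size=(50, 1))                          # latent competency
homogeneous = true_level + 0.2 * rng.normal(size=(50, 4))      # consistent raters
heterogeneous = true_level + 1.0 * rng.normal(size=(50, 4))    # inconsistent raters

alpha_homog = cronbach_alpha(homogeneous)
alpha_heterog = cronbach_alpha(heterogeneous)
print(f"alpha, homogeneous rater group:   {alpha_homog:.2f}")
print(f"alpha, heterogeneous rater group: {alpha_heterog:.2f}")
```

The same four items appear far more reliable when the rater group is homogeneous, which is why reliability cannot be attributed to the instrument alone, independently of who is doing the rating.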

ESADE


Batista Foguet, Joan M.; Saris, Willem Egbert; Boyatzis, Richard; Serlavós Serra, Ricard

Why multisource assessment and feedback has been erroneously analysed and how it should be

16th EURAM Annual Conference 2016
European Academy of Management (EURAM)
Brussels (Belgium), 01/06/2016 - 01/06/2016

