August 23, 2012
Ways to improve peer-nominations.
Marks, P.E.L., Babcock, B., Cillessen,A.H.N., & Crick, N.R. (in press). The effects of participation rate on the internal reliability of peer nomination measures. Social Development.
This is a summary of an article on a topic most relevant to those who are actually trying to collect data on peer aggression and peer victimization. The article has been accepted for publication but has not yet been published.
The focus here is on whether peer nominations are a reliable way to assess the social behaviours and relationships of children. Self-reports are often used for these purposes, but it has been argued that these are subject to biases which may make us question their reliability. For example, people may be unwilling to admit to bullying others but may be all too ready to report that they themselves intervene when they see others being bullied. There are also other statistical issues relating to shared-method variance that are important (I won’t go into these here, but see here for more information). So, researchers often use peer nominations instead, on the understanding that children and young people will report more honestly on what their peers are like than on what they themselves are like.
However, peer nomination procedures are, of course, based on getting reports from peers, and when we conduct research we rarely get 100% participation rates. So the question here is whether participation rates influence how reliable peer nominations are. Put another way, are these measures poor ways to assess children and young people’s social behaviours and relationships if too few students choose to take part in a research study?
These researchers recruited 642 young people aged approximately 10-11 years old from 10 elementary schools in the USA. They collected peer nominations of friendship, liking/peer acceptance, popularity, overt aggression, and prosocial behaviour.
Not surprisingly, reliability (Cronbach’s alpha) was higher when participation was higher. However, overt aggression and prosocial behaviour were the most reliable, and liking and friendship were the least reliable. The authors muse over whether this is to do with the difference between observable, concrete behaviours (e.g. hitting someone) and things that require much more personal judgement (whether someone is a friend or not). I’d add that if this line of reasoning is correct, then indirect aggression may be less reliable than overt aggression, since indirect aggression can be less obvious to observers.
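For readers who want to check the reliability of their own nomination items, here is a minimal sketch of how Cronbach’s alpha can be computed. This is not the authors’ code, and the data below are entirely made up for illustration: rows are children being rated, columns are nomination items, and each value is the proportion of participating classmates who nominated that child on that item.

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a set of items.

    items: a list of columns, one per item, each a list of scores
    for the same set of children (in the same order).
    """
    k = len(items)  # number of items
    n = len(items[0])  # number of children rated
    # Sum of the individual item variances.
    item_var_sum = sum(pvariance(col) for col in items)
    # Variance of each child's total (summed) score across items.
    totals = [sum(col[i] for col in items) for i in range(n)]
    total_var = pvariance(totals)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Hypothetical nomination proportions for five children on three
# overt-aggression items (e.g. "hits others", "pushes others", ...).
aggression_items = [
    [0.10, 0.40, 0.05, 0.60, 0.20],
    [0.15, 0.35, 0.10, 0.55, 0.25],
    [0.05, 0.45, 0.10, 0.50, 0.15],
]
print(round(cronbach_alpha(aggression_items), 2))  # 0.98
```

Because the three made-up items rank the children almost identically, alpha comes out very high; in real data, lower participation rates add noise to each column and pull this figure down.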
The authors also note that asking for unlimited nominations, rather than, say, a ‘top-3’ most aggressive peers, is best. In addition, using more nomination items (e.g. two peer nomination items assessing prosocial behaviour rather than one) was also associated with greater reliability.
Based on the authors’ expertise as well as their results, they suggest that participation rates of 40% may be sufficient for good reliability when measuring overt aggression and prosocial behaviour. However, for friendships, reliability may be problematic even when participation rates are over 85%. Perhaps friendships need to be operationalized in terms of specific behaviours; this could help to improve reliability.