Getting real about evaluating public relations

Simon | General

Reading PR Week‘s campaigns section this week has wound me up a touch about evaluation in public relations.

I can’t find a copy of the article online tonight, but it’s a case study about Aviva‘s sponsorship of a world record attempt to sail around Britain and Ireland by an all-female crew.

The objectives for the campaign are listed as:

  • To increase awareness of Aviva within the UK and support the name change campaign
  • To strengthen the association between Aviva and Dee Caffari (the lead sailor)
  • To extend beyond the traditional sailing audience

Reading these, I found myself wondering if they’re actually real objectives. For a start, they’re not specific about what they’re aiming to achieve empirically – PR and marketing students will be very familiar with the concept of SMART objectives (specific, measurable, achievable, relevant and time-bound), which these definitely aren’t.

The article then goes on to describe a decent campaign and what the agency actually did.

But it’s when it comes to the measurement and evaluation section that it all falls apart. I know it will have been subbed down, but the headlines from the evaluation of the campaign were:

…100 pieces of coverage were generated over a one-month period…one third of these were on TV and totalled 42 minutes of UK airtime.

The BBC covered the world record attempt from departure to completion, and covered the completion live with seven pieces on BBC One’s Breakfast programme and six on the BBC news channel.

In total 12 pieces of national print coverage appeared in various publications…seventy-one per cent of all coverage featured a logo or photograph.

Eh? Surely evaluation needs to address the reasons why you did the campaign in the first place?

Leaving aside that it would be hard to know whether the stated objectives were ever achieved because of their lack of SMART-ness, how does that pass for evaluation? It looks at some of the outputs of the campaign (ie coverage) but doesn’t even come close to considering the outcomes (ie what happened as a result of the coverage).

In the second opinion part of the feature where an external PR person takes a critical look at the campaign, it’s claimed there was a 93:1 return on investment on the campaign.

Given these objectives, and evaluation that focusses on coverage as this one seems to, I’m really struggling to see how anyone could come up with a credible return on investment figure. I’m guessing the ugly head of advertising value equivalent (AVE) has probably popped up somewhere though.

I’m being a bit narky about this one – I’m sure the campaign was very nice, but it does wind me up a bit because it’s so familiar – woolly and unclear objectives and misleading or non-existent evaluation are too common in public relations.

In a time when budgets are under ever-increasing scrutiny, public relations professionals have got to be able to frame campaign objectives measurably and in terms that link PR success to organisational goals, whether they are commercial, behavioural or reputational. And they’ve got to ensure they have methods in place to provide valid empirical evidence about whether the campaign has delivered those objectives.

Without both those things, I fear public relations will continue to struggle to justify its activities and be accountable for what it contributes to the organisation.