Social media and lodging performance

Chris Anderson from the School of Hotel Administration at Cornell recently released a report through the Center for Hospitality Research (CHR) summarizing the results of a series of studies he’s conducted to determine “The Impact of Social Media on Lodging Performance”.  Thanks to the CHR’s relationships with its research partners, Chris was able to bring together data from three of them (ReviewPro, STR and Travelocity), along with data from two other providers (comScore and TripAdvisor).  This allowed him to provide a unique perspective on how social media moves markets.

As our loyal readers know, I am keenly interested in this area, and have done some research in it myself.  This research is quite complementary to what Breffni Noone and I have focused on.  We studied consumer reaction to user generated content (UGC) and price in our work, while Chris is looking at UGC and hotel performance.

Chris’s series of studies can be summarized in three findings:

  1. The percentage of consumers who use reviews on TripAdvisor is increasing steadily
  2. If a hotel can increase their aggregate user rating by one point (e.g. 3.3 to 4.3), they could increase their price by 11.2% before impacting occupancy
  3. A 1% increase in a hotel’s reputation score (as measured by ReviewPro’s Global Review Index™) leads to a 0.89% increase in ADR, a 0.54% increase in occupancy and a 1.42% increase in RevPAR (a quick arithmetic check of how these lifts combine appears just below).
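
To see how the numbers in that third finding hang together, here is a quick arithmetic sketch.  The starting ADR and occupancy figures are hypothetical; the percentage lifts are the ones reported above.  Since RevPAR = ADR × occupancy, the ADR and occupancy lifts compound into roughly the reported RevPAR lift:

```python
# Illustrative arithmetic only; the starting ADR and occupancy are hypothetical,
# the 0.89% / 0.54% / 1.42% relationships are the ones reported in the study.
adr = 150.00        # average daily rate (hypothetical)
occupancy = 0.70    # occupancy rate (hypothetical)
revpar = adr * occupancy  # RevPAR = ADR x occupancy

# Apply the reported lifts for a 1% increase in reputation score
new_adr = adr * 1.0089
new_occupancy = occupancy * 1.0054
new_revpar = new_adr * new_occupancy

print(f"RevPAR before: {revpar:.2f}")                   # 105.00
print(f"RevPAR after:  {new_revpar:.2f}")               # ~106.51
print(f"RevPAR lift:   {new_revpar / revpar - 1:.2%}")  # ~1.43%, in line with the reported 1.42%
```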

Chris will discuss his methodology and results in more detail in our upcoming CHR/SAS webcast to air in February 2013.  You can also download his paper from the CHR.  Instead of repeating what you can get from him, I’ll give you my perspective on his results.  As always, we’d love to hear your thoughts as well, so we look forward to your comments, questions and ideas!

The first point I would like to make is that in Chris’s study, even though he uses the word “reviews”, with the exception of the work on TripAdvisor, he is actually working with user ratings (i.e. the quantitative metric, generally 1-5).  He used the hotel’s aggregate user rating at the time of purchase in his Travelocity study (along with the number of reviews), and ReviewPro’s GRI ™ is the result of an algorithm that rationalizes quantitative metrics across all major OTAs and review sites to come up with an indexed score (see http://www.reviewpro.com/product/global-review-index).  Obviously, numbers are easier to work with than unstructured text, so it makes sense to use them in this context.  This does raise a couple of questions in my mind:

  1. In my research with Breffni, we found that consumers relied much more strongly on the unstructured text reviews than on the quantitative user ratings.  While the ratings were significant, they were much less so, and when the reviews and ratings conflicted, consumers relied on the reviews.
  2. Some research has shown that the quantitative score a reviewer provides is often inflated, and frequently not well correlated with the review text they write (see this CHR report by Rachierla et al. for an interesting analysis of this paradox).

What does this mean for Chris’s study?  Probably nothing major, but given how much press “reviews” get, I thought it was worth pointing out.  When you are looking at influence on markets rather than on individual purchase or booking behavior, it could be argued that user ratings are an indication of the aggregate, historical market perception of the hotel, and therefore a better metric for tracking overall performance (in particular in the third study).  However, if one makes that argument, the second point is of concern, and probably bears further thought and research.  If a high rating is strongly correlated with a positive review (and the positive review contains no details that would “turn off” a prospective purchaser), then Chris’s results hold.  If, as some research has shown, ratings are typically inflated, or are not easily interpreted objectively (i.e. my definition of what makes a four may differ greatly from someone else’s), then it might be important to see the same KPIs in this study compared with review sentiment.  Again, this certainly doesn’t mean the results are not valid; it is just a point that managers should be aware of as they decide how this research applies to their business.
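
As an illustration of that last point, here is a minimal sketch of how one might compare numeric ratings against the sentiment of the accompanying review text.  The tiny word lists and sample reviews are hypothetical, and this is not the methodology used in any of the studies discussed; a production approach would use proper text analytics rather than a crude lexicon.

```python
# Hypothetical illustration: compare numeric ratings with a crude lexicon-based
# sentiment score for the accompanying review text.
positive_words = {"great", "clean", "friendly", "comfortable", "excellent"}
negative_words = {"dirty", "noisy", "rude", "worn", "slow"}

def sentiment_score(text):
    """Very rough sentiment: (#positive - #negative) / #words."""
    words = text.lower().split()
    pos = sum(w in positive_words for w in words)
    neg = sum(w in negative_words for w in words)
    return (pos - neg) / max(len(words), 1)

# Hypothetical (rating, review) pairs -- note the 5-star rating paired with a mixed review
reviews = [
    (5, "great location and friendly staff but rooms were worn and noisy"),
    (4, "clean and comfortable excellent breakfast"),
    (4, "check in was slow and the carpet was dirty"),
    (2, "rude front desk and dirty bathroom"),
]

ratings = [r for r, _ in reviews]
sentiments = [sentiment_score(t) for _, t in reviews]

# Pearson correlation, computed directly to keep the sketch dependency-free
n = len(ratings)
mean_r = sum(ratings) / n
mean_s = sum(sentiments) / n
cov = sum((r - mean_r) * (s - mean_s) for r, s in zip(ratings, sentiments))
denom = (sum((r - mean_r) ** 2 for r in ratings) *
         sum((s - mean_s) ** 2 for s in sentiments)) ** 0.5
print(f"rating vs. sentiment correlation: {cov / denom:.2f}")
```

If ratings and review text lined up perfectly, a check like this would show a correlation near one; when inflated ratings sit on top of mixed reviews, it won’t.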

This brings me to my second point: how hoteliers should apply the results of this study in their environments.  There is no doubt that this study reinforces the point that hoteliers must continually monitor UGC, and use what they learn to maintain and improve customer service.  However, I would argue hoteliers must think before they rush to raise prices based on this research.  The Travelocity study showed the impact of moving from a lower to a higher rating, and the STR performance data showed relationships between UGC and KPIs.  Since the research was based on historical data, this suggests that many hotels with higher UGC already tend to command a premium price in the market.  Hoteliers need to verify that the opportunity is there for them to raise price.  For example, if you are already at that higher rating, your price may already be where it needs to be.  Our research has shown that, all things being equal, consumers prefer to pay a lower price, but they will pay more if one hotel is clearly rated better or has better reviews.  It also showed that hotels with bad UGC would see no benefit from lowering price.  Both studies imply that in order to price effectively, revenue managers (and hotel executives) must understand not only their own price, demand and value proposition, but also those of the competition.
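
To make that concrete, here is a hypothetical sketch of the kind of competitive check a revenue manager might run before acting on a ratings advantage.  The hotel names, ratings and ADRs are invented, and the simple “peer ADR” comparison is only one of many ways to frame the question:

```python
# Hypothetical illustration: before raising price on the strength of a better rating,
# check where the comp set already sits.  All figures below are made up.
my_hotel = {"rating": 4.3, "adr": 145.00}

comp_set = [
    {"name": "Comp A", "rating": 4.4, "adr": 160.00},
    {"name": "Comp B", "rating": 4.1, "adr": 150.00},
    {"name": "Comp C", "rating": 3.8, "adr": 135.00},
]

# Look at competitors rated at or below our hotel: if they already charge more,
# there may be headroom; if we already price above similarly rated hotels,
# the ratings premium may already be captured in our rate.
peers = [c for c in comp_set if c["rating"] <= my_hotel["rating"]]
peer_adr = sum(c["adr"] for c in peers) / len(peers)

print(f"Average ADR of competitors rated at or below us: {peer_adr:.2f}")
if my_hotel["adr"] < peer_adr:
    print("We price below similarly or lower rated competitors; there may be room to raise price.")
else:
    print("We already price at or above similarly rated competitors; the premium may already be in our rate.")
```

Obviously a real analysis would bring in demand forecasts, segmentation and the full picture of the comp set; the point is only that a ratings advantage is the start of the pricing conversation, not the end of it.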

Chris makes the excellent point that better ratings lead to more pricing power.  How hoteliers choose to use that pricing power depends not only on their position in the market versus the competition, but also on their long-term business strategy and goals.  Are there branding, market share or future development considerations?  What about your plans for attracting business that isn’t directly influenced by user generated content (contract, groups, wholesalers)?  How would a price change impact that?  Are there loyalty or marketing implications?  Later this year on the blog, we plan to spend some time talking about how companies can use price as a strategic lever, not just a tactical revenue-maximizing tool.  This study provides support for the importance of including analytics derived from UGC in that strategic discussion.  It should (and will) spark some interesting conversation among hotel departments as the implications for each individual property are debated!

I hope you will tune into our webcast in February to hear more from Chris about this study!

tags: Hospitality Analytics, Social Media
