Phil Nottingham

About Phil Nottingham

Phil is the Marketing Strategist at Wistia, a video hosting and analytics platform which helps businesses manage, customise and measure video content across their websites. Phil is passionate about the use of creative video content to improve the way companies speak to their customers and he regularly speaks around the world about video strategy and technical marketing.

Prior to working at Wistia, Phil was a Senior Consultant at Distilled, where he became a renowned expert on video SEO and video marketing strategy. Phil has consulted for some of the biggest and smallest brands in the world including Red Bull, The Financial Times, Tesco, Simply Business, Travelex and Thoughtworks. Phil trained in theatre practice at the Central School of Speech and Drama in London and worked in film, theatre and broadcast technology before joining the world of marketing in 2010.

In his Learn Inbound talk, Phil will take a deep look into how the best and worst marketing strategies are built, providing tactical tips to ensure your own plans never fall foul of the biggest mistakes.

Key Takeaways

  • Treating all metrics as KPIs over-complicates strategy; instead, focus on one particularly relevant and useful KPI, as that will allow you to accomplish what you set out to do.
  • When we don’t know how to properly read and interpret data, gathering it often causes more problems than it solves.
  • Ensure that the channel is in harmony with the content and target group. For example, targeting young people on LinkedIn with video marketing is obviously a bad choice.

Video Transcription

Thank you. So you're not all in the pub. Good.

Hello, lovely to be here in wonderful Dublin. My name is Phil Nottingham. I actually spent a lot of my time in another fantastic Irish city, Boston, Massachusetts.

And Boston is, you know, it's a lovely place. It's got a lot going for it, but it's also got a few problems. Mostly all the Irish people. The fact that they think baseball counts as a sport, and the fact that Marky Mark and the Funky Bunch haven't released an album in 25 years. But the biggest problem they have in the public sphere is potholes.

So Boston is quite old, by American standards, for a city. As a consequence a lot of the roads are in a slight state of disrepair, and this has become a bit of an endemic issue in Boston, and so a couple of years ago the mayor's office decided enough was enough, and they were going to try and get to grips with this particular problem. So what they did is they got some kids with beards in a room, and they came up with an ingenious idea called "Street Bump", which was basically designed as a smart way of tracking where the most problematic potholes were.

So this was a smartphone app that you'd download and install on your phone, and then as you were driving around, it would track the data and send it back to the mayor's office, where they would get the aggregate data and understand exactly where the potholes were and which ones were causing the most problems, and, you know, generally a really smart way of trying to track what was going on. This won a lot of press coverage, and lots of people talked about it.

It was a really smart way of engaging the private sector with the public sector, and all around, the plaudits were handed out for how they handled this, and how they were then able to prioritize the work they were doing in terms of fixing potholes in a really sensible way. But here comes the kicker. There was a problem. And the problem was these people. Now I'm sure we all know a few people that are like this.

Definitely some in the room. I know many. And the issue was that these people, sort of early adopters, tech-y kind of people, and there are a lot of them in Boston with Harvard and MIT, etc., were the ones who embraced this particular piece of technology. So they downloaded the app and they started going around Boston in their cars, measuring where there were potholes and finding out all this information.

But the reason why this project ultimately failed was that these people have one thing that most of the people in Boston don't have: money. So what happened was that instead of tracking the most problematic potholes overall, they were tracking the most problematic potholes in the rich areas of the city.

So public money designed to fix the major problems for this big American city was going to fix the problems of the 1%, rather than the majority of people. So on the face of it this seems like a really smart idea, but the fact that the data was actually showing them something that was different to what they thought it was showing them caused whole heaps of problems. So there's a great quote from this professor at Harvard University, "This is an example of really bad analytics, and it's even worse because it's the kind of thing that feels like it should work, and does work a little bit."

I'm sure you can see where I'm going with this. So the problem that they thought they were solving for was the most problematic potholes, but in reality they were just solving for the potholes that were near rich people who actually downloaded this and had [inaudible] their cars. So what they should have been measuring really was potholes with the highest density of traffic. If they'd wanted to do this a different way, they might have used Google Maps to get their traffic data, and then gone and scouted out the potholes, or something similar.

I think the real underlying issue here is this: we now live in a time where we are able to get far, far more data. We live in this age of big data; everyone's talking about big data, nobody really knows what it is. I think the problem is that collectively our data literacy has not improved for many decades. I think within this room we're probably reasonably data savvy, but even I don't have a particularly strong knowledge of statistics, and the everyman on the street certainly doesn't, and yet we're using far more data in our work, and certainly in marketing we're using a ton of data to try and improve the return on certain things, and try and measure success in different ways.

I think that this use of data without really understanding what it's telling us is causing three main, huge endemic issues in the way we're approaching marketing strategies. Today I just want to have a little rant, indulge with you in these three issues, and explain how I think they can be solved. The three problems are thus.

Firstly, valuing everything by ROI. Secondly, treating all metrics as KPIs. And thirdly, misaligning goals, channel, and content. So let's start with valuing everything by ROI. So what is ROI? Well, it's ostensibly a very simple metric: it's essentially just the cost of the investment versus what you get back in monetary terms.
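As a rough sketch of that arithmetic, ROI is usually computed as the return net of cost, divided by cost. The figures and function name below are invented for illustration, and how you attribute the revenue is the hard part, as the rest of the talk argues:

```python
def roi(revenue_attributed: float, cost: float) -> float:
    """Return on investment as a ratio: (return - cost) / cost."""
    return (revenue_attributed - cost) / cost

# Illustrative figures only: a campaign that cost $10,000
# and is credited with $25,000 of attributed revenue.
print(roi(25_000, 10_000))  # 1.5, i.e. 150% ROI
```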

So this is obviously a very clean, nice way in aggregate of looking at whether or not what you're doing is effective from a business revenue standpoint. So when you're applying return on investment as a metric and a kind of measure of success, often what you think you're doing is valuing the efficiency and the efficacy of the spend, which on the surface, much like the Boston potholes issue, sounds sensible.

Why not add a monetary value to all the activity you're doing, try and get a figure that kind of represents the value it's providing to the business, add this all up, compare activities, and then you're going to get much better strategic insight into what's going to work for your business. There are many, many CMOs and marketing directors and people who still think this way, and then try and make sure they're optimizing all of their activity for return on investment.

I'm sure many of you have [inaudible] many of you may have tried it yourself. So it kind of makes sense on the surface to try and add a dollar value to everything, and I think the most common, and indeed the most egregious, way of doing this is with last touch attribution. So last touch attribution is basically saying, "Well, the last interaction that somebody has with our business, be that on the website or on a channel, before they eventually convert, is the one that gets the monetary value of that conversion."

So if somebody goes to a Facebook page, and then they click through to my product page, and then they eventually convert and sign up for whatever it is I'm offering, then we're going to give that particular session the monetary value of the equivalent of the conversion. This is very common, this is seemingly what most people still do, even though I'm sure many of you know some of the flaws.

But what happens when you look at a website with last touch attribution is as follows: here's a kind of general frame of a typical website, so, you know, if we're doing a SaaS product, you've probably got a product section, a support section, a blog or a content section, pricing, contact, that sort of thing. What ends up happening is you look at the product pages and you measure them by conversion in terms of last touch.

They look pretty good, because a lot of people will go to the product page, look around, eventually go and convert. Super. Support? Doesn't really help that much. Support pages are kind of mostly used post-sale, so it doesn't seem to make sense. Blog? Terrible. Nobody goes to a blog, reads a blog post, and then converts. So actually by this metric the blog just looks like a waste of time.

Pricing, well, you know, that looks amazing, because tons and tons of people go to the pricing page before they're going to actually convert. Similarly with the contact page. So we know this is problematic, we know this doesn't make sense, because we know the blog is valuable. The issue is far more insidious than this. It's that once you start measuring something in a certain way, everything eventually accidentally starts optimizing around that metric.

So I've got an example here from Nationwide, and I feel okay having a go at them, because I pay them a lot of money. Now, Nationwide, this was a blog post on their website: "Seven Safe Driving Habits You Should Know", bom, bom, bom, bom, and at the bottom, a little call to action saying, "Go and make sure you have the right auto insurance policy to cover you in the event of an accident."

Who goes to a blog post like this, "Seven Safe Driving Habits," and then buys a distress purchase like car insurance? No one. That is not an action anyone would take. But the problem is that the moment you start to even consider measuring a blog post in terms of how it's attributing to conversions, you're very tempted to do this sort of thing, because, you know, if it just worked a little bit, that's going to provide a bit more value. Yet, from an audience perspective, you look at this and know that it feels incongruous.

We know this feels weird. But equally, we've probably all been in situations where we've been tempted to do something as bizarre as this, because of the way we're measuring the success of our content. So that's what you see with last touch attribution, and people who know the issues with that are likely to go for something a bit more like linear attribution, where you value each interaction equally, so if somebody first comes to your blog post from an organic search, and then later comes via Facebook, you credit each touchpoint equally, measuring by the user rather than the session.
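To make the difference concrete, here is a minimal sketch of how last touch and linear attribution would split the credit for a single conversion; the touchpoints and the $100 value are invented for illustration:

```python
# One user's path to a single conversion (touchpoints invented for illustration).
path = ["organic search", "facebook", "blog", "pricing page"]
conversion_value = 100.0

# Last touch attribution: the final interaction gets all of the credit.
last_touch = {channel: 0.0 for channel in path}
last_touch[path[-1]] = conversion_value

# Linear attribution: every touchpoint on the path gets an equal share.
linear = {channel: conversion_value / len(path) for channel in path}

print(last_touch)  # the pricing page gets the full $100; the blog gets $0
print(linear)      # each touchpoint is credited $25
```

Under last touch the blog looks worthless; under linear it earns a quarter of the credit.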

That's okay, that's a step in the right direction. You might do a time decay model, or something else. But actually here you're still ultimately solving for conversions. You think you're solving for the efficacy of spend, but actually you're just solving for conversions. I'll explain why. Let's take an example of a channel or a strategy or an approach that I'm sure many of you have used before, which is PR. So we actually do some PR at Wistia, where I work, and earlier this year we managed to get my boss, Chris Savage, onto BBC news, in the morning, to talk about the business.

So this was, you know, a major TV channel where he was actually talking about the product, what it does, how it works, how he's built the business, on major news in one of our major markets, the UK. Obviously, by any standard you look at this and think, "That's a good return for a PR campaign." That is the kind of result that you would expect and hope to have won. This is what our analytics looked like that night.

So we had a little bump here, when Chris was on the TV, and then, you know, later in the day, this is the usual bump that we typically get around 10AM, when people are waking up, and that tells you our most active time of the day. So it got a little bit of notice there, but ultimately it didn't do much. Actually, if we break it down, even if we take a first touch attribution, or a multi-touch attribution of any sort, we still end up with something that looks like this.

Five hundred visits overall, zero conversions that we can track that were related to this experience, and all in all it probably cost us about $10,000 by the time you add up sending him to London, being on the TV, the PR agency fees, etc. So we look at this, even from a multi-touch attribution perspective, and we go, "PR sucks. This is stupid. This doesn't provide any value." Yet we know that's kind of silly, because getting somebody on major TV we know is going to be of value, because people have tried to do this for decades.

We know that this kind of makes sense, so there's a real mismatch between the smartest ways we're trying to measure value, and the activity which we think in our gut sense works. Then we have to report to our boss that we managed to get us on the BBC but it did nothing for us? This doesn't feel right. So what we might then try to do is say, "Okay, well let's think about how brands used to be measured back in the day."

Maybe PR is more of a brand building activity than a conversion activity, which I think is probably true enough. So we can look at TV. Now the way people used to measure TV was in terms of reach, so the impressions that you thought you could get based on your placements. Then the brand salience, so you do brand recall surveys and you see exactly how much lift you'd get over time in terms of recognition of your brand.

This was okay back in the day; this kind of worked as a rough proxy for determining the value of TV advertising, because there wasn't much else going on. You'd normally have TV, maybe some offline, maybe some magazines, etc., but ultimately any advertising campaign would be able to tell you whether or not you'd increased that salience, maybe increased footfall in shops if you're a retail brand.

But this kind of thing doesn't work anymore, because there's too much noise. How can you possibly say that because salience increased, it was because of this PR campaign? You can't. We can't track that. We can't say that it was because of TV, we can't say it was because of digital. We can't say it was because of any one thing. We can say it was because of everything in aggregate, but breaking it down becomes almost impossible.

So the really problematic thing here, and I've seen this several times, is that then somebody will go, "Well, we really want to be able to measure PR, so why don't we just stop doing all other brand building activity for a while, just do PR, see what that does in terms of salience, and then we'll know roughly the value of PR." But that's insane. We all know that PR doesn't work in a silo, neither does any form of brand building activity.

You need to be doing TV if you're doing PR, if you're doing digital; it all works together to build the brand. Yet, you know, we can go down these bizarre routes just to try and measure things effectively. The thing we often fail to recognize, and I think we need to, is just to accept that ROI is a bad way to measure PR, and many other brand building activities. The analogy here which I think works is thinking of it like a football team.

What's the ROI on a football team? Goals and matches that you've won, right? But you don't measure every player based on the goals that they've scored, or indeed how they've assisted with the goals. You wouldn't measure a defender or goalkeeper on their assists, in terms of a goal. You'd probably measure a goalkeeper based on saves, maybe a defender based on how many times they do a tackle. Maybe a midfielder on how many times they pass something into the box.

These are metrics which aren't really tied to the conversion of the goal itself, and yet in marketing we seem to be obsessed with trying to get some variation of saying how much it tied into the goal, and this is causing huge amounts of problems. Anyone work in SEO? Yeah, a few of you. Ever had this? Yeah, a few of you I'm sure. Or, "How much traffic did this link bring us?"

It doesn't mean the link's worthless, it just means that you can't track it in this way at that granularity. So when you start to think about ROI, you might think you're solving for the efficacy of the spend, but you're actually just solving, in some sense, for conversions. That's fine for activity which is designed to optimize for conversions, but not for everything else. What you should be [inaudible] instead is just a sensible proxy for value.

Now, what I mean by this is essentially picking a metric, knowing that it has flaws, understanding those flaws, knowing that there's something which maybe isn't quite correct about it, but picking the best thing that you can that will accurately represent whatever value you think you're trying to get. For PR, for example, we know that what we're trying to get is brand awareness, salience, coverage, interest from parties who may one day become customers.

So what we're actually going to measure, really, is something that gives us a flavor for the value of the coverage we're getting. Now there are many, many ways to do this, and I think, you know, [inaudible] Verve have got a really smart way of measuring coverage, and Laura was telling me this morning about some interesting stuff that her agency is doing as well, so hats off to them. A really simple way for anyone else: take the domain authority of every bit of coverage you get, square it to reflect the fact that domain authority is logarithmic, so a DA 70 is worth exponentially more than a DA 69, and then add them all together.

Now that metric in and of itself doesn't mean very much, it's just a number, but over time if you track that, that's going to give you an indication of how things are working, and you can use that as a sensible proxy to measure the value of PR. And then for efficiency, you can just look at that over cost. So that becomes your proxy for "ROI" but it's actually tying it to a value which is relevant to PR, rather than something just relevant to conversion. So that's all lovely.
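A minimal sketch of that coverage proxy, with invented domain authority figures and cost:

```python
# Domain authority of each piece of coverage won (numbers invented for illustration).
coverage_das = [70, 45, 52, 31]
campaign_cost = 10_000.0

# Square each DA to reflect the logarithmic scale (a DA 70 is worth
# far more than a DA 69), then sum across all the coverage.
coverage_score = sum(da ** 2 for da in coverage_das)

# The score means little in isolation; track it over time, and divide
# by cost for the efficiency figure that stands in for "ROI" on PR.
print(coverage_score)                  # 10590
print(coverage_score / campaign_cost)  # 1.059 points of coverage per dollar spent
```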

I think a lot of people have kind of got to the point now where they can understand that. But then comes our second problem, which is realizing that ROI isn't particularly good, knowing that there are other metrics which might be useful, and then treating all metrics as KPIs. This is what I call the "All the Things" strategy. It basically looks like this. You take everything that you can measure, so you go, right, everything needs to go up and to the right: need to lower bounce rate.

Need to get more traffic. We need to increase conversions. We need to get more PR, more links, more this, more that, more leads, more conversions, better metrics, and think about everything in terms of just upping the game and increasing everything. Increasing everything on every channel. What you think you're doing in these instances is optimizing the breadth of value, so you know, for example, that if you're doing link building, getting a link provides SEO value, but it also provides referring traffic.

It also provides PR value, brand value, yadda yadda, so you try and get all these metrics. We're trying to optimize all of these things. But it doesn't really work that way. I'll give you an example. So at Wistia, about a year ago, we were looking at what we could do on YouTube. And the team had read a guide, they looked at all the stuff that YouTube might be valuable for, and they came up with a list, which was quite smart, saying, "Here's all the things that we know we can track on YouTube, and that we should be caring about."

Which was views, retention, shares, subscribes, click-throughs, and conversions. All things which you can reasonably consider that YouTube might be helpful for, and that you can track on YouTube. Then they came to me and said, "Right, well we're going to have a strategy that makes sure we can tick all of these boxes." So, for views, we're going to do some TrueView advertising on YouTube. We're going to spend some money to make sure we're getting a lot of people watching it.

We're then going to do a short video, make sure we've shortened it to keep retention high; shorter videos tend to get people staying and watching all the way through, versus long ones. Then, yeah, we want to make sure that we're getting shares, so we'll include a little call to action on the video, an annotation, to get people to share it. We want subscriptions, so we'll make sure we include a subscription annotation. We want to get click-throughs to our website, because we want the traffic, and we want to make sure that we're getting conversions.

So we're going to track the click-through conversions as well. So, all very good. So they decided this was what they were going to do, and this was the video that they came up with. ♪ [music] ♪ Basically, what happens is you can see it's got a CTA here. There's another one that should appear down here.

There's one up here, and the video itself is boring, uninteresting, of no value to anyone. It's short, but the problem is they haven't actually solved the creative problem of why this would be interesting. So it was a low quality piece of content, demanding everything from its viewers, and nobody cares, obviously, because this was a terrible piece of content. But the problem is they started thinking with optimization in mind: we're going to optimize for all of these things that we can do on YouTube, in a single piece of content.

So they thought they were optimizing for breadth of value; actually, what they were solving for was nothing, because there was no focus. You'll find this anytime you're working with someone, or a company, or whoever it's going to be, who are really trying to do lots and lots of things with one particular campaign; if they can't tell you what the focus or goal of a particular campaign is, that's often because they're going to turn around and demand everything from that campaign, and that's immensely problematic.

What you should be choosing instead is one specific, qualified variable that you've decided upon. Now I'm hoping this video has been cached, so this can actually work. I'm going to show you another video that is what we then subsequently did on YouTube after we realized that wasn't going to work. So we decided, okay, well we're going to take a specific approach, and we're going to try doing some advertising.

What we're going to do is just target people who we think are going to potentially convert. So the metric, the variable that we're going to look at, is assisted conversions, and we're going to just target people who've been to the website, or engaged with some other channel, using re-marketing, and we're going to send them this ad. Then we're going to measure value based on assists, and here's what we said. - [Woman] When you don't control your viewer's experience, people like me can do whatever they want before your video.

There is a better way.

- So, simple enough? But, you know, it just sells the point in a really simple way, and the metric we looked at for success here was view-through conversions, which means somebody watched the video, and then at a later date went through to the website and converted. Easy, nice and simple, so the effectiveness became view-through conversions, and the efficiency is view-through conversions over the cost, so cost per view-through conversion becomes the metric we try to optimize when we're tweaking things in AdWords, etc.
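The sums here are trivial, but worth spelling out; the spend and conversion counts below are invented for illustration:

```python
# View-through conversions: people who watched the ad and then, at a
# later date, visited the site and converted (figures invented).
ad_spend = 5_000.0
view_through_conversions = 400

# Effectiveness is the raw count; efficiency is the cost per conversion,
# the single number to optimize when tweaking the campaign.
cost_per_vtc = ad_spend / view_through_conversions
print(f"${cost_per_vtc:.2f} per view-through conversion")  # $12.50
```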

So it's very, very simple, but I don't care about retention. I don't care about views. I don't care about shares. The only thing you care about on this campaign is the one metric, and that makes a big, big difference. Which is all well and good, but we now get to the third kind of issue which I'm seeing off the back [inaudible] measurement, and this is where you end up misaligning goals, channels, and content. So I'm going to talk about a little example from something that happened to me a couple of months ago. As well as working for Wistia, I do a bit of consulting with firms in London, and I was working with a big financial company, an insurance broker, who wanted to try and get more customers from millennial audiences.

So they came to me with this strategy and said, "We've got a budget of a million pounds." A big budget; I can do a lot with that. "We want you to target millennials on LinkedIn. We want to do explainer videos, and we want to get signups. Basically, we realize that millennials don't have enough financial investments; we think explainer videos on different kinds of investments would be a good way to do this."

"We're not doing enough on LinkedIn, so we think that might be a good place to distribute this, and we obviously want to get signups." Now, if you come to me with a million pounds, I can do a lot for you, but I can't do this. And the problem was that in thinking about all of these elements they thought they were solving for an integrated strategy. They thought, well, we've considered where our weaknesses are in different channels, we've considered who we need to be targeting, we've considered the kind of content that we think we're missing and would be valuable.

But what they hadn't done is align them all. So there was nothing that considered where the channel, the content, and the audience overlapped. The three things were considered separately, and so you look at this and you realize that it's never going to work, because LinkedIn is not a great channel to distribute videos, and videos are not necessarily a great way to help people understand financial products.

And millennials certainly don't go to LinkedIn to find financial advice, so this entire strategy was very problematic from our perspective. So what they were actually solving for was discrete, unconnected, and fragmented problems. They spent a million pounds on this campaign, and got no return. A million pounds wasted, because they hadn't really considered the core problem. What should have happened instead is creating content on channels appropriate for the goals and the audience.

So what I mean by this is essentially considering them all in a hierarchy rather than just disparately. So I think basically this is the way it works. You have to start by thinking about the goal. You can then move on to thinking about the audience, who you're trying to target; then the channel, where it's going to be; then the content, how you're going to speak to people.

Then the distribution, the metrics, and the KPI. So with the goal, the question is: what are you trying to achieve? With the audience: who are you trying to reach? Then: where can you actually reach these people? How can you meaningfully engage with people on this channel? How can you get your content to the right people? What can you actually measure? And then: what is a good proxy to measure the goal overall?
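One way to picture that hierarchy is as a checklist filled in from the top down. This sketch is only an illustration of the idea (the field names are my own shorthand, not a framework from the talk); it simply refuses a plan that skips a level:

```python
from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class CampaignPlan:
    # The hierarchy from the talk, in order; every level should be
    # answered before the one below it makes sense.
    goal: Optional[str] = None          # what are you trying to achieve?
    audience: Optional[str] = None      # who are you trying to reach?
    channel: Optional[str] = None       # where can you actually reach them?
    content: Optional[str] = None       # how will you speak to them there?
    distribution: Optional[str] = None  # how does it get to the right people?
    kpi: Optional[str] = None           # what proxy measures the goal?

    def check(self) -> None:
        for f in fields(self):
            if getattr(self, f.name) is None:
                raise ValueError(f"decide '{f.name}' before moving down the hierarchy")

# A plan where every level lines up with the one above it,
# using the Facebook campaign described later in the talk.
plan = CampaignPlan(goal="brand awareness",
                    audience="social media marketers",
                    channel="Facebook",
                    content="weekly dog-contest videos",
                    distribution="promoted posts plus word of mouth",
                    kpi="page likes")
plan.check()  # passes: no level has been skipped
```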

Now, I say you have to do this in order, but you can start at any point in this process. So maybe you have a really, really great content idea. Maybe you're going, "Okay, well we've got this really cool idea for content," but then what you have to do is work backwards and say, "Well, is this content appropriate for this kind of channel? Are our target audiences visible on that channel? And if so, what kind of goal would this content actually serve?"

And if you can't match these things together, if you can't reverse engineer your idea so that you can start with a goal, then audience, channel, content, distribution, metrics, KPIs, you're going to find a problem. So I'm going to show you an example of a recent campaign that we've run at Wistia to try and make this work. We recently started actually with the channel, and we decided, okay, Facebook.

We want to do something on Facebook. Facebook is now a big video platform, Wistia is also a video platform, and we want to see if we can market on Facebook, convincing people to use Facebook in an exciting way and use Wistia in an exciting way. So we started with the channel and said, "Let's try and do Facebook." Well, who is the audience on Facebook? Well, for us, probably people who do social media. And what kind of goal?

Well, let's think about a broader sort of goal of brand awareness, and so we made this video in an attempt to do that; I'm going to explain a little bit more about this in a sec. - [Announcer] We're excited to announce our brand new Facebook video competition: What Will Lenny Choose? Each week, we'll release a new video where our office dog, Lenny, has to make a tough decision between an awesome piece of video gear, and a small piece of food.

Enter the contest by liking our page. One lucky winner will be chosen at random, and will receive whatever Lenny picks in the mail. Please note that Lenny is a dog, and he really loves carrots. So go ahead and like our page for a chance to win.

- All right, so that's the idea. So how did we actually come up with this? Well, as I mentioned before, we decided that we were going to do Facebook. We then thought, well, who can we target? Well, how about people who are trying to do video on social media? That sounds pretty reasonable. So we can actually use Facebook to...

Facebook promoted spend to target people who have an interest in video. We thought, well, let's think about brand awareness as a very broad goal here. We're not going to worry too much about that; let's just be clear that this is all going to be about brand. Then we decided, well, let's try a competition, because we did some research and saw what had been most successful on Facebook previously, using BuzzSumo to work out what worked there.

We decided on a competition, and then thought, well how can we distribute this? Well, we didn't really know, okay, so we just had this content idea at this point, and we thought well, it would be quite fun to do a competition. We don't have much money. Why don't we just basically force people to end up winning something terrible, and this is how the idea was born of getting the dog to choose between a carrot and a piece of video gear that cost a thousand dollars.

So we basically thought, well, let's just try it. Let's just do one and see what happens, so test and iterate. So we did one, and it did okay. We got about 2,000 organic views on it, and just under fifty likes. A bunch of people seemed to react positively to it. So we thought, well, this test indicates there's some chance this might distribute, so what we'll do is put a little bit of spend, a few hundred dollars, behind some promoted posts, so we can target people who we know are interested in video, and then we'll rely on word of mouth as well.

This worked pretty well for us, but then we thought, well, let's see if we can take this up a notch. And, surprise, the dog always goes for the carrot, so we decided to basically send out carrots to people in the mail, fake ones from Etsy. We decided to target Angie Schottmuller, an influencer from the CRO space, to see if she would like it, because she likes dogs.

So what we ended up doing was finding another way to increase distribution through a bit of light-touch influencer marketing, just by picking the winners from the people we thought were most likely to share stuff and have a good audience. As for the metrics we could use to measure success here, we knew we could track views; audio plays, meaning people who actually clicked to play the video with sound, since most people watch Facebook with silent autoplay; retention; likes; comments; and shares.

We can track all of this, but as I mentioned earlier, what we're not going to do is think about all of these as KPIs. We're just going to pick one, and we chose page likes, because page likes are actually quite a good proxy for overall brand awareness, in the sense that if somebody's engaging with the competition, they're likely to like the page, because that's the call to action. Plus also, if they like what we're doing enough to want to hear from us again, then they're likely to like the page, and come into that social CRM infrastructure.

So the KPI for the Facebook competition became page likes as the effectiveness, and the efficiency is then page likes over cost. We managed to get it to, I think, about 50 cents or so per page like, and over the last couple of months we've increased our page likes by about 30% overall, just through this little campaign.

So it becomes very focused and very targeted, and because we are very focused and we've considered the goal from the start, we're able to come up with a much more exciting, fun, interesting campaign. I honestly think the difference between a great, exciting, interesting campaign, and often something very dull, kind of like that Nationwide example that I showed you, is not smart people. There's many, many smart people in all of these organizations.

It's the fact that if you are forcing people to measure their success in a certain arbitrary, bizarre way, then you're not allowing the great strategies and the great creative to come to light. You have to set up things for success by measuring the right things, and I think this is where all great strategies need to start. So, takeaways, I just think bad data is often worse than having no data at all.

Like with the Boston potholes example, if they'd just gone by gut and said, well, this is on a main road, let's just fix that pothole, that probably would have been better than using this big data approach. Because big data that's used incorrectly can lie, and it can force us down bizarre routes that don't actually help us. So make sure that if you're using data in an interesting way, you really understand what it's telling you, and are not assuming something that turns out to be incorrect.

Measure everything throughout your campaigns, but make sure you're only optimizing for a single KPI. Be really focused on what that is; it has to be a smart proxy, and it's not always going to be completely comprehensive, but pick something that seems sensible and go with it. Only use ROI to measure direct response and conversion-focused activities, and then ensure alignment of your content with your channel and the audience, and make sure that those three things always work together and aren't considered disparately.

When you've got there, tie everything back to the goal to make sure it's actually going to work, and hopefully if you take this approach, you're going to have much more successful and exciting marketing campaigns. So, thank you very much.
