Local SEO: Reaching a Local Search Audience

Does Local SEO need to be a geographical dilemma?

Local search is rapidly growing in importance for businesses of all kinds. Simply include a place name in your search query on Google and it becomes hard to reach any e-commerce site without triggering a Google Maps snippet or a Google My Business listing. However, there’s always been a conflict within local search optimisation between good editorial copy and geographically focused SEO. This stems from the way users prefer to search for, say, ‘Patisserie Bakery Cardiff’ rather than using more natural language. There’s no good grammatical way to include that in any sentence, let alone your headline and opening paragraph. But does it even matter anymore?

Emergence of natural language

Search engines are certainly evolving. Advances in speech recognition and machine learning have brought a new focus on voice search. Queries are slowly becoming much more conversational as a result, but how do you write good content for your local website “conversationally”?

  • Stop query matching in copy

First ask yourself, do you even need the exact phrase to appear on the page? Google is getting better at recognising the pattern of a sentence. A search for ‘Patisserie Bakery Cardiff’ returns plenty of results that don’t feature that exact search query in their text. In fact, as Dr Pete pointed out in his SearchLove 2016 talk, “Tactical keyword research in a RankBrain world”, many SERPs are now built around concepts rather than exact-match phrases: over 57% of SERP results did not include the keyword variant being queried.

That’s great news for marketers as it allows us to focus on producing copy that people will actually give a crap about. It’s effective to weave in prose that legitimately ties your product to your place.

  • Local reference points

Use region-specific research or local statistics, for example, and mention place names and landmarks prominently; this gives search engines more insight into the concept you are trying to convey.

  • Structured data

Go one step further and add relevant schema.org markup, for example as JSON-LD, to your page to give search engines further hints about the locality of your business.
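By way of illustration only, here is a minimal sketch of what that could look like for a hypothetical Cardiff bakery (the type, names, URLs and contact details below are all placeholders rather than a prescribed implementation):

<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Bakery",
  "name": "Example Patisserie",
  "url": "http://www.example.com/",
  "telephone": "+44 29 2000 0000",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "1 Example Street",
    "addressLocality": "Cardiff",
    "postalCode": "CF10 1AA",
    "addressCountry": "GB"
  }
}
</script>

Types like Bakery inherit from LocalBusiness, so the address and telephone properties give crawlers an explicit, machine-readable signal of where the business actually operates.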

This is a whistle-stop tour of a few points on getting some quality into your content. I’m certainly in no position to lecture anyone on how to write the world’s greatest editorial, but this article doesn’t intend to be anything more than a warning and a challenge to old school SEO thinking.

Many “best practice gurus” take the view that you should simply target whatever phrases people are searching for, regardless of how grammatically correct they are (or even whether they’re spelt correctly). Please, if you take nothing else from this post, remember this:

Respect your readers

Google’s major algorithm updates continually stress the need for well written web copy that’s aimed at a human audience, not simply there to attract the search crawler’s interest. Treat your website, its content and your readers with respect, and remember why your content is there in the first place. It’s not for the bots…

 

[This post is a work in progress – I’ve previously written about my iterative approach to content if you like that sort of thing]

Forget Demographics. Habits Rule.

Changing consumers

In an age where competitor benchmarking and industry analysis is commonplace, marketers often lose sight of what really matters. The customer.

In an interview with First Round Review, Dropbox’s Head of Design, Soleio Cuervo, said: “Remember that you’re not competing against other services. You’re competing against people’s habits.”

He’s 100% right.

Customers have become brand agnostic. Today, consumers have their regular purchases that they make routinely, applications that aid them and brands that help them define who they are and what they stand for. A study by Deloitte found that consumers buy online primarily out of ease and convenience, the lack of fixed opening hours, and the fact that they do not need to carry the products home themselves – and this captures the crux of the modern consumer.

Brands continue to play an important role in the purchase process, but blind commitment to individual brands has decreased. Modern consumers switch brands, experiment, and are open to inspiration. In future, the number of newly emerging niche products and brands will continue to increase. Only brands that focus on the customer relationship, not just the products, can expect loyal customers. All others will experience much tougher competition. For retailers, digital technologies offer the opportunity to extend their range considerably and to test the sale of new brands online.

Smartphones – Assistance and distraction.

Smartphones not only act as shopping assistants, but also as a serious source of distraction. At present, mobile phones tend to divert attention from buying rather than support the purchase process. Even so, some consumers already use their mobile phone to make better purchase decisions or buy cheaper elsewhere. Fewer than 10% of respondents currently use their smartphones in stores in most retail sectors, although these figures are much higher for consumer electronics and home improvement products. With the increasing penetration of smartphones, this is set to change. High street retailers will gain a wealth of opportunities to sell, and to give information, navigation and help.

Social commerce: Friend to salesperson.

Brands are now also beginning to weave social media into the sales process. Facebook, Twitter and the like are fast becoming not only communication tools, but also sales channels. The idea of social media becoming the “eCRM on acid” that will change the digital landscape seems to be edging closer; turning customers’ friends and acquaintances into affiliates meets the modern consumer’s growing need for personal guidance and orientation. Retailers gain quality and flexibility in return: they can personalise the shopping experience, make more relevant offers and require fewer staff. Initial approaches are pointing toward this development. Even today, users who tell their friends what they are buying become eligible for discounts. Twitter is being used actively for sales help and product announcements. Retailers use Facebook to identify customers at the start of the buying process, present personalised offers and deliver recommendations from their friends. The first online stores are opening within Facebook. And the key to the success of new players such as Groupon is that many consumers shop together.

From advertising space to involvement space.

The era in which physical retail outlets and e-commerce were separate and competing spheres is drawing to a close. In the future, consumers will buy in numerous physical locations and via a variety of physical media. The triumphant progress of touchscreen computers and smartphones is making this possible. Consumers will be able to browse and make purchases at display windows and outdoor advertising spaces. They can also use touchscreens to order products in stores or view more information. Tablet computers let sellers present products more effectively and sell them directly. A variety of new digital services will support this buying process via sensors. Orders are then delivered or are ready for collection in the store.

Static business models

While consumers are changing their habits at an unprecedented rate to better fit their connected and “busy” lives, most businesses are failing to adapt alongside them. Sure, there are a few that are looking to make their operations more digital, but it is the companies that drive a wedge into people’s habits that are truly out there disrupting things. Take a look at Uber, for example:

“You used to walk out to the street to flag a taxi down. Now with Uber you can book a taxi home from work using your smartphone.”

Successful tech companies are built on the new habits they helped form around the utility they provide. Google, eBay and Netflix are all examples of great habit-changing business models that have replaced their analogue counterparts.

As you think about your product or service, always be thinking about what people currently do and how you can create a new set of habits around your business. After all, that’s what retention really is: helping people build a routine around your product offering.

 

[This post is a work in progress – I’ve previously written about my iterative approach to content if you like that sort of thing]

How Does Googlebot Handle Cookies?

With cookie-based personalisation CMSs such as Sitecore becoming more popular on the web, it’s important to understand how they work and how they could affect a search engine crawler.

How Googlebot handles cookies

I set up a test to determine whether Googlebot was in fact tracking cookies. It’s a pretty simple script to follow: if there’s a cookie set, a message is echoed out; if there’s no cookie set, a new cookie is created. Easy, right?
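The original script isn’t reproduced here, but a minimal sketch of the idea (assuming PHP and an illustrative cookie name) might look something like this:

<?php
// Illustrative cookie name; any name would do for the test.
$cookieName = 'googlebot_test';

if (isset($_COOKIE[$cookieName])) {
    // The cookie came back with this request, so the crawler stored it on a previous fetch.
    echo 'Cookie found, originally set at: ' . htmlspecialchars($_COOKIE[$cookieName]);
} else {
    // No cookie arrived with this request; set one for any subsequent visit.
    setcookie($cookieName, date('c'), time() + 86400);
    echo 'No cookie found - setting one now.';
}

If Googlebot kept cookies between fetches, the second and third fetches would show the “Cookie found” message.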

 

Next, I opened up Google Search Console and ran a few “Fetch as Googlebot” requests. Here are my results:

Test 1: (Initial call should set the cookie)

Test 2: (We’d expect that the cookie was set the last time)

Test 3: (Nothing on the last attempt. This for sure should have the cookie.)

So, according to my test results, Googlebot does not currently store or use cookies. This is consistent with how crawlers tend to work: each fetch is effectively stateless.

Nothing we didn’t already assume, but it is something to keep in mind if you are using cookies on your site to provide a personalised user experience. This is the current behaviour, though it may change.

Because the search engine crawler cannot accept the personalisation cookie, it will fall back to the cookieless browsing experience. It is therefore vital that you optimise the default templates to give the crawler the best possible experience for SEO value.

11 Books to Boost Your Leadership and Innovation

We live in an increasingly connected world of shifting environments, agility, and innovation. As customers, workers, and partners continue to relate to organisations in new ways, it becomes especially important to react to this age of increased connectivity by understanding how people interact and adopt a specific focus to keep up with the changing face of business. However, it can be difficult to intuitively know how to realign your business practices in a modern, interconnected world.

In my opinion, there is no better way to help you in re-orienting your digital leadership in the best possible direction than by learning from the success and mistakes of others. It’s also important to note that you needn’t manage staff to develop as a leader; in fact it can be even more beneficial to extend your leadership knowledge before you pick up undesirable habits.

The collection below features some of my favourite leadership books to boost your leadership and innovation:

  1. Born Digital: Understanding the First Generation of Digital Natives
  2. It’s Complicated: The Social Lives of Networked Teens
  3. Blended: Using Disruptive Innovation to Improve Schools
  4. Blueprint for Tomorrow: Redesigning Schools for Student-Centered Learning
  5. Digital Leadership: Changing Paradigms for Changing Times
  6. Public Parts: How Sharing in the Digital Age Improves the Way We Work and Live
  7. The Big Moo: Stop Trying to Be Perfect and Start Being Remarkable
  8. The Advantage: Why Organizational Health Trumps Everything Else In Business
  9. Switch: How to Change Things When Change Is Hard
  10. Yes!: 50 Scientifically Proven Ways to Be Persuasive

These texts don’t have all the answers but they start a journey of discovery that you can use to deliver better results in various business environments.

What are your favourite books on leadership and innovation? Feel free to share them in the comments below.

Networking as an Introvert needn’t be scary

These days, what matters is not who you know but who knows you. That’s why networking has become such a talking point within small business circles. Whatever point you are at in your career, whether you are a business owner starting a new venture or a professional looking to expand your circle of peers, networking plays a crucial part in the success of any business and can help grow your profile, encouraging you to connect with people who may become integral to your business or career plans in the near future.

For many the benefits of networking are vast and I truly believe it’s worth doing. The problem is that, for those of us with introverted tendencies, networking is hell.

Networking as an introvert

I think Andrea Ayres put it brilliantly:

I’m an introvert and people scare the hell out of me.

Don’t get me wrong, I’m not one of those super shy types, but I’m not one to initiate a conversation if I don’t have to. In fact, very few people are fully introverts or extroverts; most of us share some characteristics of both. That doesn’t make social situations any less painful for those of us who lean towards the quieter side of that spectrum.

Networking events are usually big, busy affairs with hundreds of outgoing people talking and exchanging business cards. A scene I used to both admire and detest. I’m not good at these events. They, again as Andrea put it, “scare the hell out of me”. They are draining, nerve-wracking and, inevitably for me, largely ineffective.

An introvert’s alternative to “networking”

Disillusioned by networking events, I stumbled on a sweet spot. Back in April 2014, I was honoured to be asked to speak at BrightonSEO, a big conference with around 2,000 attendees. I was terrified, as I’d never spoken at anything more than a school play before, but it was too good an offer to turn down. Leaving the stage after the talk, I felt relieved that it had all gone well. I sat through the remaining talks in my session, planning how to escape the crowds and find a quiet coffee during the break, when something miraculous happened. People began to talk to me.

One of the most difficult things for a nervous introvert at a networking event is to start a conversation with a stranger. I mean, what do you say? How do you keep a conversation going with a stranger? What if they don’t want to talk to me? What if they think my opinions are stupid? These have all been real thoughts in my head at one point or another. But here I was, networking with no effort expended. The conversation flowed perfectly as people asked me to expand on the points I had made and spoke about their own experiences with entity search. I ended up having five great conversations with some really nice people I’d never have dared to approach otherwise.

A few weeks later, I was speaking again. A friend of mine was looking for speakers for the Digital Marketing Show at the ExCeL in London, and I jumped at the chance. Hoping to recreate the networking success I had achieved in Brighton, I came off stage to be greeted by a small line of people who wanted to chat with me. That’s when I knew there was something to this speaking malarkey…

After two events I had spoken to more people than I had at the many events I’d attended over the years. I’ve found no more efficient way of networking as an introvert than giving a talk. I could get my point across, leave my Twitter handle up for other introverts to chat with online, have a few conversations with people who sought me out, then head home to recharge. Bliss.

I’ve since found a wonderful book on this subject called “Presentation Skills for Introverts”, where the author, Rob Dix, gives further advice on dealing with introversion in networking situations. I highly recommend it.

Do you have any advice or tips for introverts looking to network more successfully? Give us a hand and leave a comment below!

6 Time Saving Hacks for SEOs

The life of a digital marketer is one of constant multi-tasking. One minute you are brainstorming ideas for interesting content, the next you are auditing a website for technical SEO issues. The name of the game is productivity, and the way to achieve it is through better organising your time: waste less on activities that don’t add value and spend more time delivering for your clients and employer.

That’s why I got in touch with some really smart digital folk and asked them to share the top hacks that help them get more done, so you can save valuable time and become a more productive and successful SEO. Without further ado, let’s get started:

Simon Penson, Managing Director at Zazzle Media

“I spend a lot of time looking at competitor and market data and one of the toughest jobs is to understand how much overlap there is, especially when you are really trying to nail down a very relevant competitor for data dive purposes.

One way of doing this quickly without having to guess or spend hours looking at mountains of Excel data is to utilise a vastly underused little tool hidden within the SEMRush suite. In the ‘Tools’ tab you can find a Domain V Domain option and in here it is possible to paste in your competitor short-list. From here you can see how many keywords they share and it is then very simple to refine the list to a ‘best match’ scenario. From here you can then dive much deeper.”

Kevin Gibbons, Managing Director at Blueglass UK

“My tip is to learn when to switch off – I’m suggesting this because it’s the one I’ve found most difficult personally, but knowing when to stop is essential towards keeping a clear way of thinking.

Focused effort is so much more valuable – find the environment that allows you to block out any distractions and get the best results. One thing that helped me was to remove all social apps from my phone last summer and I haven’t missed them since – and even more surprisingly, I did the same with email 6 weeks ago and the world has kept turning too :)”

Gareth James, Freelance SEO at SEO Doctor

“Lots of automation tools are great for helping digital marketers save time, but my best time hack has been to actually work more efficiently. I started using the Pomodoro Technique last year and found it worked really well for me.

You basically work for 25 minutes then have a break for 5 minutes completing tasks in each time slot. Sounds simple, but it actually trains you to get tasks done faster and avoid other distractions like social media or watching Jeremy Kyle if you’re a freelancer.”

Kirsty Hulse, Head of SEO at Found

“Re-purpose old content. Often it’s easier, quicker and cheaper to inject new life in to successful old content than creating something new from scratch.

Got an infographic from a few months ago that worked well? Use a tool like Powtoon to turn the infographic into a short video animation; or take similar pieces of content, give them a refresh and group them to create a “guide”. If you work in a fast-paced industry, take snippets from old blog content and discuss new perspectives and how things may have changed.”

Steve Morgan, Freelance SEO at Morgan Online Marketing

“As part of my freelance work, I’ve done a bit of link removal/disavow work for clients who have been affected by Penguin and/or have acquired a Manual Action penalty. The fiddliest part of the work used to be grabbing the inbound link data from multiple sources (Google Webmaster Tools, Majestic, Open Site Explorer, etc.) and then removing the duplicates while keeping hold of the most information (as – for example – the GWT data only gives you the linking URL, but other tools give you more data, such as anchor text, Domain Authority, the page being linked to, etc.).

I found out about URL Profiler from someone and gave it a try. If you put all the data files into it, it automatically strips out the duplicates and gives you the data that you want for each and every URL. It saves so, so much time. My current licence ran out, but the next time I’m doing this type of work for someone, I’m renewing it straight away.

I also like to use the CONCATENATE formula in Excel to speed up with the disavow file creation process, which combines bits of data together from multiple cells into one cell. Fill Column A with “domain:”, put the actual domains in Column B (e.g. “example.com”), use the CONCATENATE formula in Column C – grabbing Columns A & B’s data – and it’ll combine them to make “domain:example.com” in every instance. URL Profiler can even give you just the domain for every link, making Column B really easy to put together, too.”
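To illustrate the layout Steve describes (the domains below are placeholders), the spreadsheet would look something like this:

Column A    Column B            Column C (formula)      Result
domain:     example.com         =CONCATENATE(A2,B2)     domain:example.com
domain:     spammy-links.net    =CONCATENATE(A3,B3)     domain:spammy-links.net

Each row of Column C can then be pasted straight into the disavow file.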

My own time-saving top tip

Reporting is the bane of digital marketers’ lives, regardless of their in-house/freelance/agency status. The measurable nature of the platforms we work on means that we are required to constantly report on a number of metrics. My top time-saving tip is to spend a little time gathering reporting requirements and use those to automate as much as possible. Tools like AWR Cloud allow you to automate regular ranking reports, social media follower growth and (my personal favourite) visibility tracking for a large number of keywords.

Bonus tip for in-house SEOs – Have multiple product lines? Use automated Visibility Reporting through AWR to offer product / section level performance tracking to add value to the business on a more useful level.

So there we have it. A few time-saving ideas that have hopefully got you thinking about ways to minimise wasted time within your working day. If you have any time-saving hacks you’d like to share, pop them in the comments. I’ll be updating the post to include the best ones.

How to get Social Profiles on the Knowledge Graph

As you know, I’m a big fan of talking about semantic search, so the news that Google has recently opened up the Knowledge Graph to include social profiles for brands meant I had to investigate. In this post we look at how you can use JSON-LD markup to add your social profile information to the Google Knowledge panel for branded searches.

What social profiles can be marked up?

There are a whole host of social media platforms out there, but which ones can you add to the Knowledge Graph? Using structured data, you can specify social profiles from:

  • Facebook
  • Twitter
  • Google+
  • Instagram
  • YouTube
  • LinkedIn
  • Myspace

Although other social profiles won’t currently show within Google search results, it’s still a pretty good idea to include any other accounts where you can.

What structured markup do I need to add to my business’s website?

The schema.org vocabulary and the JSON-LD markup format are open standards for embedding structured data in web pages. If you’re not familiar with them, Aaron Bradley over on SEOSkeptic wrote a fantastic post on JSON-LD and its relationship with the Knowledge Graph that can help you out.

Essentially it’s pretty straightforward and requires only a few elements:

  1. The schema.org Organization type
  2. Your business name
  3. Your website’s official URL (homepage)
  4. Links to your social profiles, referenced through the sameAs property
  5. The social profile links in your markup must also appear on the page itself.

Here’s the template for a business to specify their social profiles in their Knowledge Graph (assuming they have a Knowledge Panel in the first place):
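The original code snippet hasn’t survived in this version of the post, but a minimal sketch of the kind of markup involved (with placeholder names and profile URLs) looks like this:

<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Organization",
  "name": "Example Business Ltd",
  "url": "http://www.example.com/",
  "sameAs": [
    "https://www.facebook.com/examplebusiness",
    "https://twitter.com/examplebusiness",
    "https://plus.google.com/+examplebusiness",
    "https://www.youtube.com/user/examplebusiness",
    "https://www.linkedin.com/company/example-business"
  ]
}
</script>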

 

What about my personal social profiles?

Well, I’m a sucker for hacking the Knowledge Graph for my own personal amusement, and it seems that you can add personal social profiles to the Knowledge Graph too. Though this may not work for everyone, it is a clear step towards giving search engines a much bigger hint at who we are and where we converse online.

Here’s the template for a person to specify their social profiles in their Knowledge Graph (assuming they have one):
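Again, the original snippet isn’t shown here, but a hedged sketch for a person (placeholder name and profile URLs) would be along these lines:

<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Person",
  "name": "Jane Doe",
  "url": "http://www.example.com/",
  "sameAs": [
    "https://twitter.com/janedoe",
    "https://plus.google.com/+janedoe",
    "https://www.linkedin.com/in/janedoe"
  ]
}
</script>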

You can see this code in action on the blog; just view source.

You can insert these tags into any area of an HTML page on your company’s official website, whether that be the <head> or the <body>, happy in the knowledge that it won’t affect how the webpage looks to users. Furthermore, thanks to Google’s improved Structured Data Testing Tool, you can now verify that your JSON-LD markup can be processed properly, so when Google next crawls the page your social profiles will become eligible to be used in search results.

Simple eh?

Will you be using this? I’d love to hear how you are using structured data to deliver more information to crawlers in the comments!

Digital Marketing Show 2014

The guys over at the Digital Marketing Show were kind enough to invite me down to the ExCeL London to give a talk on search.

After much thinking (and ample amounts of procrastination), I thought it’d be a nice opportunity to highlight some of the changes we’ve seen in SEO over the past few months and give a few hints about what digital marketers need to think about when creating a strategy that includes search.

Content Marketing and The Modern Consumer

There is a host of content marketing articles on the web, from content creation and optimisation to more strategic pieces around the finer points of the craft. The more of these I come across, the more I keep coming to the same conclusion.

Why is no-one talking about the customer?

The customer (or consumer, user or whatever other pseudonym you wish to apply to them) has never lived in a more connected society than they do today. Face to face conversation, landline phone calls and postal mail have been joined, and to a degree replaced, by Snapchat, Twitter, Instagram and the seemingly ubiquitous Facebook. Add to that the ability to search almost the entire corpus of human knowledge through search engines like Google and Bing, and you begin to see the communicational prowess of the modern consumer. This evolution in communication has meant that content, and the way we use it to communicate to customers, needs to evolve too.

Just look at the prevalence of mobile devices, one of the fastest-growing trends among modern consumers. Advances in technology mean there are now countless smartphones, tablets and handheld devices connecting consumers with the Internet. The key is to recognise that mobile devices are not just phones anymore. Within retail, for example, smartphones play a central role at the point of sale as shopping assistants, with some consumers already using their mobile phone to make better purchase decisions or buy cheaper elsewhere.

Buyers tend to combine three sources of information to make good decisions in such a short time: media, social contacts and sales staff. According to a study by the Retail Revolution, the most commonly used sources for independently gathering information are newspapers, television and the Internet (48-90%). Between 43% and 68% use recommendations from friends and acquaintances as an orientation, while 24-80% seek help from sales staff. Significant differences exist from sector to sector. As a rule, the more media-savvy consumers are, the more likely they are to obtain their own information. The less experience consumers have in a particular shopping sector, the more likely they are to also ask friends, acquaintances, or sales staff. For content marketers, it will become increasingly important not only to provide content that helps convert, but to offer customers the right information in the right place, often tailored to the context of the user and their device.

I guess the point of this article is to highlight the simple thought that while (some) core marketing principles still apply, your content marketing strategy must reach an increasingly fragmented and attention-deficient consumer…and don’t you forget it.

Recommended Reading

Absolute Value: What Really Influences Customers in the Age of (Nearly) Perfect Information – Itamar Simonson

Market-Led Strategic Change: Transforming the Process of Going to Market – Nigel Piercy

The Conversation Manager: The Power of the Modern Consumer, the End of the Traditional Advertiser – Steven Van Belleghem

Could Branded Web Mentions Supplement the Link Graph?

With news of an imminent Penguin algorithm refresh on the cards, I’ve begun to re-examine how Google could use data extracted from the web to help determine if a site triggers a Penguin penalty action when refreshed.

Last week I came across an academic paper written by three Spanish scholars (José Luis Ortega, Enrique Orduña-Malea and Isidro F. Aguillo) entitled “Are web mentions accurate substitutes for inlinks for Spanish universities?”, which looks into the relationship between hyperlinks and web mentions and attempts to establish whether the latter can be used to supplement the link graph, with a view to eventually replacing it.

“In order to predict inlinks, URL mentions are enough to predict (in 82 per cent of cases) the number of inlinks that a website receives whereas the title mentions should be rejected.”

The correlation analysis shows that the closest web mention alternative to inlinks is URL mentions as they may better express the transitivity of a hyperlink.  The similarity of these results with the findings obtained in the analysis of the Spanish university system reinforces the hypothesis that these different web mention types could be used as a proxy to measure the web impact and visibility of a web site on the web.

There have been rumours within the community suggesting that brand mentions count as SEO-friendly links and, in turn, confirm the significance PR plays in SEO performance. This idea is based on patent exploration originally uncovered by the blog SEO by the Sea (a respected SEO blog covering search engine patents), further discussed by industry thought leaders at Moz, and ultimately brought to light in the PR world by the industry-leading publication SHIFT Communications.

The point in question is in a reference to an “implied link” element within Google’s US Patent 8,682,892. The patent argues that links to a group can include “express links, implied links, or both.” The “implied link is a reference to a target resource, e.g., a citation to the target resource.”

Are “brand mentions”, in the classic sense of public relations, truly the objective in this portion of the patent? Bill Slawski, the author of SEO by the Sea, argues in both Moz and SHIFT articles that the word “brand” is not mentioned in the patent filing, and the “implied links” mention most likely covers “entity associations.”

Here I agree. When defining entity associations, brand mentions could be implied but shouldn’t be assumed. Entity associations are concepts that businesses or entities are “known for”, which is a more complex discussion than just brand mentions. There is a fantastic resource for more background on entity associations.

Beyond entity associations, “implied links” could also refer to unlinked URLs written in text, as a blog comment on the SHIFT article indicates. The point is that we’re all interpreting the patent information in some way or another and have no concrete way of knowing the true aims. Bill Slawski concedes that drawing the connection between entity and brand “isn’t much of a stretch, but it doesn’t limit the patent the way that just saying that this applies to brands might”.

Do brand mentions have a direct impact on SEO today? Despite various experiments showing suitable correlation, it is still difficult to assess. The information found in this patent filing is interesting and exciting, but it’s difficult to conclude that brand mentions, linked or not, have a direct influence on a website or web page’s ability to rank well in search engine results. One thing is certain, though: the search engines continue to refine, reassess and optimise the factors that should impact the results a user requests, and they recognise that offsite factors beyond the traditional “inbound link” should play a part in establishing relevance.

Where I see the clear opportunity is for SEO and Digital PR practitioners to collaborate in a much more cohesive fashion. This is something we can start making inroads on today, before SEOs are chasing their tails on a goldrush of citations (anyone remember the early 2000s?). SEOs need to better understand the organisation’s brand, including its objectives, target markets, points of differentiation, and the competitive landscape in a more traditional sense. SEOs can also help PR pros understand how to more succinctly connect the dots between brand mentions and SEO visibility.

Our future isn’t private

Stories of NSA spying and the threat to our privacy have rippled through the press again in the past week or so, thanks to Julian Assange and Edward Snowden’s appearance at the Moment of Truth rally in Auckland. They have done much to raise the profile of government intrusion into our personal lives, but as more and more data is collected by corporations, are we looking at an enemy closer to home?

A few days ago I attended BrightonSEO, a fantastic SEO conference that I have been to many times and was fortunate enough to speak at earlier in the year. One talk that really hit home with me was from Ian Miller, who stepped away from digital marketing and looked into the advancements that Google is making, through research, development and acquisition, into the Semantic Web. I’ve long felt that Google’s advancements would come thick and fast, but I hadn’t fully comprehended just how far along they are now.

While fellow conference attendees and I returned home and took stock of the information we’d just absorbed; a clerk at the United States Patent office was priming his rubber stamp of approval for a new search patent from Mountain View.

Today, Google was granted a patent that outlines “a computer implemented method for using search queries related to television programs”. This seems pretty benign, but it really isn’t. Google’s new patent outlines a method to determine, through an extensive TV listing database cross-referenced with a user’s location, what a searcher is watching at that very moment and adjust their search results accordingly.


Bill Slawski does a great job of disseminating what the patent actually means in the short-term but I can’t help but feel that this is the beginning of something much bigger; the start of our future without secrets.

A life without secrets

One of my favourite books of all time is the classic novel 1984 by George Orwell; if you haven’t read it, I highly suggest you buy it now. The book explores a multitude of issues in a dystopian world where the government has total control over not only the public’s behaviour but also their thoughts, thanks to relentless propaganda and constant surveillance. This is of course a far cry from our current situation, but one aspect of the novel became too vivid to ignore: that of the Telescreen.

Telescreen
The Telescreen is a device that operates as both a television and a security camera, used by the ruling Party to keep its subjects under constant surveillance, thus eliminating the chance of secret conspiracies.

Google released a new feature back in June 2013 that gave Google Now the ability to listen to TV programmes, identify them, and give more information about the episode you are watching. That capability is missing from the above patent, but it is an obvious improvement on its design, which begs the question: when (or if) this becomes integrated with Google search, will Android devices become portable telescreens, constantly listening to our day-to-day lives under the guise of an advanced “feature”?

Of course this is just one part of the puzzle, but there are many more coming to the surface all the time. Most recently, if you’re a Facebook user you’ve probably noticed that the company is forcing users to download the Facebook Messenger app if they want to send and receive messages. This prompted Jonathan Zdziarski, a noted author and expert in iOS-related digital forensics and security, to look into the application and tweet: “Messenger appears to have more spyware type code in it than I’ve seen in products intended specifically for enterprise surveillance.”

In an email to VICE’s Motherboard, Zdziarski also told reporter Matthew Braga that Facebook logs “practically everything a user might do within the app.”
“[Facebook is] using some private APIs I didn’t even know were available inside the sandbox to be able to pull out your WiFi SSID (which could be used to snoop on which WiFi networks you’re connected to) and are even tapping the process list for various information on the device,” he wrote.

Herein lies the ever-increasing problem with our digital lives, and it is a problem that we can’t lay entirely at the door of Facebook, Google and the other tech giants. Over the past few decades we have fuelled their success, offering up vast amounts of personal information to these services with little regard for how it could potentially be used. I’ve written before about the level of personal data I have given up to Google’s search engine, which has allowed it to “know” more about my personal life than some of my closest work colleagues. Unfortunately we are now at a critical mass where the amount of data freely available about us online has become dangerous.

A brilliant TED talk by Carnegie Mellon professor Alessandro Acquisti suggests we are making privacy tradeoffs as a result of the analysis of big data. Privacy-cracking techniques that until recently were not available broadly are now essentially open to anyone with an Internet connection. Facial recognition, for instance, has improved exponentially in recent years. He shows a project where he found he could take a photograph, match the face to publicly available information, and use the results to predict sensitive information such as a Social Security number. The most worrying part of his talk looks further ahead:

“Pushed to an extreme, you can imagine a future with strangers looking at you through Google Glass or their contact lens, and with seven or eight data points about you they could infer anything else about you,”


Marketers of the future will be able to scour your Facebook contacts, find your two best friends, and then blend their portraits to form a composite photograph. So next time you’re looking to buy something, the spokesperson will be an oddly familiar, friendly face, unrecognisable but subconsciously influential.

As Acquisti concludes, “One of the defining fights of our time will be the fight for control over personal information”, and it seems we are losing more ground by the day. Another of my favourite books of all time is Aldous Huxley’s Brave New World, where technologies invented for freedom end up coercing citizens, and it seems as though we are sprinting towards a similar fate. The game is on, in other words, whether we like it or not.

If you still wonder whether this future without secrets is dangerous, or whether we should care, the simple answer is: yes.

The more difficult question is can we realistically stop it?

Machine Personas

As digital marketers we are tasked with creating online experiences for our users be it through content, creative or technical solutions. One area the web community is beginning to wake up to is the notion that the sites we build need to work for machines as well as humans.

What sort of creature man’s next successor in the supremacy of the earth is likely to be. We have often heard this debated; but it appears to us that we are ourselves creating our own successors: the machine. We are daily adding to the beauty and delicacy of their physical organisation; we are daily giving them greater power and supplying by all sorts of ingenious contrivances that self-regulating, self-acting power which will be to them what intellect has been to the human race

–  Extract from Darwin among the Machines by Samuel Butler

This is an amazing insight into our current thought process when it comes to robotics, the semantic web and even wearable technology, but what is particularly astonishing is the amount of foresight Butler had. To put his vision into context, “Darwin among the Machines” was written in 1863: 126 years before the invention of the WWW, around 100 years before the Internet and only 42 years after Michael Faraday demonstrated the principle of the electric motor.

And yet today, in 2014, this concept couldn’t be closer to the truth. Every day ‘we are giving them greater power’: greater power to act on our behalf, to act autonomously. One way we do this is by empowering them with data and systems, allowing them to access, retrieve, consume and aggregate content from the web.

“I thought this was a digital marketing blog?” I hear you scream. Well, yes. So what does this mean for marketers, designers and developers? Working recently on a couple of web projects, I realised that although using semantic web technology within search is now a well-established, trusted and accepted model, we put surprisingly little forward planning towards designing and developing web solutions that work for machines as well as humans.

Research has shown that brands that have jumped on the Schema.org bandwagon have seen increases in click through rates, traffic and ultimately conversion. So why is it that, despite the obvious benefits of a more semantic experience for web crawlers, we’re still primarily designing websites, interactives and content for human consumption only?

We are missing a persona: the machine-based persona.

The Machine Persona

Personas have emerged in the past few years as a means to put a human face to the soulless and faceless stats of the demographic data scientist. The problem is that just considering a demographic profile misses that human element. So personas were invented to help the new breed of marketer understand the human side of the data.

Broadly speaking, the goal of establishing personas comes from the world of user-centered design; to understand important tasks in the UI and the user’s motivations. Like traditional personas, machine-based personas need to have a name, a picture, some goals, background information and usage scenarios on how the persona would interact with the website or application. It is interesting to note, however, that machine-based personas are functionally-led rather than emotionally-led, which means that a machine persona’s descriptions can be more methodical and factual.

Let’s take an example in what would be, in many cases, a key machine persona: GoogleBot.

Machine Persona - Googlebot

Googlebot

  • Background – GoogleBot is a web crawler, a computer program that browses the World Wide Web in a methodical and automated manner.
  • Key goals – as a web crawler its main objective is to capture and index as much information about sites on the web as accurately and as fast as possible.
  • Usage scenario – on a regular basis GoogleBot crawls your site and identifies any changes since its previous visit. Your site publishes local events, and it happens that the GoogleBot webspider has functionality that can deliver your site’s events straight onto the search engine’s results page, ultimately providing more exposure for each listing as well as your website. The spider needs the content to be semantically structured for this feature to work (a rough sketch of such markup follows below).
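By way of illustration only, semantically structured event content for this scenario might use schema.org Event markup along these lines (every detail below is a placeholder):

<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Event",
  "name": "Example Local Food Festival",
  "startDate": "2014-09-20T10:00",
  "url": "http://www.example.com/events/food-festival",
  "location": {
    "@type": "Place",
    "name": "Example Park",
    "address": {
      "@type": "PostalAddress",
      "addressLocality": "Cardiff",
      "addressCountry": "GB"
    }
  }
}
</script>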

This example shows that by adding a simple machine-based persona we are able to identify specific scenarios and opportunities that could be easily missed otherwise, thus bringing the semantic needs of crawlers front and centre when designing/re-designing new websites, interactives and content.

While this may seem like a trivial step for an SEO to get their head around, for UX designers and developers it helps map the needs of crawlers into the design and build process, ensuring that semantic markup isn’t just delivered eventually but, as part of a user story, is given priority over non-essential development tasks.

Quick and Dirty Guide to Surviving BrightonSEO

It’s that time of year again. Brighton is awash with SEOs, PPCers and other digital types, all descending on the Brighton Dome for a day of nerd-based festivities. Taking place across three venues at the Dome, BrightonSEO is the largest natural search conference in the UK, with over 2,000 delegates scrabbling for tickets from all over the UK and further afield, and of all the conferences I have been to in the past five years it is definitely the most fun to attend.

Having said that, there are a few things you need to know when attending a conference like this:

Get your grub on

Brighton is an amazing place to eat and drink, with a whole host of brilliant independent venues, so try to avoid the lure of a national chain and try something a bit more local. I’ve knocked up a quick list of some great local places to eat:

https://foursquare.com/andrew_isidoro/list/brightonseo-food-for-the-masses

If I could go back in time and tell my pre-conference self one thing, it’d be that SEOs love a tea and coffee break, and the queues for food and drink at peak times can become unholy. The bar area is usually totally impenetrable behind a wall of people, so try to leave as soon as a talk finishes to beat the starving masses, before the wait for a quick caffeine fix gets long enough to qualify you for citizenship.

Pay attention, listen and absorb

I know it’s tempting to start jotting down notes, live-tweet what the speaker is saying and make a start on that roundup blog post, but that’s really not what you are here for. I’m as guilty as anyone of this, but I try to eliminate all these things while the talk is going on so I can fully take in and understand what is being discussed. There will be plenty of time for tweeting the next day while recovering from your hangover – which brings me nicely onto…

Get your beer on

BrightonSEO is known for its afterparties. Kelvin does a fantastic job of organising the day and I think it’s only fair that you show your face and give thanks. Plus, they are really good; the drinks flow, the music is usually top notch and it’s a great way to meet new people without the social awkwardness of trying to remain professional in front of your peers – especially when they are asking you to come to “a little club they know”. Eight hours later you’re a mess, but you’ve met some really great people and had the time of your life!

So go forth; enjoy the day, it’s going to be a cracker!

Knowledge and Understanding Your Audience

‘Any fool can know, the point is to understand’ – Albert Einstein

If there is one thing I have learnt during my time in digital, it is that there is a big difference between knowing and understanding, especially when creating something with a specific audience in mind; you might know who they are, but do you understand them?

Knowing. It’s just knowledge…

Understanding something is being able to take your knowledge and put it to use.

To put this into perspective, allow me to give you an example: I have owned a computer for many, many years, so I know how to use the keyboard, mouse, system updates and WiFi. I don’t understand the mechanics of what’s going on when I hammer the keyboard with my fingers. Nor do I care, as long as my actions allow me to log in to Twitter, read my emails and write for this blog.

The real difficulty is moving from knowing to understanding; from simple regurgitation of facts to using your knowledge to innovate autonomously.

Know a number, understand a person

Within digital marketing we often quantify people. It’s one thing to know you had 15,000 unique visitors to your site last month, 2,000 Twitter followers or 500 email signups, but it’s another thing to understand their behaviours, lifestyles, likes and dislikes. It’s good to keep tabs on the data, but how can you target those ‘2,000 Twitter followers’ if you do not understand their pain points?

The trouble is the boundaries between knowing and understanding can often be blurred. It’s easy to think you understand something when you have observed or read something many times. That regular exposure doesn’t mean an increased level of understanding though, it’s just reinforcing what you already know.

The more we understand, the better informed our decisions will be during the campaign’s lifespan and that’s why campaigns, regardless of budget, should begin with a research phase. Even if the results of that research confirm what was assumed about the audience it is worth investing time to get that confirmation. Chances are you will learn a few new things about your personas along the way that can better help you understand them and create future user focused campaigns.

To me the idea is simple. You cannot target people if you don’t understand them. You may still reach some of the intended audience, but without an understanding of their needs you also risk offending users. At the very least you will be delivering campaigns that are poorly researched and unlikely to reach their potential.

Is the Knowledge Graph Ethical?

“A system of morality which is based on relative emotional values is a mere illusion, a thoroughly vulgar conception which has nothing sound in it and nothing true” – Socrates

Socrates posited that an ethical argument based on emotion was not one worth discussing, yet as SEO consultants we have been guilty of this in recent weeks. Back in 2012 Google introduced a feature to the search results called the Knowledge Graph. It gave users a level of interaction with the SERPs that they had never seen before (or since). There has been much talk of this aspect of search over the past few months, but the ethical questions around the implementation of raw data into search deserve to be discussed with logic at the forefront. When discussing ethical matters like this, I’ve long been an advocate of the voice of the collective, so for this post I have decided to surround myself with people much more intelligent than I, such as Bill Slawski, Dr Pete Meyers and Gianluca Fiorelli. We briefly discussed the topics in question over email, and here were their answers:

With Google expanding to include more panels pulled from pages without markup, how do you see information retrieval affecting brands, publishers and retailers alike?

Bill Slawski

The purpose behind knowledge panels is really two-fold. The first is to improve discoverability, to make it easier for people who don’t know a topic well to learn more, so that they have related information and topics to search for. The second purpose is similar to that of a snippet in search results. Knowledge panels provide a representation of the entities they are about, and include some disambiguation information when there are other entities or concepts by the same name, so that a searcher can explore those as well. In neither instance is the purpose to replace web pages or documents that might be pointed to by Google, but instead to give people more to search for from the search engine, including, in many instances, topics that people often search for next historically when they perform a search for the original entity.

Dr Pete

I think it depends a lot on the vertical. It’s easy to look at a quick answer derailing a result and see nothing but bad news. It’s fair to ask, though – if your business is nothing but aggregating easy answers (plus ads, most likely), how much value do you add? Sites that listed dates for major holidays provided a service for a while and made good money on ads, but now that Google can answer a question like “When is Christmas?”, that business model is over. Being brutally honest, though – it wasn’t a very strong model to begin with. On the other hand, imagine you’re a local restaurant, and Google is serving up a rich knowledge panel with your photos, address, telephone and today’s operating hours. Have they potentially taken a click from your website? Sure, but does that matter? They’ve made your brand look more credible and given people the information they need to find you. If those people walk in the door, it doesn’t matter where the information comes from. I’m not arguing about Google’s intent or responsibility to webmasters (I think they’ve milked “good for users” a bit too hard lately). I’m just saying that the impact on your business can vary wildly. Some people will do well.

Gianluca Fiorelli

I think it is already doing it, if what implementation data are telling us about the real use of schema.org and other structured data is true, it being quite small with respect to the total amount of web documents indexed by Google. A very simple example is how Google is able (well, not always) to interpret authorship thanks to the by-line, even with the rel=”author” being absent. How are brands, publishers et al going to be affected? I think that at first they will see and notice a traffic decrease, probably… But what they will also see will be – IMHO – a better quality of the traffic they still receive, also from Knowledge Graph navigation. They will lose traffic that tends to bounce a lot or that is never going to convert. Moreover, if web site owners/SEOs are able to monitor and control what Google is “scraping” from them, they can gain visibility above the fold in the SERPs, which is quite a precious value right now that organic search snippet visibility is shrinking.

Many see Google’s expansion of the Knowledge Graph to include more and more terms as aggressive. Do you, and would you ever, recommend against schema.org or other microformats to limit the information passed to search engines?

Bill Slawski

Search engines have been working to extract structured data from the somewhat unstructured nature of web pages for a long time. The labels from microformats and schema might make it easier for a search engine to extract information from a page, and if you want your page to be a source of such information, including that kind of markup isn’t a bad idea. I can envision some people portraying Google’s knowledgebase to be “aggressive”, and there have been people who have written about search engine bias, and a desire for search engines to show their own properties instead of those from original sources. But often those other properties are just more finely focused vertical searches.

Dr Pete

There may be isolated cases, but in general, I wouldn’t recommend that. Google is going to find ways to extract data from someone, somehow. Either you can control that data and make sure it comes from you, or it can either (a) come from a competitor, or (b) come from you however Google finds and mangles it. From a purely commercial standpoint, I’m not sure what choice we have but to play the evolving game.

Gianluca Fiorelli

No, I wouldn’t. What I would suggest, and actually what I have been suggesting to my clients for some time now, is to craft their content in order to have “answers” ready to be used by Google in the Knowledge Graph and Answers box, but to put special effort into offering in-depth content on the same page. For instance, using as an example a site offering IP information: if it was just answering a question like “what’s my IP” with only the IP number of a domain name, then that site is going to sink due to the Answers box. But if on the same page the site offers deeper information, such as what other domains are hosted on the same IP, what country that IP is assigned to, what historical information we can find about that IP, whether that IP was ever flagged for malware and what kind of malware, and so on, then we are offering information that will be valuable to users and that Google cannot offer with a simple answer.

Many webmasters have complained about results containing scraped data; but in your opinion is Google doing anything wrong? Is there any logical or ethical argument (from a user perspective) against Google presenting scraped data within panels?

Bill Slawski

One of the tenets of copyright is the concept of fair use, and there’s a 4 pronged test for whether a use of someone’s artistic work is or isn’t fair use. Facts themselves aren’t something you can copyright, though unique compilations of facts have been shown to be. So, Abraham Lincoln’s height isn’t something that you can copyright, and the fact that Bill Clinton plays the Saxophone isn’t either. If a summary of facts is shown in a knowledge panel from a templated Wikipedia biography box, that information isn’t necessarily going to stop people from visiting the Wikipedia page, and may actually encourage more people to visit it.

Dr Pete

I think they’re starting to tip the balance. Google will argue that this data is good for users and that they’ve made webmasters a lot of money over the years. This is true, and we should be honest and admit it. Many of us have made a lot of money off of Google and they leveled the playing field for a while for small business. On the other hand, they make $60B/year, and the vast majority of that comes from either putting advertisements on search results extracted from our sites (AdWords) or on ads placed directly on our sites (AdSense). There’s always been an implied promise – Google will make money from our data, but in return they’ll drive traffic back to us. Once they start to extract answers or create knowledge panels that just link to more Google searches, the relationship starts to break. Is that illegal? No. Is it unethical? I think it’s a broken promise, even if the promise is implied. I think they run the risk that, pushed too hard, we may block our sites and abandon Google. They still hold most of the power, admittedly, but I don’t think they should take the balance lightly.

Gianluca Fiorelli

My first reaction, as a marketer, is not really a happy one when I see Google “scraping” an answer from a site. But as a user I must admit that it really makes my life easier, and if the answer is followed by a link to the source (and that link should be more visible as such, not in light grey), I find myself clicking on that link many times and with far more conviction than when I find the same hint in an organic search snippet. And that is surely better for the website owner too. So, after more measured reflection, what I think Google is doing is not really scraping, but: a) offering an immediate answer for those who are looking for just that, especially on mobile; b) performing a sort of curation of its own indexed data.

Where do you see the Knowledge Graph expanding to by 2020?

Bill Slawski

I can see more people working to help expand the amount of information shown in knowledge panels by 2020. We will see information that is publicly accessible, but not necessarily publicly available on a wide scale, showing up in knowledge panels, Google Now cards or Google Field Trip cards. These will include things like information from historical marker programs, inscriptions on landmarks and memorials, or documents like historical register applications.

Dr Pete

I strongly expect the on-the-fly Knowledge Graph to expand rapidly. Google can’t rely on human-edited databases for entity data – they have to be able to create entities and relationships directly from their index. Honestly, though, that expansion will happen in 2014-2015. By 2020, Google will have made the SERP completely modular, allowing for any variation of device, screen, resolution, etc. Ten-result pages will be gone and replaced with fully dynamic combinations of knowledge panels, targeted results (maybe just one or a handful, depending on the use case), and entity/relationship browsing. I’d expect something less linear and more mind-map style, especially for data on people, places, and things. I’d also expect the Knowledge Graph to expand into social and be more and more personalized. Part of that is already available in Google Now cards, but I’m not just talking about things like your flight status. I think Google will try to extract your own relationships and build on your network. There’s a huge untapped commercial potential in being able to personalize product recommendations built on your trust of your own connections, for example. Your Knowledge Graph experience and mine in 2020 may be completely different.

Gianluca Fiorelli

It’s hard to know or even to predict. What I expect is that Google will start looking at ways to prevent people from “spamming” the Knowledge Graph itself, which is now theoretically possible (and easy), as we can manipulate the sources from which a big part of the information is pulled.

In summary

The question of ethics surrounding the Knowledge Graph will no doubt continue for many months and years, but there is one fact that is not going away: users love it. Providing answers within the search results not only gives users access to information at a glance but also lets them do all of this within Google’s environment. That’s good UX. To paraphrase Socrates once more: “From the users’ deepest desires often comes the SEO’s deadliest hate.” While the Knowledge Graph continues to give users a superior search experience, we can expect Google to display more and more information within the SERPs. Ethical or not…

More on influencing the Knowledge Graph here but, as always, let’s discuss in the comments!

H2Only: Helping the RNLI

I’m taking the H2Only challenge: swapping my favourite drinks and drinking nothing but water for two weeks, from 27 May to 10 June, to raise money for the RNLI. Anyone who knows me will know just how much coke, beer and general fizziness I consume on a daily basis, so this will not be as easy as it sounds. I’ll also be navigating a moving-house party, a moving-in party and a few birthdays too; this will be quite a challenge, but all for a good cause.

I’m saying farewell to fizzy pop, bye-bye to beer and laters to lattes, drinking nothing but water for two weeks to raise money for the RNLI and its lifesavers, who drop everything at a moment’s notice to save lives on the water.

 Help me through moments of weakness, sponsor me at https://justgiving.com/Andrew-Isidoro-h2Only and follow the hashtag at #H2Only

#BrightonSEO – Hacking the Knowledge Graph

‘Hacking the Knowledge Graph’ – Andrew Isidoro, SEO Manager, GoCompare.com – BrightonSEO, April 2014


A talk on how Google is making use of semantic markup, Linked Data sources and its own data to fuel the Knowledge Graph, and how SEOs can influence these to highlight themselves as entities.
An extension to ‘I Am An Entity: Hacking the Knowledge Graph’, which led to a discussion on the ethics of the Knowledge Graph.

Links to further reading

http://wordlift.it/
http://www.seobythesea.com/2013/05/google-knowledge-graph-results/
http://moz.com/blog/the-day-the-knowledge-graph-exploded
http://moz.com/blog/a-deep-dive-into-google-myanswers/

Problems with Breaking News with the Knowledge Graph

The world awoke on Tuesday to a new Microsoft CEO. Satya Nadella, the former Head of Cloud Computing, had been promoted in a pretty uneventful affair, replacing the incumbent CEO, Steve Ballmer.

The media picked the news up exceptionally quickly and the story spread around the web like wildfire, with news outlets, bloggers and social media all talking about what the appointment meant for the business and what changes Nadella would be likely to make.

There was one place, however, that didn’t notice anything had happened: Google’s Knowledge Graph.

Google’s news search served an updated story on the chief executive switch, of course, but the first visible result was provided by the Knowledge Graph, and despite it being a database containing encyclopedia entries on about 570m concepts, relationships, facts and figures, it was quickly made out of date by the Microsoft move; a fact I’m sure wasn’t lost on Nadella, a former internet search employee.

Steve Ballmer

I was alerted to this anomaly by Samuel Gibbs of the Guardian who wrote about the lag in the system. So I jumped at the chance to examine the issue in real-time.

The Knowledge Graph is fuelled by a number of knowledge bases that push facts to be used in information panels; however, as shown in search patents unearthed by Bill Slawski, these facts need to be verifiable. Essentially, Google needs two sources of information to verify against before it will insert data into a panel. This seemed like an ideal area to investigate further.
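
The patents don’t spell out an exact mechanism, but as a rough illustration of the idea, here is a minimal sketch of a “two independent sources” check. The function and the sample data are entirely hypothetical and are not Google’s implementation.

```python
# Hypothetical sketch of the "two independent sources" idea described above.
# Nothing here reflects Google's actual implementation.

def is_verifiable(fact, sources, minimum_sources=2):
    """Accept a fact only if it appears in at least `minimum_sources`
    independent knowledge bases."""
    confirming = [name for name, facts in sources.items() if fact in facts]
    return len(confirming) >= minimum_sources, confirming

knowledge_bases = {
    "wikipedia_dump": {("Microsoft", "ceo", "Satya Nadella")},
    "freebase_dump": {("Microsoft", "ceo", "Satya Nadella")},
    "other_source": set(),
}

ok, found_in = is_verifiable(("Microsoft", "ceo", "Satya Nadella"), knowledge_bases)
print(ok, found_in)  # True ['wikipedia_dump', 'freebase_dump']
```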

As Wikipedia is seen as an important source of information for the Knowledge Graph, I began scanning through DBpedia, a structured database built from Wikipedia and used by many semantic web applications. Diving into the RDF output for Nadella’s and Ballmer’s entries turned up nothing to cause alarm: both sets of data had already been updated to include their new employment status.
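
If you want to repeat that kind of check, DBpedia exposes each resource as machine-readable data. A minimal sketch follows; the keyword filter is only a guess, as the exact predicate names vary from entry to entry.

```python
# Minimal sketch: pull the DBpedia data for an entity and list any
# employment-related predicates. The keyword filter is an assumption;
# inspect the full output to see what is actually present for a resource.
import json
import urllib.request

def dbpedia_predicates(resource, keyword="employ"):
    url = f"https://dbpedia.org/data/{resource}.json"
    with urllib.request.urlopen(url) as response:
        data = json.load(response)
    entity = data.get(f"http://dbpedia.org/resource/{resource}", {})
    return {pred: vals for pred, vals in entity.items() if keyword in pred.lower()}

for person in ("Satya_Nadella", "Steve_Ballmer"):
    print(person, dbpedia_predicates(person))
```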

Updating the Knowledge Graph

Now that there was one definite source of data on the web, I took to Freebase to edit their profiles to see if the Knowledge Graph could be “kicked into gear”.

After a few minutes I had entered Nadella’s new CEO status at Microsoft and updated Ballmer’s new employment details. Now I had to wait.

Using a tool called Page Monitor, I tracked the RDF output of Freebase to see if there was a correlation between the time of publication and the moment the Knowledge Graph updated with the new information.
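
Page Monitor is a browser extension, but the same check can be scripted. A rough sketch of polling a URL for changes (the URL below is a placeholder, not the endpoint I actually watched) might be:

```python
# Rough sketch of the Page Monitor idea: poll a URL and log when its
# content changes. The URL below is a placeholder.
import hashlib
import time
import urllib.request

def watch(url, interval_seconds=300):
    last_hash = None
    while True:
        body = urllib.request.urlopen(url).read()
        current_hash = hashlib.sha256(body).hexdigest()
        if last_hash and current_hash != last_hash:
            print(time.strftime("%Y-%m-%d %H:%M:%S"), "change detected")
        last_hash = current_hash
        time.sleep(interval_seconds)

# watch("https://example.com/rdf-output")  # runs until interrupted
```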

Sure enough, just a few hours after editing, the RDF dump had updated, followed quickly by an updated entry within the SERPs:

Microsoft CEO - Satya Nadella

So what does this tell us about the Knowledge Graph?

Verified sources
We have long understood that the Knowledge Graph needs multiple sources of information to populate a panel for an entity, and thanks to patents we had an indication that two separate sources of information would be enough to influence results. However, seeing this (albeit rough and ready) experiment in the wild gives a solid indication that this may well be the case.

Freebase as a source
Freebase is seen by many (myself included) as key to the growth of the Knowledge Graph and other semantic agents. This shows how Freebase data can also be used as a source of user-generated information that can be passed into the Knowledge Graph.

Time sensitivity is an issue
Last but not least, this debacle shows that for time-sensitive information such as breaking news, the Knowledge Graph simply isn’t ready. The process of becoming (or editing) an entity isn’t well known, and that hampers Google’s ability to keep its panels updated in (almost) real time.

As more and more results begin to show more dynamic knowledge panels like these, it’s the job of an SEO consultant to understand how these panels are created, why and how they can affect our clients in the real world.

Digital by default: the problem with online marketing

The web has been touted by many as the ultimate measurable medium, allowing digital marketers to assign value and resource to the best performing channel. However, the digital landscape, and indeed the entire internet-based consumer-to-advertiser relationship, has blurred marketers’ view of other useful channels.

Marketing budgets are increasingly being spread over multiple channels. Within digital alone, search is increasingly seen as a separate budget item covering SEO, PPC and often social media. While these may be seen as subsets of digital, they are vastly different in terms of creation, production and media purchase. Not only are their production costs different, but their intended target audiences also vary widely.

All too often marketers, agencies and communications departments calculate their ROI based on channel. However, calculating ROI like this misses at least two critical components: the specific role each channel plays and the interplay between the channels.

So often, digital marketers are guilty of holding onto a “Digital by Default” approach to marketing rather than considering how we can work together with alternative channels to build the most comprehensive strategy we can for our businesses or clients. We are not alone in our often ill-informed thinking that our discipline is the most beneficial to a brand, but we are one of the most stubborn when it comes to calculating ROI.

We focus on how SEO, PPC and social media interact so much that we often forget that offline media can be one of the best ways to attract new inbound leads and sales. We forget that television and radio advertising can drive social mentions, press coverage and branded traffic. We forget that PR and events can build better relationships with influencers than any Twitter outreach ever could, and we forget that as marketers we should go one step further to understand the interplay between our overall marketing efforts.

To do this, we can use media mix models to compare performance metrics for each media type when channels run individually against when they run together, giving some understanding of the interplay between different channels. Once we understand how channels interact, it becomes easier to plan complementary channels that achieve the best possible impact and return the best ROI.
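
As a toy illustration only (every figure below is invented), a basic media mix model can be expressed as a regression of a KPI against per-channel spend, with an interaction term capturing how the channels reinforce one another:

```python
# Toy media mix model: regress a KPI on per-channel spend plus an
# interaction term. All numbers are invented for illustration.
import numpy as np

# Weekly spend on two channels and the resulting leads (hypothetical data).
tv = np.array([0, 5, 10, 0, 5, 10, 15, 0], dtype=float)
search = np.array([2, 2, 2, 8, 8, 8, 8, 0], dtype=float)
leads = np.array([20, 45, 70, 55, 95, 130, 160, 10], dtype=float)

# Design matrix: intercept, TV spend, search spend and a TV x search term.
X = np.column_stack([np.ones_like(tv), tv, search, tv * search])
coefficients, *_ = np.linalg.lstsq(X, leads, rcond=None)

for label, value in zip(["baseline", "tv", "search", "tv_x_search"], coefficients):
    print(f"{label}: {value:.2f}")
# A positive interaction coefficient suggests the channels amplify each other.
```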

Agile Publishing: An Iterative Blogging Process

I’d like to start this post with a confession:

My name is Andrew Isidoro and I’m a flaky blogger.

Over the past few months I’ve allowed this blog to be a bit slack on the publishing front, yet it’s not for want of trying. There are over 20 posts in my backlog that I have written as drafts, yet none are finished, partly due to an increasingly busy schedule and partly due to my demanding a certain level of “polish” before a post goes live. To remedy this, over the next few months I’ll be taking a much more Agile approach to publishing this blog.

Agile publishing process

Agile isn’t a new idea. The term itself, obviously, comes from the software development industry, which has used agile development models for several years, and it was my time at Box UK that allowed me to pick up elements of an Agile workflow.

In tech, agile development means releasing iterative and incremental versions of a software product or website, getting comments from your customer about that version, learning from that response, and then repeating the process until you reach an improved finished state.

So after some thought I had the following questions:

  • Could this approach be applied to writing a blog?
  • Can you build a community around an author’s ideas and content?
  • Would it make a better blog?

Over the next few weeks my subscribers and other regular readers will notice that, for some blog posts at least, I’ll be trying something new: publishing early drafts well before the post is in any kind of “finished” state.

I’m trialling this approach with the following hopes:

  1. I hope this will encourage me to get started on topics which I feel will take some extended writing effort. In the past I have parked such posts and rarely got round to completing them.
  2. I hope that early publication will encourage readers to comment and feel they can influence the “finished” post.
  3. I hope it allows a more meaningful connection with people within the digital community; that maybe readers will be more likely to engage with the topic whilst the post remains “rough”, open for discussion and evolution.

This is something that I think could be a new way of forcing me to iterate faster, learn more and communicate more effectively with my peers. I’d love to hear your thoughts on this process and how you think I could improve it.

Learning to Code: How I Got 2000+ Visits With a Basic Sheetsee.js Twitter App

It’s no secret that high-quality content (or at least content of perceived high quality) does well in the digital sphere through social shares, inbound links and web traffic. It’s the basis on which the SEO community has turned its attention to content marketing for success.

There is a flood of blog posts about writing content, structuring content and pitching content, but all seem to miss a trend that has become more and more impressive in the results it can deliver for digital marketers: technology.

To quote Koozai’s Mike Essex:

“The future of Content Marketing is going to be driven by technology, both old and new, being used in clever ways that complement the core story.”

Until a few weeks ago my coding level was low at best. I knew how to knock up simple HTML/CSS sites, and I could hack apart a WordPress theme with some trial and error (mainly error), but I had never made anything from scratch or written JavaScript. Learning about the tech behind content pieces really enticed me to get my hands dirty.

Starting Out

I’ve always got a small notepad around with ideas for content, websites and side projects, so I knew the sort of thing I wanted to make. I decided on an idea I had been discussing with my girlfriend: how the English Premier League would look if it were ranked not on points and goal difference, but on Twitter followers.

I began to look at how I would get the app up and running and came up with 3 basic steps:

  • Create a database of teams and their corresponding Twitter accounts
  • Have the database update with live Twitter follower counts
  • Display the database in a simple league table

Sounds pretty simple right?

Getting My Hands Dirty

Armed with my simple roadmap I quickly gathered the teams and their respective twitter accounts and began to delve into the Twitter API to work out how I was going to retrieve and display live follower numbers. This is where my non-developer brain got a bit lost but after a few hours of reading I found a JavaScript library that would change everything.

Sheetsee.js is a JavaScript library, or box of goodies if you will, that makes it easy to use a Google Spreadsheet as the database feeding the tables, charts and maps on a website. To use sheetsee.js you’ll definitely need to know HTML and CSS, and know enough JavaScript to be able to hack your way around.

This all sounds great, but the real benefit of using Google Spreadsheets as the backend database is that it is super easy to use, share and collaborate with; once set up, any changes to the spreadsheet are auto-saved and go live on your site as soon as a visitor refreshes the page.

Using the example code that Sheetsee.js creator Jessica Lord offers as part of the documentation and a bit of XPath trickery, I was able to build a scraper that would pull in live Twitter follower numbers, store them in a Google Spreadsheet and then push that data to a live application.
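
The original build did the scraping with an XPath formula inside the Google Sheet itself, but the shape of the pipeline can be sketched in a few lines of Python. Everything below is illustrative: the handles and counts are sample values, and get_follower_count() stands in for the real scrape.

```python
# Illustrative sketch of the pipeline, not the original Google Sheets build:
# fetch a follower count per club, then write a league table sorted by followers.
import csv

teams = {
    "Arsenal": "Arsenal",
    "Liverpool": "LFC",
    "Manchester City": "MCFC",
}

def get_follower_count(handle):
    """Stand-in for the real scrape, which used an XPath formula in a
    Google Sheet. Hard-coded sample values so the sketch runs end to end."""
    sample_counts = {"Arsenal": 4_000_000, "LFC": 2_500_000, "MCFC": 1_800_000}
    return sample_counts.get(handle, 0)

def build_table(path="twitter_league.csv"):
    rows = [(club, get_follower_count(handle)) for club, handle in teams.items()]
    rows.sort(key=lambda row: row[1], reverse=True)
    with open(path, "w", newline="") as output:
        writer = csv.writer(output)
        writer.writerow(["Position", "Club", "Followers"])
        for position, (club, followers) in enumerate(rows, start=1):
            writer.writerow([position, club, followers])

build_table()
```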

The Finished Product

In just a few hours of tinkering I was able to put together a basic Twitter based Premier League table that updates every time a user visits or refreshes the page. You can check it out here.

Twitter premier League

The Traffic

Whilst I was pretty happy with how everything went, this was by no means something I was looking to release as a linkable asset. It’s not the prettiest of pages, it could do with a bit more love in some areas and isn’t mobile friendly at all. Despite this, I shared it with a few friends and my twitter followers to see if I could get some feedback on it.

I got a few comments on the build and a few people asking how it was made, but nothing out of the ordinary. Until I checked my analytics the following day and saw this:

Sheetsee.js App Analytics

I thought there might have been a problem with my tracking code until I took a look at the referrers. It turned out the page had been picked up by the Metro, who liked the angle for a news story. In hindsight, it was a perfect time to launch something like this: with so many of the top teams not having a great start to the season, it offered an alternative angle for sports journalists to compare and contrast. Add this to the large number of tweets and shares the article and app got, and we are really in business…

I’ll gloss over the news angle element (I have a whole other post on the way about Digital PR for that), but it did amaze me how a fairly generic application can generate such good coverage just because of the way it is uniquely displayed.

So whilst you don’t have to be a coder to understand what is happening, the important thing is that everyone in content marketing appreciates that this technology isn’t scary. Anyone in a content or digital role should be able to grasp why these pages work the way they do, the tech they run on and the technical issues behind the “features” they are requesting from their developers. Trust me, your web developers will appreciate your efforts.

Are you looking into coding? What are your motivations? How do you think technology will impact on content marketing in the (near) future? Leave a comment below!

Do You Notice Great Work?

It was 7:51 a.m. on a cold Friday in January, the hustle and bustle of the morning rush hour fully underway. Over the next 43 minutes the violinist performed six classical pieces, not just professionally but perfectly, while over 1,000 people passed by as part of their morning commute. The talented performer earned just $32, minus the few dollars and pocket change he’d used as seed money to get the ball rolling.

Each passerby had a quick choice to make. Carry on with their daily routine or break tradition to stop, listen and maybe donate to his cause.

His performance was arranged by The Washington Post as an experiment in context, perception and priorities. He played these classical pieces on a $3.5 million violin handcrafted in 1713 by Antonio Stradivari.

The violinist: none other than internationally acclaimed virtuoso Joshua Bell.

You could use this as a stick to beat the uncultured society that we live in (I make no excuses; I am one of the uncultured proles), but many of us go through our lives blinkered, hurried and ironclad, unwilling to let a chance encounter with something beautiful cause a hiccup in our routines.

Notice Great Work
My point is this: your peers and superiors are going about their daily business and not thinking about you, your work or the improvements you are making to your process, department and company as a whole. I’ve long been a believer in intrapreneurship, but it’s not enough to just innovate in your role. You have to show your innovations, share them with your colleagues and entice them into thinking about improvements in their own roles.

At the Agile development consultancy where I work (Box UK), we run daily stand-up meetings, not only to keep the team on the same page with existing workloads but also to give our digital and marketing teams the chance to highlight the good work they have done. It’s not something I was used to, but as you get more involved with this feedback process, you’ll see how beneficial it can be.

Would you appreciate or even notice a colleague doing great work if it wasn’t mentioned at lunch or in a meeting? And if you didn’t, how do you make sure your best staff are supported in their intrapreneurship?

Marketing personas for non-profits: design and implementation

Why personas?

Everyone thinks they know who their audience is but without data, it’s just a guessing game. Building up a detailed picture of your audience is a vital stage of marketing planning and potentially even more important for the voluntary sector – by getting a firm grasp on who your audience is and what you want to achieve from your non-profit’s digital communications, you can tailor your message to resonate better with potential donors and volunteers.

Despite its importance, at a recent free training session for charitable organisations we found that many fundraisers lacked the knowledge needed to create solid research on which to base their marketing decisions, and so, during the workshop, we introduced them to personas.

Personas are not new in marketing; in fact they have been around since 1994 and since then many digital marketers have used them to gain better insight into their users. With 56% of charities reporting that they needed training to maximise the potential of digital, however, it perhaps isn’t surprising that even charitable organisations that operate heavily online did not know what they were or how to make best use of them. In response to this I’ve written this guide to personas specifically for non-profits.

A persona is a “fictional character that communicates the primary characteristics of a group of users, identified and selected as a key target through use of segmentation data”.

By using personas in their marketing planning, our attendees were able to understand and adopt the cognitive frameworks of their supporters and concentrate on designing content to fit their need states. By referring back to these reference points, they are able to make sure that the content created is actually read and found useful by supporters, and that it helps guide them through the decision-making process and donor funnel.

Personas are a fantastic tool to create a well-rounded view of your charity’s market segments to not only help improve your brand messaging to these audiences but also, thanks to their transparency, to help you get internal buy-in from the many stakeholders from within the organisation.

Creating the personas

A marketing persona can be a complex document (especially when a large number of stakeholder groups are involved) or it can be as simple as this example highlights. Either way, below are six simple steps that can help you put together personas for your own specific donor segments using readily available data:

Collect your existing data

To create marketing personas for donor group segments, start by pooling the data you already have, collating all available qualitative and quantitative information about those who have already interacted with the brand. This is a great place to begin, as there is no doubt a tremendous amount of material readily accessible, from recent event sign-ups and newsletter subscribers to basic information from your CRM system.

If you are not yet collecting data on your charity’s events, attendees and donors, start doing so. When collated, this information can be incredibly useful in understanding your current evangelists.

Use your social networks

When searching for demographic data, look no further than social networks (and no… I don’t mean Klout scores). People freely volunteer their demographic information on social networks due to their open privacy settings, which allows marketing tools such as Facebook Insights to access a host of data instantly, all fine-tuned to your specific audience. Logging into your Facebook page will give you a whole host of information about the community already engaging with your charity, such as age, gender, location and language.

Another good tool to mine persona data from social media is Demographics Pro which offers further information based on followers of your Twitter accounts.

Make use of website data

There are many places you can go to get further data on your web users; for example, Quantcast and Google’s AdPlanner allow you to gather demographic information based on the advertising profiles of websites. This is especially potent for those who have a niche target market that regularly frequents particular online meeting places or reference websites; for example, my local charity Ty Hafan could look to related sites such as that for World Hospice Day.

Drill deep down into your own website analytics data too. Take great care to look into metrics such as social media traffic and organic keyword performance to identify intent, but also pay close attention to internal search, as this may offer clues about behaviour or missed content opportunities. Look for commonalities that can help back up the insight you have already captured. For example: you may believe that your current donors consist of young digital natives, but if your analytics show a distinct lack of mobile and tablet activity there may be cause to review your hypothesis.
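
As a rough sketch of that mobile check, assuming you have exported sessions by device category from your analytics package to a CSV (the column names below are assumptions; adjust them to match your export):

```python
# Rough sketch: check the mobile and tablet share in an exported analytics
# report to test the "young digital natives" hypothesis. Column names are
# assumptions; adjust to match your export.
import csv
from collections import Counter

def device_share(csv_path):
    sessions = Counter()
    with open(csv_path, newline="") as report:
        for row in csv.DictReader(report):
            sessions[row["Device Category"]] += int(row["Sessions"])
    total = sum(sessions.values())
    return {device: count / total for device, count in sessions.items()}

# print(device_share("sessions_by_device.csv"))
# A very low mobile + tablet share would be a reason to revisit the persona.
```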

If you know how your donors prefer to find information online, whether that is via search, social or other means, you can also make yourself present in those areas and, using your improved knowledge of the target audience, work on establishing the charity within related communities.

Ask the audience

Further expand your research for more qualitative data on your target market and gain more insight into the decision-making process of donors by gathering customer feedback.

Getting responses from your current audience on their feelings towards certain social issues (it helps if these issues are related to the non-profit) can give you much more information on the more “touchy feely” elements of persona creation, highlighting as they do their current mind-set which, when combined with your raw data, helps give more of a narrative to your personas. This is particularly useful to charitable organisations as it offers a much more natural way of illustrating key insights to key stakeholders outside of the persona development process and the project. For example, you can create extended descriptions to personify the donor segment, making it easier to explain your marketing decisions to others by asking: “Would ‘Donor Persona A’ relate to this?”

It is important to note however that this qualitative information must still be substantiated with hard data – don’t forget that outside influences and biases might skew feedback responses.

Pull it all together

Using all the data gathered you can begin to piece together a set of marketing personas that blend all of your research into a series of documents, each focused around a single personification of a market segment. The content and complexity of these documents can vary from project to project depending on the level of insight needed but, if very in-depth, can become quite detailed, including:

  • Age
  • Educational level
  • Social interest
  • Job status
  • Typical work experience
  • Main information sources (TV, web search, social media, etc.)

It is important to understand that a marketing persona does not reflect a single person. It is a hypothetical representation of the behaviour and motivations of a group of similar people, often captured in a one- to two-page description to make the persona a realistic character.

With a completed persona you have a real (though hypothetical) person you can imagine, understand and plan around, making it easier to predict how they might act under any given situation and, importantly, how they will respond to certain stimuli from your campaigns.
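
A lightweight way to keep those fields consistent across segments is to capture each persona as a simple record. This is purely illustrative: the fields mirror the list above and the sample values are invented.

```python
# Illustrative persona record mirroring the fields listed above.
from dataclasses import dataclass, field

@dataclass
class Persona:
    name: str
    age_range: str
    education_level: str
    social_interests: list = field(default_factory=list)
    job_status: str = ""
    typical_work_experience: str = ""
    main_information_sources: list = field(default_factory=list)

donor_persona_a = Persona(
    name="Donor Persona A",
    age_range="25-34",
    education_level="Undergraduate degree",
    social_interests=["local community", "children's hospices"],
    job_status="Employed full-time",
    typical_work_experience="Early-career professional",
    main_information_sources=["web search", "Facebook"],
)
```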

Keep refining as more data becomes available

This list is far from comprehensive and, while it does not guarantee success for your charity, it does give you a basis from which to develop well-researched personas built on real market data. Remember, though, that it is important to keep persona profiles up to date by adding new data from alternative sources as they become available, as well as removing any traits that can no longer be backed up.

In creating personas, and gaining a more detailed understanding of donors, you can better divide marketing budgets, evaluate opportunity cost and minimise wastage within your campaigns.

How do you currently identify your target market? Do you use a similar technique to highlight audiences or do you use a less focused digital marketing approach?

This post was originally written for the Guardian’s Voluntary Sector network. First published on 29th July 2013 here. Image credit: elontirien

Penguin 2.0 – Everyday It’s Shuffling!

Google Shuffle
For the past few days I have been noticing something very strange both in my own data and within the rank tracking software that we use at Box UK. Data was fluctuating massively each time a test was run and rankings were sporadic to say the least. I had spoken to colleagues and friends within the search industry about the issue but it seemed to be a bit of an anomaly until I came across a tweet by the one and only Dr Pete:

Rankings and Data Centre Fluctuations

The latest Penguin update (2.0, or version 4) has just been released and is said to impact around “2.3% of English-US queries to the degree that a regular user might notice.” Matt Cutts also mentioned that “The change has also finished rolling out for other languages world-wide. The scope of Penguin varies by language, e.g. languages with more web spam will see more impact.”, which would seem to rule out the staggered roll-out hypothesis, yet I’m not totally convinced.

Whether this is a planned staggered roll-out or a roll-back to previous states is unknown, but either could offer an explanation for the major SERP fluctuations the UK has seen, with results changing frequently throughout the day depending on the data centre serving them.

I suppose the main thing to take away from this is that Penguin 2.0 has only just happened and few (if any) within the industry know the whole story surrounding this update. There have been hundreds of SEO experts outlining how to recover from or take advantage of this update, but all I can say is: take everything you read with a pinch of salt in the next few days and weeks, and test for yourself. There be dragons…

Has anyone else noticed fluctuations in their search environment? Care to share any other theories on the matter? Let me know below.

How recent changes to search will help marketers with brand messages

Back in 2001, UK computer scientist and inventor of the World Wide Web Tim Berners-Lee wrote an article in Scientific American about his vision of the semantic web, in which he described a web that was accessible and understandable to both humans and machines.

Search engines have made dramatic changes in the past few months, adopting some of the ideas of the semantic web to create a new way of showcasing people and brands in search. The addition of Authorship and the Knowledge Graph has made huge ripples within the search industry, but many digital marketers are still yet to realise their full potential.

Allowing digitally aware brands to display author profiles next to their webpage within Google, enhanced ‘Authorship’ listings receive a higher click-through rate than more traditional results due to their improved visibility and perceived trustworthiness. Authorship is also one of the first steps towards a potential algorithm change called AuthorRank, which would calculate a piece of content’s relevance for searchers not only based on quality, but also audience size and the author’s authority across the web. In essence, ensuring good audience engagement with high quality content attributed to a trusted author will become hugely important for ranking well within search.

Currently, about 17% of Google queries include at least one instance of author verification within the first 100 search results, but I expect this number to rise rapidly over the next few months as Google+ begins to become more integrated with search.

The Knowledge Graph is a database of more than 570m of the most searched-for people, places and entities online, including about 18bn cross-references. Using their vast bird’s-eye view of the web together with structured machine-readable data, Google supplements search results with factual information in the form of an information panel, adding value to the user’s experience.

For instance, when you search for “How old is Tim Berners-Lee?” Google identifies, from your raw text query, an entity against which to cross-reference its Knowledge Graph data. In this example, it understands that you’re referring to the inventor of the web and returns the corresponding information.

At the moment, these results are reserved for only noteworthy people, places and things. This is, however, about to change. Google patent blogger Bill Slawski has identified a recent patent application from the search giant exposing plans to widen the scope of the Knowledge Graph to include businesses, just a few days before Google announced new ways of highlighting brand logos to its crawlers.

Bill noted: “In the future we might get either a local business listing and/or a corporate listing, depending upon where we might be located, with a disambiguation set of links based upon informational intent.”

This new development opens up a huge opportunity for digital marketers to enhance their presence within the search results, influencing what may be the first piece of brand messaging their customers see and driving increased traffic and sales.

It makes sense that publishers of content would look to control their brand message on the most visited platform on the web and with the recent buzz around content marketing, the growth of brand authors will only accelerate.

With the expansion of search into an increasingly full-featured experience and as digital marketing budgets swell trying to keep up, I predict that Authorship and the Knowledge Graph will become a huge part of future campaigns. In the continually evolving digital industry, it’s vital that as marketers we adapt and evolve our strategies accordingly.

Blogger Outreach: A Guide to Guest Posting

Blogger Outreach Guide

Guest blogging offers a great means of building up your SEO with quality links and offers both parties a win-win situation. The blogger gets great content that can be monetised and you get those all-important links for your post-Panda SEO campaign. What’s more, by building lasting, long-term relationships through a good blogger outreach plan, you can even generate further link building opportunities in the future.

Continue reading

Klout Score: You Are Not A Number

What’s in a Klout Score?

Since it burst onto the social media scene in 2009, Klout has quickly become popular as a way of measuring a user’s influence across many social networks, primarily focusing on Twitter, Facebook and Google+. The general idea is that Klout pulls in a user’s social media data from the services it tracks, runs that info through its algorithm, and spits out a number between one and one hundred: your Klout score, a measure of your social media influence.

Klout Logo

Continue reading

Android SEO Apps & Tools to Supercharge Your Tablet

So you’ve got yourself a tablet for Christmas, and you’re thinking about how it can make your portable life more useful. Well, there are lots of little jobs in SEO. The problem with these little jobs is that they take up your free time, but now, thanks to some great Android SEO tools, you can take your workstation with you everywhere you go and get some work done while on the move.

Continue reading

Google Disavow Tool – Has Negative SEO got a new weapon?

I haven’t written as much as I would have liked lately but a thought that has festered has drawn me back to the drawing/writing board, and that is Google’s latest addition to the Webmaster Toolset – the Disavow Links Tool.

Twitter has been awash with new blog posts about the new Google Disavow Tool ranging from quick updates to full-blown methods of using the new toy to its fullest extents. But there is another area that has me puzzled…

Google has a problem with identifying link spam. It’s evident to many of us when we see some of our competitors’ backlinks, and not at all surprising considering the mammoth task it is to police the web.

So it should be fairly obvious that they would use any data that is available to identify spammy websites that Google clearly don’t want cluttering up their search engine (and server space for that matter).

Enter the Disavow tool.

I completely agree with the general consensus that this tool will be a great way to keep a link profile looking squeaky clean, which many webmasters and SEO practitioners will find of great use. But I can’t help but think that it could be very dangerous in manipulative hands.

To clarify, I’m not just talking about hacked Webmaster Tools accounts; I’m talking more along the lines of webmasters disavowing a competitor’s domain to try to get it positioned as a spammy site in the eyes of Google.
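
For context, the tool works off a plain text file uploaded through Webmaster Tools, with one URL or domain-wide entry per line and “#” for comments. A small sketch of generating such a file (the domains and URLs below are placeholders) might be:

```python
# Sketch: write a disavow file in the format Webmaster Tools accepts:
# one URL or "domain:" entry per line, with "#" comment lines.
# The domains and URLs below are placeholders.
spammy_domains = ["spammy-directory.example", "link-farm.example"]
spammy_urls = ["http://blog.example/paid-links-page.html"]

with open("disavow.txt", "w") as disavow_file:
    disavow_file.write("# Links I do not want counted towards my site\n")
    for domain in spammy_domains:
        disavow_file.write(f"domain:{domain}\n")
    for url in spammy_urls:
        disavow_file.write(f"{url}\n")
```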

Dr Pete answered a similar question on his Moz post:

“Could they compile all of these lists across thousands of sites as a signal for which sites might have link problems (especially big directories)? Absolutely, they could. I don’t think that’s their ultimate goal or that they’re going to do that anytime soon, but it’s certainly possible.” – Dr Pete

Now, although his answer didn’t suggest the data would be used in the algorithm, it does seem a little strange that they would not make use of such data.

That would mean large-scale blog network owners could potentially have a new money-making method: negative SEO using the disavow tool.

What are your thoughts? Do you think this data will be used in the algo? Have you used the tool yet? Drop me a comment, I’d love to hear your thoughts!