Google Authorship – getting it right


by Sally Kavanagh

Google+ is something to be taken seriously.  It’s a Google product and it makes sense that Google will do everything it can to promote its use – including using it as a signal when determining search engine rankings.

Author rank is linked to your Google+ account and is definitely worth developing.  Anything that increases your authority and credibility as a content creator, and the credibility of the content you have created, has to help the visibility of pages associated with you as an author.
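
For anyone setting this up, the usual method at the moment is the rel="author" markup pointing at your Google+ profile. Here is a minimal sketch – the profile ID below is a placeholder, and the exact requirements are in Google's own documentation:

    <!-- In the <head> of each article page, pointing at the author's Google+ profile: -->
    <link rel="author" href="https://plus.google.com/112345678901234567890/" />

    <!-- Or as a visible byline link in the page body: -->
    <a href="https://plus.google.com/112345678901234567890/" rel="author">by Sally Kavanagh</a>

For authorship to be recognised, the Google+ profile also needs a reciprocal link back to the site in its 'Contributor to' section.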

But managing author rank is not the easiest of tasks.  Danny Sullivan has written an excellent article on some of the pitfalls and issues of Google author rank – it's well worth bookmarking as a reference.

Social metatags

On a related topic, here is another excellent article worth keeping to hand.  This one is by Cyrus Shepard and looks at social meta tag templates.
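
To give a flavour of what those templates contain, here is a minimal sketch of the two most common sets – Open Graph tags (used by Facebook and Google+) and Twitter Cards – with placeholder values throughout:

    <!-- Open Graph -->
    <meta property="og:title" content="Page title" />
    <meta property="og:description" content="Short description of the page" />
    <meta property="og:image" content="http://www.example.com/image.jpg" />
    <meta property="og:url" content="http://www.example.com/page/" />
    <meta property="og:type" content="article" />

    <!-- Twitter Card -->
    <meta name="twitter:card" content="summary" />
    <meta name="twitter:site" content="@yourtwitterhandle" />
    <meta name="twitter:title" content="Page title" />
    <meta name="twitter:description" content="Short description of the page" />
    <meta name="twitter:image" content="http://www.example.com/image.jpg" />

Cyrus's article covers the full set and the variations, so do check it for the details.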


Hummingbird and keyword research


by Sally Kavanagh

Hummingbird, Google's algorithm update that came online in August 2013, appears to be much more fundamental than many previous updates, such as Penguin and Panda.  Rand Fishkin's Whiteboard Friday of October 18th gives some very interesting ideas on how Hummingbird will affect keyword research – or, more specifically, how Google is now dealing with keywords and therefore how we, as SEOs, need to deal with them.


Rand’s Whiteboard Friday can be found at http://moz.com/blog – highly recommended!


Social media sizing infographic


by Sally Kavanagh

I spotted this on Visual.ly – always useful to have all the sizing information for social media images in one place.

Social media sizing infographic



To follow or not to follow


by Sally Kavanagh

The way in which the nofollow tag is being used seems to be getting silly.

When it was first introduced, it was intended to prevent the abuse caused by spamming blog comments with links – a very good idea, although my blog is still bombarded with spam comments if I don't take action to prevent them.

Then it was used to prevent paid links – links from adverts etc. – from passing link juice. Again, quite sensible and reasonable, but it is beginning to get difficult. Surely a link from a reputable directory that vets all its listings should pass PageRank. No low-ranking spam site is going to either pay (if it's a paid directory) or pass editorial review, so surely such a link is a legitimate SEO signal that the search engines could use.
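
For anyone unfamiliar with the mechanics, nofollow is just an attribute on the link itself – a quick illustration with a placeholder URL:

    <!-- A normal link, which passes PageRank: -->
    <a href="http://www.example.com/">Example Widgets</a>

    <!-- The same link with nofollow, which tells the engines not to pass PageRank: -->
    <a href="http://www.example.com/" rel="nofollow">Example Widgets</a>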

Today I read a very scary headline in an email shot from Econsultancy's Daily Pulse:

“Has Google really just killed the PR industry?”

The article references a recent update to Google's Webmaster Tools guidelines.

The relevant section reads:

Additionally, creating links that weren’t editorially placed or vouched for by the site’s owner on a page, otherwise known as unnatural links, can be considered a violation of our guidelines. Here are a few common examples of unnatural links that violate our guidelines:

•   Links with optimized anchor text in articles or press releases distributed on other sites. For example:
There are many wedding rings on the market. If you want to have a wedding, you will have to pick the best ring. You will also need to buy flowers and a wedding dress.

Now does that mean that if I write a press release I should not use keyword anchor text? Surely if I am creating a link to a page on my client’s site, then it makes sense to use anchor text to tell my reader and the search engines what the page is about. And Google has said that links in press releases do not benefit rankings –  https://productforums.google.com/forum/#!topic/webmasters/O178PwARnZw/discussion

We are in danger of getting to the point where what would have been considered ‘good practice’ until recently is now branded as unnatural link building!

So perhaps we should all start optimising for 'click here' and 'contact us'; then Google will realise that no one is bothering with the meta keywords tag, start ranking sites on that and on keyword density, and everything will have come full circle.



Baidu – is it opening up to the UK?


by Sally Kavanagh

There is no doubt that Baidu, the Chinese-language search engine, is making moves westwards.  It seems Baidu's revenue has fallen in recent years and this has been cited as a possible reason.  On the other hand, you could say it just makes sense for Baidu to capitalise on its technology and to exploit its access to its 70-80% share of the huge Chinese online market.

At the end of February, Baidu announced its English-language website for developers, and it looks like this is part of Baidu's effort to maintain its dominance in the Chinese mobile market in the face of competition from other Chinese companies such as Qihoo.

Another development is Baidu's approach to leading brands in the West, inviting them to partner with Baidu as advertisers.  A launch event is being held in London on March 18th.

So what does this mean for UK businesses looking to gain visibility in the Chinese search engine?  It certainly looks as though Baidu is planning to open up more to the English-speaking world.  Currently Baidu is a Chinese-language-only engine: sites need to be translated into simplified Chinese, and it is hard, if not impossible, to get rankings if the site is not hosted in China.

Baidu does offer paid listings – the PPC model.  It is not as straightforward to sign up as with Google or Bing, and there is a minimum spend of $2000 a month, but it might offer a very interesting way of testing the Chinese market before investing larger sums on the ground.  More information at http://www.chinasearchint.com/en/why-baidu/getting-started

Baidu could prove an interesting area for SEO in the near future, as China's rapid development means it is looking more and more to expand into Western markets.


Google Analytics accounts and profiles – why the difference matters


by Sally Kavanagh

Having taken on a couple of new clients in the last couple of weeks, I have again hit the problem of confusion between accounts and profiles when setting up Google Analytics.

It is vitally important that every domain has its own Google Analytics account – not just its own profile within an agency’s account.

The reason is that it is not possible to give anyone – the client, for example – full administrator access to the data from their site if it is just a profile.  Administrators have access to all profiles within an account.  So if an agency has 50 clients on one account, and any one of those clients were given full admin access to their data, they would have full access to the other 49 clients' data!
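
You can tell which account a site reports into from its tracking code. In the classic asynchronous snippet below (the web property ID is a placeholder), the middle number identifies the account and the final digit the property within it – so if your site's ID shares its middle number with an agency's other clients, your data is sitting in the agency's account:

    <script type="text/javascript">
      var _gaq = _gaq || [];
      // 'UA-1234567-1': 1234567 is the account number, -1 the property within it
      _gaq.push(['_setAccount', 'UA-1234567-1']);
      _gaq.push(['_trackPageview']);
      (function() {
        // Standard asynchronous loader for ga.js
        var ga = document.createElement('script');
        ga.type = 'text/javascript';
        ga.async = true;
        ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
        var s = document.getElementsByTagName('script')[0];
        s.parentNode.insertBefore(ga, s);
      })();
    </script>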

The importance of admin access

So why would a client want admin access to their data? For one thing, I would say they have a right to it if they want it. The data is ultimately held by Google, but it is the client's site, so they should have access to everything there is to know about it. But there are two common problems that arise if full admin access is not available.

The first is that the client or their service provider wants to do some analysis on the site and will need to see how the account is set up, configure goals and filters, etc. This is the problem I hit most frequently when I want to provide a stats report for a new client, or ongoing monthly reports for existing clients. In one recent example, I have been looking at an ecommerce site that I know is not performing well, and the client wants to know why. Not only has Google Analytics not been configured to track ecommerce, it has not even been set up to track transactions in a basic way, for example by tracking the thank-you page as a goal. This means I have no information at all – such as traffic source, landing page etc – on the few transactions that have been made.
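
For reference, basic transaction tracking with the classic ga.js code is only a few lines on the thank-you page – all the values below are placeholders:

    <script type="text/javascript">
      // Record the transaction (classic ga.js ecommerce API)
      _gaq.push(['_addTrans',
        '1234',           // transaction ID - required
        'Example Store',  // affiliation or store name
        '29.99',          // order total - required
        '5.00',           // tax
        '3.50',           // shipping
        'London',         // city
        '',               // state or county
        'UK'              // country
      ]);
      // One _addItem call per product in the order
      _gaq.push(['_addItem',
        '1234',           // transaction ID - must match _addTrans
        'SKU-001',        // SKU - required
        'Blue widget',    // product name
        'Widgets',        // category
        '29.99',          // unit price - required
        '1'               // quantity - required
      ]);
      // Send the transaction to Google Analytics
      _gaq.push(['_trackTrans']);
    </script>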

The second problem is very serious. If a client changes their web agency – the one holding the Google Analytics account – then there is no way of migrating the profile. The old agency can simply stop access to the data and there is nothing the client can do about it – either legally or practically. Even if the old agency wants to hand the data over to the client or the new agency, there is no way of doing it.

So how does this confusion between accounts and profiles arise? The big problem is that so few web agencies, especially small ones, understand Google Analytics. I have come across this profile problem many, many times and never has it been intentional. It is simply that web agencies do not understand that setting up a new profile for a new site is not the right way to do it.

Every site should have its own account. The only time this may not be true is if one organisation has more than one site. For example, one of my clients has what they refer to as their brochure site and – for historical reasons – their ecommerce site on a separate domain. Essentially it is the same site but split over two domains, so from an analytics point of view it makes sense to handle it in one account.



Google algorithm changes in 2012 and checking rankings


by Sally Kavanagh

As we all know, Google changes its algorithm frequently – small changes are reputed to happen on a weekly if not daily basis.  Wordtracker recently published a very useful little article summarising the major updates that occurred in 2012.  The description of how this affects checking rankings is particularly useful, especially the reference to Google NCR (no country redirect) if you want to see how rankings look in another country.
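
For anyone who hasn't tried it, the checks come down to a couple of URL tricks – these are the commonly cited ones, though Google does change its parameters from time to time:

    http://www.google.com/ncr                       (stops the redirect to your local Google, e.g. google.co.uk)
    http://www.google.com/search?q=keyword&gl=us    (gl= sets the country used for the results)
    http://www.google.com/search?q=keyword&pws=0    (pws=0 switches off personalised results)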

The full article is available on the Wordtracker site.

The main changes last year were an increase in the personalisation of search and the two big anti-spam updates, Penguin and Panda.

I have big issues with the personalisation of search, although I can see the reasoning behind it – delivering results aligned with the searcher's interests.  My concern is that this inevitably means manipulating results, which veers towards censoring them.  The aim may be laudable but the unintended consequences are not.

On the other hand, Penguin aimed to remove spammy sites, especially those with over-optimised links (remember Florida back in 2003 – nothing changes!), while Panda is designed to remove sites with very thin content.

The Wordtracker article is a good résumé of the 2012 updates and, more importantly, of how to deal with them.


Remarketing, Google and Privacy Issues


I attended a Google webinar earlier this week on remarketing, and very powerful it was. Just to make sure we are all talking about the same thing, remarketing is where you use your advertising – in this case AdWords – to get back in front of visitors who have already visited your website. So if visitor A has visited your site and then goes off and visits a site on Google's Display Network, your ad will appear. If you are looking for information and visiting lots of content sites that display Google ads, the ad may even follow you around.
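
Behind the scenes this is driven by a tag on your own site, which drops the cookie that adds each visitor to your remarketing list. A simplified sketch of the sort of tag AdWords issues – the conversion ID here is a placeholder, and you should always paste in the exact tag from your own account:

    <script type="text/javascript">
      /* Placeholder ID - AdWords generates the real one for your account */
      var google_conversion_id = 1234567890;
      var google_remarketing_only = true;
    </script>
    <script type="text/javascript" src="//www.googleadservices.com/pagead/conversion.js"></script>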

Now there is no doubt that remarketing is a powerful marketing tool. Every advertiser knows that the more times a prospective customer sees a name and becomes familiar with it, the more likely they are to buy from it. And so long as your website includes the correct privacy policy information, it is perfectly legal. But is remarketing entirely ethical?

Google's 10 points

Google's 10-point corporate philosophy says “You can make money without doing evil”, often misquoted as 'do no evil'. I certainly wouldn't say that remarketing is evil, far from it. It may be aggressive but not evil.

However, going right back to the beginning of Google, Paul Buchheit, the creator of Gmail, is reported to have come up with 'Don't be evil' as a sort of informal slogan for the new company. He also added – and I am quoting from Wikipedia here – that it was “a bit of a jab at a lot of other companies, especially our competitors, who at the time, in our opinion, were kind of exploiting the users to some extent”. Here I do think Google has (again) fallen short of its lofty ideals.

In order to work, remarketing must collect information about which sites you have visited and then deliver content, in the form of ads, based on that information. Technically your privacy policy should say what you are doing, but I very much doubt there is 100% compliance on this, and even if there were, what percentage of the public would understand the implications of dropping cookies etc.? Most internet users outside the online and advertising industries don't understand the difference between organic results and paid ads – what chance the finer points of cookies! To me that gets awfully close to “exploiting the users to some extent”.

What I find particularly interesting is how inconsistent we as webmasters, and the web in general, are about privacy. Earlier this year in the UK, there was a tremendous amount of hype about the implementation of the EU directive on privacy. Companies were becoming paranoid about the need to inform visitors about the use of cookies – including ones for shopping carts and anonymous stats reporting. But almost nothing was said about remarketing cookies and cookies used for other sorts of advertising.

Google, in my view, is also inconsistent. It promotes the use of remarketing and provides keyword data to AdWords advertisers, but declines to provide keyword data for organic search results on the grounds of privacy. I've never understood how it squares that circle.

Perhaps I ought to just add that I am happy to offer remarketing as a service to my clients – so long as they have the correct privacy policy in place. Another circle to square!


Where will Google Translate and statistics lead us?


I have been critical of the mighty Google in earlier posts – The dangers of search personalisation – but I'd like to talk about another side of the giant, one that is much more likely to bring benefits to mankind than the insidiousness, in my view at least, of search personalisation.

Hans Rosling presented a fascinating programme on BBC Four (available on BBC iPlayer for those fortunate enough to live in the UK) about The Joy of Statistics. The way Hans Rosling presented statistical information about world health made me realise that I need to up my game in reporting Google Analytics data to clients, but I digress.

Hans explained that Google Translate is a statistical product. I hadn't given much thought to how Google translates an item, but it is revolutionary. Earlier attempts at automated translation tools had relied on compiling rules and structures, in much the same way as a teacher might teach a person a foreign language. But this approach is beset with problems: for every rule and structure there are exceptions, for which further rules need to be constructed. So Google, with its vast experience in managing huge data resources, turned the problem on its head and treated translation as a problem in statistical correlation.

It all sounded very reminiscent of latent semantic indexing (LSI), which is at the heart of how Google 'reads' a page of content. I don't feel I need to understand LSI in order to use Google or even optimise a site, in much the same way as I don't need to understand the internal combustion engine to drive a car. But it does help to have some idea of how Google 'thinks', at least in outline, in order to understand what it is looking for and where.

There is no doubt that Google is an awesome thing, and the way I have always tried to get a feel for how Google reads a page is to look at a page of Arabic, about which I know absolutely nothing, and try to imagine working out its meaning using statistical correlations with the same material in English, which I do understand. Now I know that is not exactly – or even approximately – the way Google works, but I find it a useful metaphor.

It sounds remarkably like the task that the code breakers of Bletchley Park faced during WW2 and which accelerated the development of computing by Tommy Flowers and Alan Turing and so many others. Or the deciphering of the Rosetta Stone.

But back to Google Translate: by adopting a statistical method of machine translation, Google has the potential to translate any language into any other language. At the moment, Google Translate is used to translate copy found on the internet. But Google is also looking at speech recognition, again using statistical methods, and when this becomes reliable, then combined with Google Translate we will have the possibility of being able to talk to anybody in the world, irrespective of whether or not there is a common language.

Now being able to talk to anyone, anywhere is an awesome thought. I think it should lead towards greater understanding, world peace and harmony but I admit it is such a powerful tool that I don’t think I – or even Google – can envisage where it might lead and with what consequences.



The dangers of search personalisation




Search is getting more personal. Google in particular is striving to make the search experience as customised to each individual as possible. If Google knows I live in the south of England, then it assumes I am more interested in results from websites with a connection to this part of the world than to the north of Scotland. If I am looking for somewhere to eat tonight, then yes, it's very helpful. If I am looking for a major capital project for my business, then no – I want the best fit for my needs.

As someone working in SEO, I have of course been familiar with the increasing personalisation of search for some time. Good or bad for the searcher, it certainly causes issues when talking to clients and trying to explain that different people looking on different machines may see different results.

But it was not until I recently read a book by Eli Pariser called 'The Filter Bubble' that I really appreciated the full impact of personalisation. The internet has been heralded as opening up information in a way no other medium has managed to achieve, and for that we must all be grateful. Personalisation, though, is having the effect of funnelling what we see by delivering what the search engine knows, or thinks it knows, we are interested in. I thoroughly recommend reading Eli's book for an in-depth analysis; in my view he is in danger of becoming a little too concerned and even borders on paranoia as the book progresses, but he makes some very interesting points that we need to think about.

We all know that everything can be, and therefore is being, measured on the internet. It is also reasonable to accept that all this information is, or certainly could be, available to anyone willing to pay for it. So it is not unreasonable to assume that anyone who wants the information badly enough has access to it – and this is perfectly possible.

So here are a few scenarios that Eli highlights in his book and which made a particular impact on me.

Personalisation and social media

Scenario 1 surrounds social media. Let's say I come from a fairly poor neighbourhood and have lots of friends with dubious credit histories. I, on the other hand, have 'made good' without leaving my roots behind completely, and have an excellent credit history. Is it likely that financial institutions are going to ignore the company I keep and not take it into account when assessing me for a loan?

Personalisation and social mobility

Scenario 2 is about how difficult it might be for someone to make good from humble beginnings. Let's say from my less than privileged background I go to a modest university, though I am very bright. Recruiters looking to fill the most prestigious jobs are most likely to target Russell Group candidates, or Ivy League ones in the US. This has always been the case, but it is so much easier now to target them on social media. If I do not see an advert for that great job, there is no way I am going to get it. My application never gets submitted.

Personalisation and news censorship

Scenario 3 has perhaps the most serious implications for society as a whole. Increasingly, people get their news from the internet. If I go and buy a paper, I may dive straight for the back pages to read the sports news, but I am probably at least aware of the front-page headlines about phone hacking or the banking crisis. The internet, on the other hand, feeds me news based on what I interact with. So if I never show any interest in the banking crisis, gradually less and less news about it is put in front of me and gradually I become totally unaware of its existence.

Perhaps the biggest danger with personalisation is that it is insidious. As an SEO I am aware of it and can, if I choose, take some steps to mitigate its effects. But most people are totally unaware that, in effect, their information is being censored. And that is dangerous. In fact, I could argue that personalisation is the start of the end of the information age and threatens our very democracy. But that would be getting paranoid – wouldn't it?  Perhaps it does just enhance the search results.

Update – 24 Aug

Eli Pariser uploaded a video about his filter bubble idea a while back – it explains the concept very well.