Category Archives for Search Engine Optimization

Panda: The Google Algorithm Change That Everyone is Talking About

Yarn Panda

It looks so innocent ...
Flickr Image by TheBrassPotato

I’ve been asked by a couple of people why I haven’t written about the Google Panda algorithm change, which was a major event in the SEO world. I like writing useful articles, and when practically everyone in the search industry (not to mention the mainstream media) has written about it, what more could I contribute? Actually, I have written about it, but as a guest author.

From the Huffington Post, to the Wired interview with Matt Cutts and Amit Singhal, to every SEO pundit, there is no shortage of reading you can do about Panda. So instead of getting into the weeds like everyone else, I’ll write about it from a higher-level perspective for the non-search geek.

For steps to take, check out my guest post on Panda.

Continue reading

Long Tail Keywords of Search … Video

70 percent of search queries are classified as long tail searches: those specific, multi-word phrases that are easier to convert on and easier to rank for.

This video has some interesting statistics in it. Eric calculates that it is four times easier to rank for a long tail term than for a head term, and that long tail traffic converts at least twice as well.

Listen as Ralph Wilson interviews SEO expert Eric Enge on ‘How to Leverage the Long Tail of Search’.

Avoid the Duplicate Content Penalty from Google

Here’s a question I got today…

I want to submit my recipes to a certain site. I’ve noticed that the big, well-known recipe sites have submitted their recipes to this site as well. Wouldn’t this cause duplicate content issues?

Well, yes, any content that is duplicated across domains falls under the definition of duplicate content and presents Google and the other search engines with a duplicate content issue (see below for the definition of duplicate content). But the real question being asked here is: will it adversely affect my site?
Continue reading

WordPress SEO plugins to improve bounce rate

One of my goals has been to address my high bounce rate on this blog, which has been above 80%. To do that I’ve been working on features that increase my website stickiness.

Bounce Rate is an SEO factor

Why would I want to do this? Let’s put the SEO factor aside for a moment. As a blogger, you of course want people to get value from your blog, read several posts, and return. A high bounce rate means that most of your visitors viewed only the single page they originally landed on and then left your blog. Of course, if the article answered their question completely, that would be a good thing, even if they left without browsing further. This is more difficult to measure, but you can look at time spent on the site as another clue.

Most SEOs would agree that Google and the other search engines take bounce rate into account. Google might be looking at Google Analytics for the bounce rate, but even if you don’t have Google Analytics installed, Google and Bing can collect bounce rate data by looking at how quickly the user returns to the search engine results page and clicks on the next result. A quick return and a click on the next website in the list mean the user didn’t find what they were looking for on your website.

So from both the SEO and user engagement / conversion perspective you should care about your bounce rate and website stickiness.

Continue reading

The rel nofollow tag and page rank sculpting

Updated: May 25, 2012
I see a lot of outdated information about the rel nofollow tag, particularly a lot of advice to use it for PageRank sculpting. That might have been useful at one point, but it hasn’t been for over a year now.

What is the nofollow tag?

The nofollow tag is actually a value you add to the rel attribute of an href link to tell Google not to pass PageRank through that link. It is typically used for comments on a blog (to discourage comment spam) and for sponsored links. By adding it to your href HTML code you are telling Google, “I’m not sure I trust this site and I don’t want to pass my link juice over to it.”
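For example, here is the same link with and without the attribute (example.com is a placeholder URL):

```html
<!-- A normal link: passes PageRank to the target -->
<a href="http://example.com/">A site I vouch for</a>

<!-- The same link with rel="nofollow": tells Google not to pass PageRank -->
<a href="http://example.com/" rel="nofollow">A sponsored or untrusted site</a>
```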

Continue reading

Check Backlinks with the Google link: Command?

One of the things you often need to do as an SEO is look at the backlink profile of a website. This can be useful for seeing how your competitors have acquired backlinks, and, as we all know, quality backlinks are important for your website to rank well. Yahoo Site Explorer’s linkdomain command has been a way to do this in the past, but it has grown unreliable and misleading. Alternatives are available, for example from SEOmoz and Majestic SEO, but you need a paid account to get the full benefits.

So I was excited to hear Matt Cutts mention in a recent video that, although Google had historically limited the number of backlinks the link: command returned due to storage constraints, it is now possible to get the full set of backlinks.

Continue reading

16 Blog Directories – The Good, Bad and the Ugly

I’ve seen several “top blog directories to submit your blog to” articles; however, these lists are often light on details or out of date. Here are 16 blog directories that I took a look at. Like regular web directories, some blog directories will only list you for a fee, and still others request a reciprocal link before they will list you. Some are rather sneaky about it: you don’t figure out that you have to reciprocate or pay until you are a step or two into the submission process.

That being said, just as with web directories, it might be worth paying for a listing in some of these blog directories; however, that analysis (choosing which ones to pay for) is for another day.

For some of these you should be prepared to create an account and choose a category that your blog belongs to. Some ask for a full profile. Some have validation/ownership verification processes. It WILL take more time than you expect. Many of these will have a human review the submission before publishing it. That’s OK; in fact, it is good, as Google looks more favorably on directories that have editorial review.
Continue reading

Use Excerpts to Avoid Duplicate Content Problems

WordPress is an extremely flexible platform. One aspect of its flexibility is that it provides many different sorts of aggregate pages: you can choose to show your posts chronologically, grouped by your tags, and/or grouped by category. And of course the home page shows a list of your posts unless you have chosen to show a static page instead. However, having all these different ways to show the same content can be seen as duplicate content by the search engines.

Unfortunately, duplicate content in WordPress is a rather complex topic, but in this post I want to focus on just one aspect that has a relatively straightforward solution. But first let’s understand the problem better. When I talk about duplicate content, I am talking about content that is the same across multiple pages of your site; this goes beyond having duplicate pages. Google and the other search engines essentially don’t want to see the same block of content on multiple pages, which is precisely what you have when your category, tag, home, and archive pages show your posts in their entirety.
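One simple illustration of the excerpt approach is WordPress’s “more” tag. Assuming your theme uses the standard the_content() template tag on listing pages, everything after the more tag is hidden on the home, category, tag, and archive listings, so each of those pages shows only a short teaser instead of the full post:

```html
<p>This opening paragraph appears on the home, category, tag, and archive pages.</p>
<!--more-->
<p>Everything after the more tag appears only on the single post page, so the full
post content exists at just one URL.</p>
```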

Continue reading

Why You Need Web Directory Listings

SEOs routinely recommend that you list your site with a web directory such as the Yahoo directory. This is different from submitting your site to the search engines (which is usually unnecessary). Submitting your website to a web directory is a great way to jump-start your link building.

Free vs. Paid Web Directories

I would love to be able to tell you that there are some great free web directories out there; unfortunately, at least among general web directories, you get what you pay for. For certain niches you might be able to find a good free directory that is industry specific, but among general web directories, the free ones are not of high quality.

Criteria to look at

  • The easiest thing to look at is the PR (PageRank) of the directory site. Yahoo and are not cheap ($299 a year), but they are PR 8 and 7 respectively. If this is out of your budget, there are some good second-tier directories that are much cheaper.
  • It is also important to determine the page your website is likely to be listed on. What is its PR? What is the PR of the parent page? Getting listed in some of the lower-quality directories often means your listing gets buried on a page that is unranked and rarely crawled.
  • Google and the other search engines tend to devalue links that didn’t involve a human making a decision to create them. So look for an editorial policy that does not guarantee a listing. Yes, you take the risk that your listing won’t be approved or will get listed on a page you didn’t choose, but this editorial oversight is what gives the link its juice.

Next: A list of web directories

Search Engine Meta Tags – no index

When we first build a website, the thought of telling Google and the other search engines not to spider a given web page seems counter-intuitive: why would anyone want Google to not spider their website? (Well, except when you are Rue La La.)

Here’s one reason. A more sophisticated website might have a login page or registration page. Often these pages shouldn’t be indexed, as they don’t add value for ranking for keywords. Compounding the issue, in one case I looked at, the registration page was manifesting as many registration pages because the site was tacking a return URL onto the query string (so that after registration the user would be returned to the calling page), creating duplicate content.
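To illustrate with hypothetical URLs, each of these would be crawled as a separate page even though they all render the same registration form:

```
http://example.com/register
http://example.com/register?return=/articles/seo-tips
http://example.com/register?return=/about
```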

If you have many URLs that all point to the same page, that is known as duplicate content (this is different from duplicate content across many websites … and worse) and is definitely to be avoided. Each site gets limited link juice and a limited spider crawl budget; you don’t want to waste either on yet another version of a page the spider has seen before.

So to tell the spiders you don’t want a page to be indexed, you put the noindex meta tag into the HTML source code (between the opening and closing <head> tags) for that page.

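The tag itself is a standard robots meta tag; a minimal example:

```html
<head>
  <!-- Don't index this page, but do follow (pass link juice through) its links -->
  <meta name="robots" content="noindex, follow">
</head>
```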

Why the follow? So that the link juice from external incoming links and internal links can pass through to the links on the page you are noindexing. Otherwise you are creating a dead end that stops the link juice from passing through. The registration page might not be important, but it might have links to articles that are.

I wanted to point this out explicitly, because if you search on “meta tag no index” you will find lots of examples of “noindex, nofollow”. Lindsay Wassell makes a compelling case for the right use of this tag in her article and explains why using robots.txt instead is not a viable alternative.

More on the noindex meta tag.