Rand's New SEO Quiz…

Since Rand has included more than a few things that are a matter of opinion, and at least one thing where he is just plain wrong, take it for what it’s worth… it’s fun anyway.

Watch out for #3 and #17, and try to think like Rand. Read #72 carefully, because monitor dyslexia had me all confused on that one.

There was some kind of coding problem on #65, because I have the right answer (even according to Rand!) yet it's scored as wrong… so if you get 100% you didn't really… and if you don't get 100% (I scored 86%), you can just fix the code for your image and make yourself feel better, like this:

SEO Overlord – 100%

Take Rand’s Quiz

I, For One, Welcome Our New SEO Overlords

Matt Cutts & Google have sure stirred up a lot of mayhem by insisting that webmasters label paid links with "rel=nofollow." Their stated purpose is to create a "machine readable disclosure" that the links represent advertising.

Cutts has also added to the controversy by referring to past U.S. Federal Trade Commission (FTC) rulings on ad disclosures as justification for Google’s "nofollow plan." Apparently, there are other countries in the world that aren’t yet subject to U.S. laws and regulatory agencies.

The issue is clouded, as debates always are, by semantic quibbling and disputes over definitions. The most courageous (or stupid) thing to do in any divisive debate is take the middle ground, but I have nothing to lose either way.

In this article, I'll try to bring some clarity to the issue by framing the discussion of what a paid link actually is, explaining why Google's not going to win this one on nofollow, and wrapping up with some observations on what we can expect from the FTC if they do weigh in.

What Defines A Paid Link Anyway?

From the FTC’s perspective, defining a "paid link" isn’t going to be as important as defining "advertising." When you look at it that way, all that really matters is that some financial consideration is given for the link. It "helps" if the link is sold as advertising, and in each of these cases it is:

  • Pay-Per-Click (Adsense, YPN, etc.): This is clearly advertising, not even debatable.

  • Pay-Per-Action (affiliate links): Clearly advertising, not even debatable.

  • Advertorial (Paid reviews, "buzz" marketing): Clearly advertising, not even debatable.

  • Paid Placement On Page (text link ads): Clearly advertising, not even debatable.

  • Paid Editorial Review (Yahoo Directory): Clearly advertising, not even debatable.

In case you doubt that the Yahoo directory is advertising, riddle me this: why else would you pay them? To perform a site review? The last time I checked, I could get a much better site review from Kim Krause for only $1 more, and when she’s done she actually lets me see a written review.

If the site review were the product, then Yahoo would give you something – like a copy of the review. No folks… all paid directories are advertising. Yahoo has been selling their directory as an advertising opportunity. End of discussion.

Friends trading links, SEOs buying each other drinks, linking to your employer’s site from your blog, Chamber of Commerce membership and the like are just not going to get the FTC excited. End of discussion.

Stop trying to muddy the waters, everyone – we don’t need specious arguments about the definition of a paid link. The inconsistency in Google’s position is clear enough if you just accept the definitions above.

Because, as you may have noticed, Google seems to be perfectly OK with high-quality directories like Yahoo from a "paid link" perspective, even though these links are clearly advertising.

By relying on past FTC statements (on advertising disclosure) Google further weakens their case. If advertising must be disclosed as such (this is why the FTC would weigh in), then Google’s nofollow plan won’t work, because nofollow does not (and can not) explicitly mean "this is an ad."

What Disclosure Will Mean To The FTC

For text link ads (TLAs), plain text that says "Sponsored Links" above the links would probably be sufficient*.

That shade of gray apparently meets Google’s standards for disclosure, because it’s what they use to disclose the paid ads on their SERPs. Of course, if the paid ads are in a "top box" @ Google, the disclosure is way over on the right side, well outside of the searcher’s foveal vision, but let’s not digress into how "evil" all of the SE’s are in trying to "barely disclose" the ads that they sell.

I have no doubt that the FTC would frown on using "Sponsored Links" in an image (without equivalent alt text) because the disclosure would need to work with accessibility devices like screen readers. That’s about as far as Google’s going to get with the FTC on "machine readable disclosure."
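If you're wondering what "machine readable" could even mean here, the accessibility bar is pretty low: the disclosure has to survive with the images stripped away, either as visible text or as alt text a screen reader can speak. Here's a toy check in Python – my own sketch, not anything the FTC or Google has published, with made-up HTML and class names, and BeautifulSoup (pip install beautifulsoup4) as the only requirement:

from bs4 import BeautifulSoup

page = """
<div class="ads">
  <img src="sponsored-links.gif" alt="Sponsored Links">
  <a href="http://example.com/advertiser">Buy widgets</a>
</div>
"""

soup = BeautifulSoup(page, "html.parser")
ad_block = soup.find("div", class_="ads")

visible_text = ad_block.get_text(" ", strip=True)
alt_text = " ".join(img.get("alt", "") for img in ad_block.find_all("img"))

print("Disclosure readable without images?",
      "sponsored" in (visible_text + " " + alt_text).lower())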

After all, the FTC isn’t going to give a rat’s tail about the effect of paid links on Google’s organic results. Or at least, Google had better hope so, because if the FTC decided that organic listings are a form of advertising, that would put all of the search engines onto a very slippery slope.

I think we all understand why it’s important for Google to identify and filter paid links. I think we all understand that they have every right to filter the links from a site that doesn’t disclose in some form. But the nofollow plan is just plain bad.

If Google wants another bite at this apple, they better try to get it soon, and come up with a better plan… because one thing is for certain: there is no stopping them; the FTC will soon be here.

And I, for one, welcome our new hyperlink regulation overlords. I’d like to remind them that as a trusted blogger, I can be helpful in rounding up others who may have strayed from the true path… whatever that actually is.

* Disclaimer: I am not an attorney, this is not legal advice, it’s not even SEO advice, and I need a vacation… We’ll talk more soon.

Web 2.0 & SEO: Must We Piss In Every Public Fountain?

Update: This post has been nominated for a 2008 SEMMY award – vote today!

I started this project a couple months ago, researching "Web 2.0" and SEO. Let me begin by confessing that I have no good definition for "Web 2.0" – but there do seem to be some things that everyone agrees fit the Web 2.0 label.

This post is going to ramble all over the place, and if you’re looking for SEO instruction, wait for the newsletter. I did come up with some positive ways that we can use social networks and other "Web 2.0" stuff for marketing and SEO. That will be published next Friday.

For now, my research uncovered a lot more trash than treasure, and I have to vent a little first.

There was an Internet before the web…

In the beginning, there was Usenet, and it was good. Today, everyone calls it "Google Groups," and people think Google invented it. They didn’t invent it, but at least they saved a whole lot of it from vanishing.

Usenet was an amazing thing. All discussion, all in one place, for the entire Internet. One big global community. Part moderated, part unmoderated. But the ‘net got too big, AOL invaded Usenet, and we had to run somewhere else. So we had email discussion lists, and when the web got big enough to support it, discussion forums.

Email discussion lists are still alive and well. So are forums. But compared to the "worldwide community" of Usenet, they split us apart. Conversations at Cre8, WMW, IHY, HR, SEW, WPW took all kinds of directions. Forums became tools for personal agendas. Barry Schwartz created the SERoundtable blog as an attempt to bring the forums back together. We’ll never know if this could have worked, because it was overtaken by a bigger wave.

Bloggers started blogging like mad, and the conversations got even more spread out and harder to follow… but at least we knew whose personal agenda the blog served.

The blogosphere needed glue, and social networking was (re)invented…

"Social network" is another one of those weird terms that means too much, and therefore means very little without context. When I look at Digg, I see Slashdot. Was Slashdot the beginning of Web 2.0 (the beta?), is Digg really Web 1.0, or am I missing something? (probably the latter…)

Now that I’ve decided that Web 2.0 and Social Networking mean nothing, I still need to come up with something to talk about. So I’m going to broadly classify Web 2.0 into a few categories:

  1. Actual social networking sites, like Facebook, LinkedIn, Orkut, and MySpace. Individual users have their own profile, everyone makes friends with everyone, and there are "groups" that look a lot like discussion forums. However, because we can explicitly tag someone as a friend, as opposed to having actual friends that we actually know, it’s Web 2.0.
  2. Social bookmarking sites, like Delicious (sorry I’m not in the mood to remember where the dots go), Digg, and Sphinn. Everybody can tag stuff. Everybody can push links into the system. Users "vote" or don’t. Links that get added at the right time get seen by more people and get more votes.
  3. Socially created content, like Wikipedia. I like Wikipedia, so to me, Wikipedia is Web 2.0. If you ask me what Web 3.0 should be, it would be a version of the web where everyone can edit every website. That would be way cool, but difficult to build, so we may have to push the release date back a little.
  4. The blogs themselves, which seem to separate the cool Web 2.0 kids from the Web 1.0 dorks. You know, the dorks who are just busy running a web site and a business. Much of Web 2.0 is only relevant inside the blogosphere, which is not the web.
  5. Personal "publishing" platforms, like Blogger, Squidoo and Twitter. Now you can make "social" noise on the web without figuring out how to install WordPress. Maybe this is Web 1.5. Which would mean that Geocities "sites" were Web 0.5.

What do all of these have in common? There’s one thing I can think of. They’re all really cool and useful (with the possible exception of Twitter), and they’re all forced to deal with spam. Nofollow wasn’t invented in a vacuum. It was invented to help Web 2.0 deal with Spam 2.0.

Wikipedia, Nofollow, & The Tragedy of the Commons

PJ O’Rourke defined the "tragedy of the commons" perfectly, by giving one example: public restrooms. I’m going to go with a slightly different example.

Imagine that you live in a village. The villagers need water. It’s a long walk to the river. So some of the village leaders get together and decide to dig a well. They create a public fountain, and everyone can get water from it. It’s a wonderful thing, until the village drunk starts pissing in the fountain.

Now replace the fountain with Wikipedia, and the village drunk with SEOs… and you have a perfect picture of why Wikipedia had to nofollow outbound links. I knew prominent SEOs who actually bragged about how easy it was to spam Wikipedia, by having their employees create accounts, do enough minor edits (fix spelling, add citations) to become trusted editors, and then pepper the community encyclopedia with links to their clients. Nice.

Not a week passes without another invitation to join a "Digg Ring," requests to vote up a worthless article on Netscape, and even sillier stuff. Hey Dan, we’re all going to go piss in the public fountain, you wanna come?

Unfortunately, far too many people think that if they can just add a little more noise to the channel, they can gain a competitive advantage. It’s a shame that so many people can’t find truly creative ways to market their web sites. It’s a shame that there are so many who don’t want to add value to the web, or can’t figure out how… and it’s a shame that search engines can’t find better ways to filter the noise out. It’s a shame that so much of this spam actually works.

Web 2.0 needs work. I still think that social websites can become more spam-resistant, as Wikipedia has, by making some users more equal than others. I don’t know how you can do this without turning Digg or Wikipedia into another DMOZ, but I have to believe that it can be done. A lot of social sites let users "vote" on each others’ contributions, but as far as I can tell, none of them makes productive use of this feedback.

Anyway, if you want to learn how to spam Web 2.0, I’m not the one to teach you. Just because you can do it, that doesn’t mean you should. There are better ways.

We’ll talk more soon. In the meantime, if you must spam, do it on Twitter. That thing is a waste of electrons.

Sphinn Assassinated by SpamAssassin

After mentioning Sphinn in a recent newsletter as a great place to keep up with the latest buzz in search, I was surprised when AWeber sent me a report that several ISPs were blocking my newsletter based on the content.

After doing a little research I discovered that SURBL (a database which feeds several tools, including SpamAssassin) has Sphinn.com on a list of "spamvertised" domains. Apparently, after receiving enough SpamCop complaints about emails that happen to contain a domain name in the content, SURBL lists the domain – and from then on, any message mentioning it gets treated as spam.
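For the curious, SURBL is just a DNS blocklist: the filter pulls domain names out of the message body, tacks the list's zone onto the end, and does a DNS lookup. Any answer means "listed." Here's a rough sketch of that lookup in Python – the multi.surbl.org zone is the public combined list as I understand it, so check their documentation before trusting my memory:

import socket

def surbl_listed(domain, zone="multi.surbl.org"):
    try:
        answer = socket.gethostbyname(domain + "." + zone)
        return True, answer      # listed: the 127.0.0.x answer encodes which sub-lists hit
    except socket.gaierror:
        return False, None       # no DNS answer (NXDOMAIN) means "not listed"

for d in ("sphinn.com", "example.com"):
    listed, code = surbl_listed(d)
    print(d, "listed" if listed else "not listed", code or "")

That's the whole process – one DNS query, triggered by somebody else's complaints.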

I hate these SURBL guys. I really do. They haven't done a damn thing to keep me from getting a ton of spam every day, but they block legitimate emails without any investigation or notification. If you use SpamAssassin, I recommend configuring it to ignore this worthless blacklist; better yet, uninstall it and filter your emails at the client. It works better.

I "went Mac" a few weeks ago, and the amount of spam I have to manually delete is down to only a few a day. I get a false positive every few days on the Apple Mail client… which has been pretty much flawless after a couple weeks training for the Junk Mail filter.

Blacklists don’t work, and bad guys can exploit them to hurt the good guys, like Danny Sullivan & Sphinn.

Dynamic Linking & Nofollow – Practical Examples, Diagrams, + FAQs

In response to all the questions and comments I’ve received after my recent post on using nofollow with internal links, I’ve put together a few practical examples and a couple diagrams to better illustrate the concepts.

As I mentioned last time, slapping a nofollow on some of your internal links is not intended to remove pages from the index. In fact, what we’re trying to do is to get more pages indexed, by reducing the share of your site’s PageRank that flows to less important pages. I’ll begin with an example that illustrates this technique.

Moving “Overhead” Pages To The “Third Tier”

Close your eyes if it helps, but try to picture a typical eCommerce shopping cart site. You have a home page, several product categories, and products in each category. Your home page is the first tier in your site’s linking structure, the category pages are the second tier, and the product pages are in the third tier.

You also have several of what I call “overhead” pages on the second tier, like privacy policies, terms & conditions, shipping information, guarantees, contacts, price match promises, etc. It’s not unusual, in fact, to have more of these pages than you have product categories.

To make your users’ experience the best it can be, you probably have “run of site” links (on every page) pointing to all of the second tier pages.

The effect of this on the flow of PageRank should be obvious – the overhead pages on your second tier receive as much PageRank as your product category pages… and far more than the actual product pages. This is clearly an upside-down arrangement from an SEO perspective.

The diagram below illustrates a simple modification that moves your overhead pages down to the third tier – this will drive more PageRank to your product categories, which pass it along to your product pages.

[Diagram: prflow1.png – PageRank flow with the overhead pages moved down to the third tier]

As you can see, we’ve “nofollowed” the run of site links to the overhead pages. We have a single site map page (directly linked from the home page) that passes PageRank on to the overhead pages.

In this simplified diagram, I’m not showing you “one way” and “two way” links… and I’m ignoring the third tier, which would also have “nofollow” on links to the overhead pages.

This structure allows you to get your overhead pages indexed (so they can appear in site: searches) without giving them as much weight as your product category pages.
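If you'd rather see numbers than diagrams, here's a toy PageRank calculation in Python – the standard formula with a 0.85 damping factor, run on a made-up site with three categories and three overhead pages. It's a sketch, not a simulation of Google; only followed links go into each graph, since nofollowed links simply don't count:

def pagerank(links, d=0.85, iters=50):
    pages = set(links) | {p for outs in links.values() for p in outs}
    pr = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new = {p: (1 - d) / len(pages) for p in pages}
        for page, outs in links.items():
            if outs:
                share = d * pr[page] / len(outs)
                for target in outs:
                    new[target] += share
        pr = new
    return pr

overhead = ["privacy", "terms", "contact"]
cats = ["cat1", "cat2", "cat3"]

# Before: run-of-site links on every page point to the categories AND the overhead pages
before = {"home": cats + overhead}
for c in cats:
    before[c] = ["home"] + [x for x in cats if x != c] + overhead
for o in overhead:
    before[o] = ["home"] + cats + [x for x in overhead if x != o]

# After: overhead links are nofollowed everywhere; a site map (linked from the
# home page) is the only followed path into the overhead pages
after = {"home": cats + ["sitemap"], "sitemap": overhead}
for c in cats:
    after[c] = ["home"] + [x for x in cats if x != c]
for o in overhead:
    after[o] = ["home"]

for name, graph in [("before", before), ("after", after)]:
    pr = pagerank(graph)
    print(name, {p: round(pr[p], 3) for p in cats + overhead})

Run it and you'll see the category pages pick up a noticeably larger share in the "after" structure, while the overhead pages still receive a small share through the site map.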

What If The Overhead Pages Are A “Quality Signal?”

Some folks have expressed a concern that if search engines don’t find a “followed” link to these important overhead pages, they might consider your site to be of lower quality and rank your pages lower. I’ve never seen this actually happen, but I can’t say that this isn’t a legitimate concern.

If you are concerned about that, you can make some modifications, as shown in the diagram below. This time, I’ve shown the two-way linking relationship between the tiers, and added the third tier pages to the diagram.

[Diagram: prflow2.png – two-way linking between tiers, with the category pages' links back to the home page nofollowed]

In this structure, the home page passes PageRank to the second tier, which in a shopping cart site consists of overhead pages and category pages.

The overhead pages link back to the home page, passing some of the PageRank back, and they also link to the category pages on the second tier, passing some PageRank across.

The category pages don’t link back to the home page (they do, but the links are nofollowed), so more PageRank passes down into the third tier. The third tier pages link back to the category pages (and may crosslink, see below).

Mix & Match As You See Fit

Neither of the approaches I've illustrated so far is designed to remove pages from the index. The intent is to conserve the total amount of PageRank within the site, and simply redistribute it to pages that matter more to us. The goal is to get more of our important pages indexed. By doing so, we can actually add to the total PageRank within the site, because every page has an intrinsic value.

Neither of these approaches is a “recommendation” for what you should do with a given web site. Every situation is different. Sometimes nofollow helps, but it’s not the only tool at your disposal, and not the only tool you can or should use.

Imaginary Real Life Scenario – PageRank Misses The Point

Imaginary real life. Sorry. Best I could do – this is based on a true story, but sanitized for your protection.

Let’s say that Joe runs an e-commerce store selling gardening equipment and supplies. Joe has 10 categories of products in his store. Most of the categories have a couple dozen products, but the “sprinklers” category has 85 products. Why? Because that’s what he needs to have in order to meet his customers’ needs.

Unfortunately, Joe’s having indexing problems. Google doesn’t have a problem picking up his other product pages, but he’s scratching his head over why they don’t want to index his sprinkler product pages. So he goes out to the forums for advice, and in no particular order, is told to:

  1. Rewrite the product descriptions in case they’ve been filtered as duplicate content. Done. No change.
  2. Submit an XML sitemap. Done. No change.
  3. Add content to the category page, in case it’s been filtered as duplicate content. Done. No change.

Experienced SEOs will already have spotted Joe's real problem – it's structural. The 85 product pages (plus 10 category links, plus 15 overhead pages) add up to 110 links on his "Sprinklers" category page. The PageRank is just being sliced too thin, and the sprinkler product pages barely make the supplemental index, if that.

So what can Joe do? Sprinklers are his most important product category – irrigation is the linchpin of good gardening after all. (OK, I made that up).

What if Joe "cut" the links between his product categories (second-tier pages) by nofollowing the cross-links between his category pages, except for the link to the Sprinklers category, which is left in place? The result: more PageRank for the Sprinklers page, borrowed a little from the other pages.

Would this fix Joe’s problem? Well, it might, but it might also borrow too much from the other pages and create indexing problems elsewhere. That’s why you have to do the math, or do this stuff a little bit at a time.
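Here's the back-of-the-envelope version of "doing the math" for Joe's store. The 110-link count comes straight from the story above; the link counts I've assumed for the other category pages are my own guesses:

# Numbers from the story: 85 products + 10 category links + 15 overhead pages
sprinkler_links = 85 + 10 + 15
print("Share of the Sprinklers page's outflow per product link:",
      round(1 / sprinkler_links, 4))        # ~0.009 -- sliced very thin

# Each sibling category page (my guess): home + 9 category cross-links
# + 15 overhead links + roughly 24 product links
sibling_links_before = 1 + 9 + 15 + 24
# Nofollow the cross-links to the other 8 categories, keep the one to Sprinklers
sibling_links_after = sibling_links_before - 8

print("Sprinklers' share of each sibling page's outflow, before:",
      round(1 / sibling_links_before, 4))   # ~0.020
print("Sprinklers' share of each sibling page's outflow, after:",
      round(1 / sibling_links_after, 4))    # ~0.024

Not a dramatic change per page, but multiply it across nine sibling categories and the Sprinklers page ends up with meaningfully more PageRank to pass down to those 85 products.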

Joe could also “fix the structure” (without using nofollows) by splitting his Sprinkler category into 3 or 4 categories, and if this actually increases sales, I’m all for it. That’s another option, but if it doesn’t add something to conversion / usability, Joe would be insane to do it simply for SEO reasons…

Don’t “Break” Your Site Over SEO!

In my experience (which goes back to the birth of Netscape as a browser), you never need to harm usability & conversion in order to accomplish an SEO goal. There’s always another way. Nofollow is a tool that can help.

We’ll talk more soon.

How To Get More Pages Indexed With Nofollow

I knew Chapter 4 of SEO Fast Start (on site structure) was going to be just a little bit controversial… but it really shouldn’t be. In this post I will briefly give some facts about where we are, controversy-wise, just to get you up to speed. I hope that a brief statement of the facts and a little explanation will help you filter out the noise that’s going around about this subject.

The timing is interesting, because I had already planned a tutorial for this week, on the pros, cons, ins, outs, and reasons for using "dynamic linking" (nofollow is just a tool) within your site… then a great new tool was released that makes the whole thing a lot easier… and along comes the sound and fury of controversy to make it "topical."

If you’re not interested in the controversy and just want to learn how to use nofollow, don’t worry, because I’ll get to the meat pretty quickly. (If you don’t know what nofollow means, you may want to read the book first).

The Nofollow Controversy Rages – But Why?

Google’s reps have been telling us for over a year that it’s OK to use nofollow on your own internal links, although they usually emphasize that it’s not good for guaranteeing that a page will not be indexed, since they may find other links that aren’t nofollowed. This is actually an important feature that we make full use of in dynamic linking, BTW. Anyone who tells you that using nofollow means removing pages from the index simply doesn’t understand it yet.

Last week, Rand Fishkin published an interview with Google’s Matt Cutts. Matt repeated, in plain English, that it’s perfectly safe to use nofollow on your internal links, to control the flow of PageRank within your own site. I thought this would end the controversy, but Rand’s interpretation of Matt’s comments left an opening for the semantic parsers of the world to pick a fight.

Rand’s words: 

Nofollow is now, officially, a "tool" that power users and webmasters should be employing on their sites as a way to control the flow of link juice and point it in the very best directions.

If you replace the word "should" with "could" then nobody would have a nit to pick… but he did say "should" so let me deal with that.

The Big Question – Should You Use Nofollow?

My answer to this question is an unqualified "maybe!" I can’t really stand behind that answer with pride, because it’s no kind of answer at all, so maybe I should explain a bit more…

In SEO Fast Start, I answered "yes," but the implementation is very limited, because while the "fast start" method is intended to be a framework for all SEOs, the book itself was primarily written as a beginner’s guide.

So for beginners, I described a very minimal implementation that involves nofollowing some links to "overhead pages" like privacy policies, contact info, terms & conditions, etc. This is a "play it safe" approach, which should at least deliver some benefits.
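If it helps to see the minimal implementation as markup, here's a sketch in Python. In real life you'd just add rel="nofollow" to those links in your page template; the paths here are hypothetical, and you'd need BeautifulSoup (pip install beautifulsoup4):

from bs4 import BeautifulSoup

OVERHEAD = {"/privacy.html", "/terms.html", "/contact.html"}   # hypothetical overhead pages

def nofollow_overhead(html):
    soup = BeautifulSoup(html, "html.parser")
    for a in soup.find_all("a", href=True):
        if a["href"] in OVERHEAD:
            a["rel"] = "nofollow"    # cut this link out of the PageRank calculation
    return str(soup)

print(nofollow_overhead('<a href="/privacy.html">Privacy</a> <a href="/widgets/">Widgets</a>'))
# -> <a href="/privacy.html" rel="nofollow">Privacy</a> <a href="/widgets/">Widgets</a>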

Once you get past a very minimal implementation, it’s very easy to screw things up. So, if you don’t truly grok how PageRank works, you probably don’t want to mess around with it. Although I did outline several "advanced" nofollow & dynamic linking techniques in the book, I claim no responsibility for your ability to understand PageRank.

The #1 Goal Is To Get More Pages Indexed

Before I go any further, let me explain why you might want to control the flow of PageRank within your site. It boils down to one major goal – index penetration. If you can get a little bit more PageRank to your most important content, by taking some away from less important content, you just might be able to get more of your pages into Google’s index. That’s it – that’s the key point. Getting more of your important pages indexed.

If you expect to funnel so much extra PageRank to your "money pages" that they will leap to the top of the rankings, then you’re probably dreaming, because you can only accomplish so much with changes to your site structure. The primary impact on your "money pages" will come from getting more of your other pages indexed, because the additional pages can be used to link (with appropriate anchor text) into your money pages.

Now, if your site is so small that you could literally link to every page from the home page (<150 pages), the minimal implementation (as described in Chapter 4 of SEOFS) is about all you’d ever want to do. Likewise, if your site has very little PageRank coming in from external links, then you probably have bigger fish to fry, so do the minimal implementation, if that, and get to work on more important stuff.

If you have a large site, with a lot of sitewide links to "overhead" pages, and you’re having a hard time getting your deeper pages indexed, then changes to your site structure can make a big difference in how many pages get indexed. One of my students worked through a major site restructuring last year, and went from a few hundred to over 1000 pages indexed – with significant gains in traffic and sales.

The Real Issue Is Site Structure – Nofollow Is Just A Tool

No matter what your situation, the key question isn’t really about nofollow at all. The key question is whether you can improve your position with search engines by changing the internal linking structure of your web site. Most of us can do at least a little bit better, because it’s very unlikely that you’ve developed the optimal structure by chance.

Once you’ve decided to make changes to your linking structure, it’s really down to making choices about the methods you’re going to use. Using nofollow allows you to "cut" links out of the PageRank calculation, without taking them away from users. This makes the nofollow attribute a handy tool, because you can make some kinds of structural modifications transparent to your site’s users.

For the sake of usability, you probably want links to your privacy policy, shopping cart, terms & conditions, contact information, etc. on every page. In fact, you may want some of those pages indexed (contact information) because people do use site: searches to find that kind of information… the question is whether you want those "overhead pages" to be more important (have more PageRank) than your real content (product pages, etc.)

How PageRank Flows Inside Your Site

PageRank, to misquote a friend of ours, is a very subtle beast. PageRank attempts to decide which of the pages you’re linking to are more important, by simulating a "random surfer" who blunders around the site, clicking links. The more times the random surfer stumbles across a given page, the more PageRank it has.
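If it helps to see the surfer in action, here's a twenty-line toy version in Python – a made-up three-page site, the usual 0.85 "keep clicking" probability, and visit counts as a rough Monte Carlo stand-in for PageRank:

import random

links = {
    "home":     ["products", "privacy"],
    "products": ["home"],
    "privacy":  ["home"],
}
pages = list(links)
visits = {p: 0 for p in pages}
page = "home"
random.seed(1)

for _ in range(100000):
    visits[page] += 1
    if links[page] and random.random() < 0.85:
        page = random.choice(links[page])   # follow a random link on the current page
    else:
        page = random.choice(pages)         # get bored and jump somewhere at random

total = sum(visits.values())
print({p: round(v / total, 3) for p, v in visits.items()})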

When this random surfer does his work at the scale of the web, the result is wonderful. Important web sites and even particularly important pages (those cited frequently on the web at large) end up with more PageRank. It’s a good thing. Link spam influences it to a degree, but you’d have to be one hell of a spammer to get more PageRank than, say, Amazon.com. Link spam probably has a lot more influence because of anchor text than it does on PageRank. You can hate Google if you like, but PageRank is a beautiful innovation.

Anyway… back to the point.

One of the "subtle things" about PageRank is that the amount flowing out of a page is divided up between all the links on the page. If there are 10 links, each one gets one tenth. If there are 100 links, each one gets 1% of the PageRank that flows out from the page. So removing a link means that the other links carry more weight. If one out of every five links points to an "overhead" page, then 20% of your PageRank is flowing into pages that you don’t really care about very much from an SEO perspective. But you need those links, don’t you?

PageRank works great at the scale of the web, but not so well once it gets inside of your web site. That’s because your web site will have a lot of links that you need for accessibility, usability or legal compliance, which lead to pages that aren’t especially interesting or important. Nobody "out there" on the web is linking to your "earnings disclaimer" page, but if you have to have one, you probably have to link to it from every page on your site.

It actually helps to understand this if you put yourself in the position of the spider, and pretend you’re standing on the home page, faced with dozens of links that all look the same. To borrow from Crowther & Woods, it appears to be "a maze of twisty little passages, all alike."

Unless you do something about it, the overhead pages on your site get more PageRank than they really deserve. You can remove these overhead pages from the index by using robots.txt or a robots meta tag, but completely removing them actually reduces the total amount of PageRank inside your site.

Completely blocking spiders from these pages also means that they can’t be found by visitors using a site: search, so it’s not the greatest thing you could ever do for usability – what if someone is trying to find your privacy policy, or searching for your fax number?
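To make the distinction concrete: robots.txt keeps the spider away from the page entirely, while nofollow only takes a link out of the PageRank calculation – the page itself can still be crawled and indexed through other links. A quick check with Python's standard library, using a hypothetical robots.txt:

from urllib import robotparser

rules = """User-agent: *
Disallow: /privacy.html
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("Googlebot", "/privacy.html"))   # False: the spider never fetches the page
print(rp.can_fetch("Googlebot", "/widgets/"))       # True: crawling is allowed; a nofollowed
                                                    # link would only affect PageRank flow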

Nofollow Gives You Some Control Over PageRank Flow

I say "some control," because nofollow isn't a magic Swiss Army knife. It's just a tool. If you think of every link on your site as a valve that pushes some PageRank on to the next page, nofollow simply lets you turn some valves off. This increases the amount of PageRank flowing through the remaining links. By "nofollowing" the links to your overhead pages (except, perhaps, from your sitemap) you move more of it into your important pages. It's that simple.

The total amount of PageRank that you have to play with is a function of how much is coming in from the web (mostly), and how many pages you have indexed. You can get more, but no matter how much you get, it still has to be divided up between the pages on your site. Nofollow can’t create more PageRank than you already have, unless you actually get more pages indexed.

Because of the way PageRank flows, your home page will normally have a lot more than your "second tier" pages, which will have a lot more than your "third tier" pages. So although nofollow can help you increase the share of PageRank that flows into each tier, if you want a specific page to get the most possible PageRank, you have to link to it from pages that have some to share – like the home page, many second tier pages, etc.

If The Whole Thing Gives You "Tired Head," You’re Not Alone

Thinking about this stuff wears me out… actually doing the math is even more of a beating.  If you’re like me, you’ll do the simple stuff and then move on. If you really want to get hardcore about it, you’re going to need tools… I’ve built them on my own in the past, and I wouldn’t dare share the kind of spaghetti code that I write with the world.

Fortunately, there is a tool out there that you can use… and it’s free (ain’t the web cool?). Halfdeck (of SEO4Fun) has recently released a free tool called the PageRankBot that will spider your site and map out the distribution of PageRank. He’s labeled it badly as a supplemental results detector, because it’s actually a lot cooler than that. There will be some work involved in installing it, and I am not on board for tech support. With that caveat, it can be all kinds of fun to play with once you get it running.

He even used it to simulate a "3rd level push" – sort of (I don't think he cut the links from the second tier to the home page and left the sitewide links in place), and simply by playing around realized that the "sitewide" links were holding him back from getting more PageRank deeper into the site. It would take you a lot of time to do that without a tool – with it, he sorted out a better PageRank distribution in an afternoon.

To Learn More: Read Chapter 4 of SEO Fast Start – It’s Free

With apologies to our guests, most of the folks reading this have already downloaded SEO Fast Start… so rather than repeat it all here, I’ll refer you to Chapter 4 of SEO Fast Start. The book is free, but if you’re not sure that it’s worth the few minutes it would take for you to go download it, you can read my explanation here.

Discuss…

(PS – I will never buy the idea that Google’s just trying to trick us into revealing our sites as "SEO’d" – I think they can spot the kind of SEO they care about by looking at the anchor text of inbound links)