Dan Thies. Live. Dallas/Fort Worth. October 20 & 21. Are You In?

OK, folks… here’s the deal. I have a tentative reservation for meeting space, but I need to commit to it this week to lock it in, and I need to make sure the room is big enough. I won’t go over 60 attendees no matter what, because I want to be able to interact with everyone.

So, I need to ask for a quick show of hands…

Assuming that the cost is reasonable (under a thousand for sure), would you be likely to attend my live 2-day seminar in October?

If you need to know more about it before you can decide, that’s cool – let me know what your questions are.

Right now it’s shaping up as a day of SEO, and a day of link building & promotion tactics, with a lot of time to look at individual web sites and work together on specific actions that you can take home with you.

Lies, Damn Lies, & SEO: Statistical Analysis of SERPs

It sounds so seductive… by using advanced statistical methods, you can determine the best mix of on-page factors for SEO. Wow, imagine the incredible competitive edge you’d have. You could use just the right number of bold tags, figure out whether to use <b> or <strong>, and you’d be an unstoppable ranking machine.

The only problem with this approach is that it’s complete bunk. Let’s try a couple examples…

A Statistical Lie I Kind Of Liked: MSN "prefers" sites that run on Microsoft’s own IIS

A while back, someone published a statistical study that appeared to show that MSN’s search results were far more likely than Google’s or Yahoo’s to contain pages from sites that run IIS. Did people take this as a sign that they should move their web sites onto IIS? No, of course not… because Google has a much bigger market share, people actually thought maybe they should switch away from IIS in order to do better on Google!

So… now you wonder: is MSN rewarding you for using IIS while Google doesn’t care, or is Google rewarding you for using Apache while MSN doesn’t care? If Google doesn’t care and MSN does, then you rush to IIS. If Google cares and MSN doesn’t… enough! Spare yourself the circular reasoning before you go mad, and let’s consider some possible root causes.

At the time this study was published, I pointed out that there are many differences between IIS and Apache, aside from the names.

  • Whereas Apache is the majority choice across the web as a whole, as you move to larger sites, and in particular the corporate world, IIS has a much stronger position. So if Google crawls more of the web’s smaller sites than MSN does, it will have a higher percentage of Apache-delivered pages in its index – which means, statistically speaking, you’re likely to see a higher percentage of pages on Google SERPs being served up by Apache.
  • ASP.Net, whatever else it does, can come with a lot of extra baggage – such as the hidden "viewstate" form fields it tends to insert, carrying 10-50k of utter gibberish text. So if MSN taught their bot to ignore this junk and Google didn’t, that alone might account for the statistical variation (see the sketch after this list). Since it’s relatively easy to build a site on IIS without adding all that dead weight, it’s hard to blame the search engines either way.
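To make that concrete, here’s a minimal sketch of what "teaching a bot to ignore the junk" might look like – stripping the hidden __VIEWSTATE field before weighing a page’s text. The regex and the toy page are my own illustrative assumptions; nobody outside the search engines knows how their parsers actually handle viewstate.

```python
import re

# ASP.Net emits viewstate as a hidden form field named "__VIEWSTATE",
# often carrying tens of kilobytes of base64 noise.
VIEWSTATE = re.compile(r'<input[^>]+name="__VIEWSTATE"[^>]*>', re.IGNORECASE)

def strip_viewstate(html: str) -> str:
    """Drop viewstate fields so they don't count as page content."""
    return VIEWSTATE.sub("", html)

# A toy page: ~20k of viewstate gibberish wrapped around a few words of text.
page = ('<form><input type="hidden" name="__VIEWSTATE" value="%s">'
        '<p>Hello world</p></form>' % ("QmxhaA0K" * 2500))
print(len(page), len(strip_viewstate(page)))  # the dead weight disappears
```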

The bottom line: search engines don’t care what kind of server you run. They might care how it behaves, but not about the name.

The Original Statistical Sin: Keyword Density

If you’ve never used "search engine optimization" software to tell you how to optimize your web pages, good for you. If you run keyword density analyzers to do anything other than extract search terms from web pages… stop. You don’t need to. Keyword density isn’t a factor – search engines just don’t work that way.

Keyword density is loosely defined as "the percentage of the words on the page that are your keywords." I can remember endless debates back in the late ’90s about the "right" way to measure it – did you count all the words, did you only count exact phrases? There was only one problem with those debates – we were all wrong. Search engines do not measure the "keyword density" of a web page. Continue reading
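For what it’s worth, here is everything those tools were ever computing – both of the old counting methods, in a few lines of Python. The sample text and keyword are made up, and this is the metric being debunked, not a recommendation:

```python
def density_by_words(text: str, phrase: str) -> float:
    """Count every occurrence of any keyword word ("count all the words")."""
    words, kw = text.lower().split(), set(phrase.lower().split())
    return sum(1 for w in words if w in kw) / len(words)

def density_by_phrase(text: str, phrase: str) -> float:
    """Count only exact phrase matches ("only count exact phrases")."""
    words, kw = text.lower().split(), phrase.lower().split()
    hits = sum(1 for i in range(len(words) - len(kw) + 1)
               if words[i:i + len(kw)] == kw)
    return hits * len(kw) / len(words)

text = "blue widgets are great blue widgets ship free widgets galore"
print(density_by_words(text, "blue widgets"))   # 0.5 - every blue/widgets
print(density_by_phrase(text, "blue widgets"))  # 0.4 - only "blue widgets"
```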

Dynamic Sites – Yahoo Tosses Us A Bone

This is sort of good news: Yahoo has announced that site owners who register and verify their sites through Yahoo Site Explorer will be able to specify a small number of URL variables (parameters) that should be ignored. This is mainly useful for parameters/variables like session IDs that don’t actually identify unique content.

An example of a session variable would be something like:
www.seoresearchlabs.com/thispage.php?session=A99F4C3
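In practice, "ignoring" a variable just means stripping it out before deciding whether two URLs point at the same page. Here’s a minimal sketch of that canonicalization – the helper is my own illustration, not Yahoo’s implementation, and the parameter name comes from the example above:

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

def canonicalize(url: str, ignored: set) -> str:
    """Rebuild the URL with the ignored query variables removed."""
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in ignored]
    return urlunparse(parts._replace(query=urlencode(kept)))

a = canonicalize("http://www.seoresearchlabs.com/thispage.php?session=A99F4C3",
                 {"session"})
b = canonicalize("http://www.seoresearchlabs.com/thispage.php?session=B17D2E9",
                 {"session"})
assert a == b  # both collapse to .../thispage.php - one page, one URL
```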

Some shopping carts and other applications fall back on session variables in the URL when the visitor doesn’t accept a session cookie. Since spiders don’t accept ANY cookies, you have to write your applications to recognize spiders and not give them a session ID, or to only create a session when the visitor does something (like adding to their shopping cart) that you need to track.
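Here’s a minimal sketch of that second approach – hold off on creating a session until there’s something to track. The Request stand-in, the bot list, and the function names are illustrative assumptions on my part, not any particular cart’s API:

```python
import uuid
from dataclasses import dataclass
from typing import Optional

@dataclass
class Request:              # stand-in for a real framework's request object
    user_agent: str
    cookie_session: Optional[str] = None
    action: str = "view"

KNOWN_BOTS = ("googlebot", "slurp", "msnbot")  # Yahoo's crawler is Slurp

def is_spider(user_agent: str) -> bool:
    ua = user_agent.lower()
    return any(bot in ua for bot in KNOWN_BOTS)

def session_id_for(req: Request) -> Optional[str]:
    """Return a session ID only when one is actually needed."""
    if is_spider(req.user_agent):
        return None                  # never hand a spider a session URL
    if req.cookie_session:
        return req.cookie_session    # cookies work: keep the URL clean
    if req.action == "add_to_cart":
        return uuid.uuid4().hex      # first trackable action starts a session
    return None                      # plain page view: no session, clean URL

print(session_id_for(Request(user_agent="Mozilla/5.0 ... Slurp")))  # None
```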

Unfortunately, since Yahoo has decided to do this on their own, other search engines will continue to have problems with poorly written applications.

What the SEO world really needs is for the search engines to come together, as they did with the Sitemap protocol, and agree on one standard method that would allow site owners to tell them which variables to ignore – for example, by extending the robots.txt file with a list of variables to ignore.
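Something as simple as this would do – a made-up directive, purely to illustrate the idea; no search engine actually supports it:

```
User-agent: *
Ignore-param: session
Ignore-param: affid
```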

The fact that Yahoo is going it alone on this one is a pretty strong indication to me that, whatever conversations are taking place between the search engines on protocols, this isn’t seen as an important issue. Kudos to Yahoo for exhibiting some leadership, but this is a problem that should have been solved ages ago.

Relevance, Discoverability, and Crawlability

I hate to blog about bloggers blogging about blogging… but sometimes someone writes something in that "BBAB" line that’s worth mentioning. For example, Vanessa Fox, with an excellent 4-post series that takes you above the SEO fray to consider relevance, discoverability, and crawlability. Most of the specific examples will have more relevance to bloggers than to others – it was written for bloggers. After years of screaming in the woods about relevance, it’s nice to see someone who gets it, or at least sort of does.

Google Proxy Hacking: How A Third Party Can Remove Your Site From Google SERPs

In June of 2006, while working to resolve some indexing issues for a client, I discovered a bug in Google’s algorithm that allowed 3rd parties to literally hack a web page out of Google’s index and search results. I notified a contact at Google soon after, once I managed to confirm that what we thought we were seeing was really happening.

The problem still exists today, so I am making this public in the hope that it will spur some action.

I have sat on this information for more than a year now. A good friend has allowed his reputation to suffer, rather than disclose what we knew. I continue to see web sites that are affected by this issue. After giving Google more than a year to resolve the issue, I have decided that the only way to spur them to action is to publish what I know.

Disclaimer: What you’re about to read is as accurate as it can be, given the fact that I do not work at Google, and have no access to inside information. It’s also potentially disruptive to the organic results at Google, until they fix the problem. I hope that publishing this information is for the greater good, but I can’t control what others do with it, or how Google responds.

Continue reading

Clearing The Air On "The Long Tail"?

In a post last week, I discussed a certain Internet Marketer’s use of "straw man" arguments to debunk the "myth" of the long tail. Well, we have a bit more clarity this week. Since they apparently didn’t get my emails about this last week, you can imagine my surprise when they thought enough of li’l old me to fire back with a new video.

Now in this video, while ignoring most of what I said last week (controversy sells!), Nancy is kind enough to clear up what she means when she talks about long tail keyword strategy. The straw man looks pretty much like I thought it did – the strategy that Nancy is debunking goes something like this:

  1. Identify thousands of search terms that generate as little targeted traffic as possible
  2. Build thousands of web pages to "target" these low-volume search terms

Nancy says that this kind of strategy is really dumb, and a terrible waste of time. Well, once again, I just have to say: "DUH!" Who in the world would tell you to go and do something like that? What kind of crazy "strategy" is that?? I mean, you’d have to be a complete…

Continue reading

Ch-Ch-Changes… For The Better?

Our Community Is Strong!

SEO Fast Start 2007 has been out for 2 full months now… and our community has grown a lot. These are exciting times for me… but I still look at the portal, and the new subscriber numbers, and say… "is that the best we can do?" And the answer is clear – we can do a lot better. We can grow a lot faster. But I need to do my part.

SEOFS 1.0: A Good Experiment – With Conclusive Results

The diagram below represents what I call "version 1.0" of the SEO Fast Start web site.

 [Diagram: the SEO Fast Start web site, version 1.0]

As you can see, the "home page" of the SEOFastStart.com domain has been a short "sales letter" to promote opt-in registration to receive SEO Fast Start. Those who opt in get access to SEO Fast Start and subscriber bonuses through my newsletter.

Even with testing & tweaking, the response rate on that page hasn’t been great. The support portal hasn’t remained a "members only" area, because I don’t want to put up a login wall, and people are linking in. Hey, it’s the web, that’s what we do!

This is, to be honest, about what I expected… but it’s important to me to get people registered so they can get updates. While the core strategies of SEO don’t really change much over time, a lot of important tactical details really do move at a faster pace, and those who aren’t well informed often make major mistakes.

The Definition Of Insanity: "Continuing to do the same thing, while expecting different results."

 I want to get different (better) results, so I need to make some changes… and here’s the plan for version 2.0:

Continue reading