January 13th, 2009 at 12:36pm
A few months ago, I revamped an ages-old travel-related site (in German) called Ferientips. The site initially had PR 4 and many links have remained the same, yet it is now down to PR 2. I guess that Google noticed a sharp change in the content of the pages, even though most of the actual content remained the same and only the layout (and some of the links) changed. Probably the most interesting thing about it is that during the initial weeks, no change was seen whatsoever.
I am not whining about the change (though I do not like it either), but it teaches interesting lessons about the way Google calculates PageRank. Especially if you purchase a site, you must be very careful what you do not only with the URLs but with the content pages themselves. Google seems to try hard to detect changes and seems to be quite aggressive in how it responds to them. Please note that the site owner (the domain registry owner record) did not even change.
October 15th, 2008 at 12:22pm
I have just converted one of our old sites, dedicated to a Lotus Notes antivirus product that no longer exists, into a very small microsite. Starting with one page, I plan to extend it over time. I will, however, use only very minimal navigation, mostly relying on the good old way of text-embedded hyperlinks. I guess that will also help to get clicks on the ads. While keeping it small, I still intend to add good content, just not a wealth of it. I am very interested in how this experiment will turn out. The site is called Notes and Domino Antivirus and is written in German.
November 30th, 2007 at 09:37am
Ever since it started, Google AdSense has prohibited adult content. Explicit nudity, for example, was (and is) way beyond Google’s terms of service.
But of course, there is a lot of content that borders on the mature-content line. In fact, some webmasters play the game of trying to get more page views by intentionally skirting that line. I suspect they get their revenue from advertisers who are also close to violating the AdWords TOS. AdWords, too, prohibits explicit ads.
Google now seems to go after them. They have written an interesting post on their blog: “Inside AdSense: Play it safe, family-safe“. In short, Google replaces the term “explicit” with “family-safe”, which in itself is more explanatory. But Google also provides a few good questions that a webmaster should ask himself:
Would I be comfortable viewing this content with my parents or children in the same room? Would I feel comfortable viewing this content if my boss walked up behind me while I had this content on my screen?
Of course, the comfort level varies from person to person. But I think the questions pretty much get to the point.
And if I may engage in some wild guessing: Google’s blog post could be the beginning of a campaign against webmasters (and probably advertisers as well) playing the “mature content game”. It looks a bit like Google is cleaning up its portfolio, just like it began to seriously penalize paid links. So if you have close-to-adult material and you are running AdSense ads, it’s probably a good time to reconsider using AdSense.
From a general point of view (as a citizen, parent(!), AdWords advertiser, …) I think I welcome this change. I don’t like explicit AdWords ads coming up on my site. And I don’t like my family-friendly ads coming up on explicit sites.
November 26th, 2007 at 11:30am
A Google search always brings up ten sites on the first search result page - that’s for sure. Most webmasters try to be among these first ten. But what if you can’t make it immediately? From time to time, you’ll notice that one of the high-ranking pages refers to a competitor. I was asked how to approach that site’s owner to get a link to your own site as well. Here is the truth about how you may succeed.
I assume the site in question is a quality site. If it sold links, it would be obvious how to approach it… Besides, I wouldn’t approach it at all, because that would cause problems with Google. They are hunting down sites that sell or buy links and demoting them in search results. So… don’t do that ;).
Assuming it is a quality site, you need to provide a very good reason for it to link to you. In my experience (being a software publisher), quality pages that rank high in the search engines are typically reviews or articles. Most of them link to your competitor because they either
- reviewed the site or product, or
- found the site or product useful and link to it as a service for their readers
In the first case, you are most probably out of luck. You may get them to review your site, too, but it is unlikely that you will get them to update that other review page. Contacting them may still make sense, because you never know whether the new review (yours!) will gain higher priority and jump above your competitor’s.
The second case is much better, especially if the link to your competitor is part of a list of links (e.g. “See also” resources). But even if it isn’t, you may be able to convince the site owner that adding a link to your page would be useful for him.
And now we are at the real question - how to convince them? The most important point is that you look at things from their point of view. Study the page and site in depth before you approach the owner. Ask yourself a number of questions:
- what is the site’s main theme?
- what is its reader base? (Professionals? Folks looking for fun? …)
- what is the writing style of the site?
- how frequently is it updated? (is it still alive???)
- are there any guidelines for link requests?
- is there any way to build reputation before you ask to be added (e.g. by good forum posts)?
This list is not exhaustive. Try to find out as much about the site as possible. If you think it isn’t worth the time doing this, it’s probably not worth writing a link request either.
A very important factor is whether the site looks alive or not. Many high-ranking pages are quite old. The problem is that with an unmaintained site you may not have any chance of being linked at all - no matter how hard and how well you try. No matter how badly you think you need a link from a specific page, you can waste a lot of time trying to optimize for a page that will never receive an update. If I see such a dead site, I think more than twice about trying to get links from it. Most often, I decide to let it go, because there are more promising targets. In the few cases where I tried, I almost never succeeded. So you have been warned…
Then, once you know the site, think about “How can my site provide a useful service to it?”. It probably pays to write a page specifically for that site. Try to match its audience and writing style. This is not only important for the link request but also for keeping readers in case it is granted. Try to provide supplemental information, not a duplicate of what is already on the site.
Look at your competitor’s site, too. Think “What made this page so appealing that it was linked?” Also ask yourself “What is my competitor missing? What additional information can I offer?“. Again, these questions (or, more precisely, the answers ;)) should go into your custom-created page.
Now go ahead and create your page. Make sure it has unique content and is very appealing to the webmaster you are asking for a link. Then, write a great letter to send to him. You probably have only one shot, so everything must be perfect on the first try.
When the webmaster responds, you’ve probably won your case. Even if he doesn’t like updating the page (e.g. because it is too old, no longer relevant in his view, or …) you may have a chance to get something out of it. First of all, you might try to persuade him gently. For example, the “it is no longer relevant” objection can sometimes be resolved by mentioning that it is a high-ranking page and thus still relevant - you get the idea. But do not push too hard. It’s a bad idea to make the webmaster upset. In most cases, once you have a good contact, you can get him to link to your site from some other page, maybe even a new one he creates. Do not risk this by arguing too much.
As you can see, trying to get links on high-ranking sites usually requires a lot of effort and is not even guaranteed to succeed. However, if you have a great and relevant site, it can be very rewarding. Not only do you receive a higher ranking, you can sometimes gain many more visitors via that link. Plus, you build your reputation. So it is always worth a try!
I just found a perfect example of how NOT to ask for a link. I am sure you will enjoy reading that one ;).
November 26th, 2007 at 10:36am
I just came across this question:
Here is the situation. I have owned a website since 2005, and since then it has had fewer than 200 backlinks the whole time.
The website was optimized, had all the tags (H1, H2), was optimized for a specific keyword in an extremely competitive industry (insurance), and was getting ranked pretty high.
Within last 3 months we have added over 1600 backlinks from all kinds of websites, blogs, press releases, article sites and Directories. All the backlinks were created with proper targeted anchor text.
After doing that the website dropped 10 spots on Google.
Can backlinks damage your website ranking? Does it have anything to do with creating a lot of backlinks in a short amount of time? Is there such a thing as too many backlinks?
Backlinks can damage a site’s ranking if they come from a bad neighborhood. This is clear for paid links, which are hunted by Google. But Google and others also try to identify spam sites, and backlinks from them can actually reduce your reputation. So one should choose wisely where to try to get backlinks from. Also, backlinks not related to the site’s theme (e.g. technical sites when your site is about insurance), and many of them from many different themes, may discredit your site.
It has also been reported that rapid growth in backlinks may result in a penalty. But I have never seen any evidence for this. I think the root cause here was not the growth of backlinks but the other factors outlined above.
I have recently written about TrustRank, and there is some information in that article on bad neighborhoods taking trust away from your site. You may want to take a look at it…
November 26th, 2007 at 10:17am
Beginners often ask why there is such a big difference between the backlinks they see in different search engines. Often, they say: we have a large number of backlinks in Yahoo and <your search engine of choice here>, but Google shows only one or two.
Does Google ignore your site? Well, that may be (speaking of the sandbox). But most often the cause is much simpler. Google offers a backlink search via the “link:” special search keyword. You type, for example “link:spacelaunch.gerhards.net“. At the time of this writing, it brings up 61 pages linking back to my space blog. That doesn’t sound too bad. But it is not the full truth, either…
Google has a special offering called “Webmaster Tools“. There, you can authenticate your site. This is done by adding either a meta tag or a special web page. After you have done this, Google knows you are actually the site owner and lets you dig into all the details. There is a section “Links” and in it “Pages with external links”. This is where you should look for your inbound links. For my blog, it showed no fewer than 506 backlinks! Obviously, this is much better than what the search query brought up. In Webmaster Tools, you also have a very nice interface where you can see which pages attract which backlinks - a useful tool for continuing to build your network.
My assumption is that Google does this to protect webmasters from competitors by not revealing all backlinks publicly. Otherwise, it would be too easy to try to get listed in the same places…
Lesson learned: if you’d like to know your backlinks, do NOT use the “link:” search keyword; use Google Webmaster Tools instead.
November 25th, 2007 at 10:18am
As I am broadening the scope of my space flight blog, I have now changed the title from “Viewing a Space Launch” to just “Space Flight”. I wonder how this will impact the ranking of my pages. Currently, I rank quite high for a lot of keywords that contain “launch”. However, traffic is low, so there is not much to lose.
I am not yet ready, though, to change the URL. Maybe I’ll do that in the future, but it needs to be well thought out. Anyhow, let’s first see what the title change does to the site’s ranking…
Oh, one side note: although traffic is low, the click-through rate is quite OK. It’s consistently around 2%, which I think is healthy. So if I could drive more traffic, I could probably earn a bit from the AdSense ads I am running. That would be most helpful (even though this is primarily a hobby site, it should at least pay for the costs it creates…).
I am looking forward to seeing how this move affects my search rankings…
November 23rd, 2007 at 11:46am
Google has announced on its AdSense blog that AdSense Video Units are now being rolled out to the UK, Ireland and Canada. They also speak of good success in the US market (but what else would they say? ;)).
The key reasoning behind this move is probably that Google wants to keep the Video Unit experiment limited to the English-language market, and also to markets that share at least some of the same cultural beliefs. From a marketing and testing point of view, that makes a lot of sense to me.
What I would like to see, however, is that publishers (like me ;)) targeting these markets also get a chance to show video units - even if they do not reside in these countries. As I am from Germany, that probably won’t happen. But a lot of my pages address exactly the clientèle Google is looking at.
Well… let’s wait. At least my sites are prepared for the Video ad formats.
November 23rd, 2007 at 10:49am
Do you remember my test with a site related to the US greencard lottery? This site contains some real (and hopefully useful) information, but it is a two-pager with very limited links between the pages. The site is targeted at Germans, among whom the greencard lottery always draws some attention. Other than mentioning it here on the blog and submitting it once to digg, no promotion was done.
As expected, when I look at Google Webmaster Tools, I do not see any traffic or stats for the site. However, when I just looked up my own web stats (awstats), I found out that the site actually received a few hits (surprise, surprise) and some of them were the result of Google queries. Oops - results from Google SERPs? Interesting… I then did a check and used Google myself to run some queries (look at an example). And, indeed, the site appeared.
The first thing I noticed is that there are very few search results for these keywords; Google lists only 10,900, which means “nothing” in web terms. The next thing I noticed is that my poor page sits above some of its big competitors. My understanding is that these have not optimized their sites for the (most probably) unusual query that I ran. So, from that perspective, it seems understandable that my page shows up.
What I wonder about, however, is how quickly Google brought the site into its index. I had expected quite a delay before it would show up at all.
My leading theory is that the age of its parent domain might be a factor in the equation. The ferientips.com domain has been in use for over eleven years now and contained good content for most of that time. Recently, it was heavily outdated because I hadn’t maintained it for quite a while. But the content was still solid.
It is often argued that domain age is an important factor in assigning a page’s rank (notice the fine print: I did not say “pagerank” ;)). However, most folks say that the age factor applies only to the hostname. So www.ferientips.com would have that bonus, but greencard.ferientips.com would not. I personally always tended to agree with that school of thought. Now, I am beginning to question it. I had a similar experience with the site spacelaunch.gerhards.net. This is a high-quality blog about space launches and space in general. It gained Google attention and PageRank very quickly.
What both sites have in common is that they are subdomains of domains that have existed for a long time. So I am beginning to think that Google probably passes some of the “age benefit” down to subdomains of that very domain. This seems plausible, because there obviously is a strong relationship between two such sites. Of course, a factor is that my sites do not target heavily competitive keywords. So things may be different in that area. In any case, I’ll keep a keen eye on the development of the sites. Maybe they’ll go into the sandbox and my thoughts were totally wrong ;).
November 16th, 2007 at 09:00pm
I have done a lot of research since I wrote my original article on Google TrustRank. I was just too interested in tracking down this beast.
First of all, I have to admit that a good number of my technical assumptions and conclusions were simply wrong. My thoughts about much of it being rumor, however, survived the test. With my new knowledge, I think TrustRank really exists, and most probably has for a while - but many folks (including me until recently) simply misunderstand it. And that creates rumors around TrustRank that are not true.
The picture began to clear up for me when I found a scientific paper on TrustRank. It is from Stanford University, but it is not Google-specific. In fact, it used AltaVista as its test bed. However, I am pretty sure that Google has paid attention to it, if they did not develop something similar themselves in their labs.
What also helped to get the big picture was an interesting report about Google’s search labs in the New York Times. While it has no specific details on TrustRank, it has a lot of things that can be read between the lines.
I will try to sum up what I think is most important about this concept. It is my personal opinion - read the sources yourself, you may draw different conclusions. Keep in mind that Google’s TrustRank is probably different from what was in the paper. But I think it shares the basic ideas, otherwise it would probably be called something else (oh, I forgot that Google doesn’t call it anything at all… ;)).
Most importantly, TrustRank can be computed algorithmically. So my number one invalid assumption in the previous article was that TrustRank depends solely on human review. Quite the opposite is true, and it now fits much better into my overall picture of Google.
TrustRank (TR) is in many ways similar to PageRank (PR). Just the way it starts is different; let’s ignore that for now. As with PageRank, TrustRank can (and will) be passed from one page to another. A link to a page is a vote for that page. Part of the linking page’s TR will be carried over to the linked page. How much depends on many factors and shall not concern us here. Important is the fact that the TR calculation is pretty similar to the PageRank (PR) calculation.
What is totally different is the way the initial ranks are calculated. With PR, every site’s (link) votes are equal. In (too) simple words: you crawl the web once, count how many links a page receives, and the most-linked page gets the highest PR. All fully automatic - and all subject to spam, or SEO (to phrase it a little less upsettingly).
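To make the contrast with TrustRank easier to see below, here is a minimal sketch of that “everyone starts equal” idea, written in Python purely for illustration. It is of course not Google’s real formula, just the textbook-style propagation: every page begins with the same score and passes it along its outgoing links.

```python
def pagerank(links, iterations=20, damping=0.85):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # every page starts out equal
    for _ in range(iterations):
        # base score every page keeps, regardless of incoming links
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:
                continue  # dangling page: its score is simply dropped in this toy
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] = new_rank.get(target, 0.0) + share
        rank = new_rank
    return rank

# Hypothetical three-page web: "c" is linked to by everyone and ends up highest.
web = {"a": ["c"], "b": ["c"], "c": ["a"]}
print(pagerank(web))
```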
TrustRank, on the other hand, requires manual labor. Humans need to review sites and check how trustworthy they are. Are they spam? Do they have good information? Are they set up as a trap for the reviewer (e.g. they have good information now but are scheduled to change after acquiring trust)? Is the site owner trustworthy? Just think about it: a government site is probably more trustworthy than a private company’s, which in turn is more trustworthy than the average Joe’s (OK, some may argue about that, but I think you get the idea…). So even real-world, non-virtual trust plays a role in human review.
It is impractical to review all web sites. It is impractical to review even a small fraction of the sites - or a fraction of that small fraction. Only a very, very small number of sites can actually undergo human review. So TR needs to be able to deliver good results based on a small, select set of reviewed sites. Let’s call these sites “seed sites”. As their number is small, their selection is very important. It, too, can be done automatically. For example, sites that rank high on the search engine result pages (SERPs) could be chosen, or those with many outgoing links.
The actual method of selecting them shall not concern us here. For Google, it will remain a secret anyhow. Important is that the seed sites get selected by some parameters that qualify them. This is (intentionally) very vague, but the point to note is that there must be a reason to be in that set. It does not happen just by accident.
In the case of a real-world search engine, I’d also say that the seed set is not fixed, but is being worked on all the time. So we do not have a static set, but one that evolves over time. Just think about the spam busters that each search engine employs. I guess any site detected to be spammy will also become part of the seed set for TrustRank - with a thumbs-down vote. And while I am speculating: I’d assume that there is also a time value that comes with the human vote - a more recent review will count more than a review done months ago. But that is pure speculation. For a software developer like me, it just sounds like the right thing to do…
The seed sites are reviewed to be either trustworthy or not. Note that a vote of “not trustworthy” takes some trust away from the sites they link to. This is basically known with PR too - the old “do not go into a bad link neighborhood” paradigm.
Based on the (ever-changing) seed set and the (ever-changing ;)) PageRank-like TrustRank algorithm, trust is assigned to each and every page. As with PageRank, the closer you are to a trusted site, the more trust you receive (or the more is taken away from you, if you are linked to from a bad page). The TR calculation itself is purely automatic, no human intervention required. The end result is a nice TR value for each page. That value will be ever-changing too, but at a given moment in time it has a specific value. Let’s freeze time now and think about what that value means…
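To make this a bit more concrete, here is a toy sketch in the spirit of the Stanford paper (Google’s real algorithm is unknown, and the page names below are made up): the propagation loop is the same as in the PageRank sketch above, but trust is injected only at the human-reviewed seed sites and then flows along the links.

```python
def trustrank(links, seed_trust, iterations=20, damping=0.85):
    """links: dict mapping each page to the list of pages it links to.
    seed_trust: human-assigned trust for the reviewed seed sites only
    (e.g. 1.0 for a trusted site); every other page starts with zero."""
    pages = list(links)
    total = sum(seed_trust.values()) or 1.0
    # trust is injected only at the seed sites, not spread evenly over the web
    inject = {p: seed_trust.get(p, 0.0) / total for p in pages}
    trust = dict(inject)
    for _ in range(iterations):
        new_trust = {p: (1.0 - damping) * inject[p] for p in pages}
        for page, outgoing in links.items():
            if not outgoing:
                continue
            share = damping * trust[page] / len(outgoing)
            for target in outgoing:
                new_trust[target] = new_trust.get(target, 0.0) + share
        trust = new_trust
    return trust

# Made-up example: "uni" is a reviewed, trusted seed site. "shop" is linked
# from it and inherits some trust; "spammy" gets next to none, because no
# trusted page links to it.
web = {"uni": ["shop"], "shop": [], "spammy": ["shop"]}
print(trustrank(web, seed_trust={"uni": 1.0}))
```

In this toy version, a page’s trust simply decays with its distance from the seed set, and a page that no trusted site links to ends up with essentially no trust at all.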
… it is absolutely up to the search engine what it means! Of course, TR will be used to order pages in the SERPs. So it will be used to decide whether your site is shown on page 1 or page 1,000. But TrustRank alone, IMHO, would be far too weak to be used as the sole, or major, factor in the search result sort order. I guess that Google uses TR as one of the parameters it uses to compute the overall value it assigns to a page with regard to a search term. I don’t mean PageRank here, which I consider to be just another parameter. I am sure there are a myriad of other parameters. The NY Times article has quite a good explanation of what may be considered, so if you’d like more ideas, go and read it.
The question is how much weight Google assigns to TR and PR. You’ll probably never find an official Google answer. And, to be honest, I don’t think one is even needed. It is obvious that Google will tweak that part of the algorithm the same way it tweaks other parts of it. So, for example, the weight may be a number x for a given search term and a value y for another. And the very next day it may be completely different, because the Google search team has had another bright idea.
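Just to illustrate what I mean by “one parameter among many” - the signal names and weights below are entirely made up, not anything Google has published:

```python
def page_score(signals, weights):
    """Combine several per-page signals into one sort value.
    Both dicts are keyed by a signal name such as "pr", "tr" or "relevance"."""
    return sum(weights.get(name, 0.0) * value for name, value in signals.items())

# Made-up numbers: a page with good relevance but only mediocre trust.
signals = {"pr": 0.4, "tr": 0.2, "relevance": 0.9}
print(page_score(signals, weights={"pr": 1.0, "tr": 2.0, "relevance": 3.0}))
```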
Speculation again: what I think happened with the last ranking update is that Google probably changed these weights as well as some other parameters in its algorithm. I do not think they introduced TrustRank for the first time. It has been known for too long for Google to adopt it only now. But they’ve probably given it a boost to combat what they consider spam.
So, what’s the lesson to learn? Unfortunately, I can not (and will not) offer any black-hat SEO here: nothing has really changed. Google likes sites that get links from authority sites. Google is probably making it harder to fake being an authority site. They don’t like it if you get your link from that poor, unmaintained, heavily spammed link directory of university department x. They like it, however, if you get that same link from the hard-to-obtain spot on that same university’s home page. The same applies to other authority sites. I guess the bar has risen in this area.
For us webmasters, it means that it is even more important to try to get links from high-profile sites. Sounds surprising? I hope not… I know it is hard to do, but I like the idea that there is a reward for high-quality content. And, of course, black hats will sneak in and find their ways around the new algorithm. But that, too, will not last too long.
If you intend to build a long-lived site, there is no way around creating high quality, unique content. That will bring the best reward in the long term. And, after all, isn’t that why humans (aka visitors) like and visit web sites?