Trust and Authority: Two Different Things in Search Engines
A WebmasterWorld thread analyzes the difference between trust and authority in Google. It's a good thread to discuss, so please join in here: www.webmasterworld.com/google/3753332.htm
"While studying Google's recently granted [url=http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&u=%2Fnetahtml%2FPTO%2Fsearch-adv.htm&r=1&p=1&f=G&l=50&d=PTXT&S1=7,346,839.PN.&OS=pn/7,346,839&RS=PN/7,346,839]Historical Data patent[/url], I noticed that the language helps to separate two concepts that we tend to use casually at times: trust and authority.
...links may be weighted based on how much the documents containing the links are trusted (e.g., government documents can be given high trust). Links may also, or alternatively, be weighted based on how authoritative the documents containing the links are (e.g., authoritative documents may be determined in a manner similar to that described in [url=http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&u=%2Fnetahtml%2FPTO%2Fsearch-adv.htm&r=1&p=1&f=G&l=50&d=PTXT&S1=6,285,999.PN.&OS=pn/6,285,999&RS=PN/6,285,999]U.S. Pat. No. 6,285,999)[/url].
Clearly, Google has two different metrics going on. As you can see from the reference to Larry Page's original patent, authority in Google's terminology comes from backlinks. When lots of other websites link to your website, you become more and more of an authority.
But that isn't to say you've got trust. So what exactly is trust? Here's an interesting section from the same patent:
...search engine 125 may monitor one or a combination of the following factors: (1) the extent to and rate at which advertisements are presented or updated by a given document over time; (2) the quality of the advertisers (e.g., a document whose advertisements refer/link to documents known to search engine 125 over time to have relatively high traffic and trust, such as amazon.com, may be given relatively more weight than those documents whose advertisements refer to low traffic/untrustworthy documents, such as a pornographic site);
So we've got two references here: government documents and high traffic! From other reading, I'm pretty sure that trust calculations work like this - at least in part. Google starts with a hand-picked "seed list" of trusted domains. Then trust calculations can be made that flow from those domains through their links.
If a website has a direct link from a trust-seed document, that's the next best thing to being chosen as a seed document itself. Lots of trust flows from that link.
If a document is two clicks away from a seed document, that's pretty good and a decent amount of trust still flows through - and so on. This is the essence of "TrustRank" - a concept described in [url=http://dbpubs.stanford.edu:8090/pub/2004-17]this paper by Stanford University and Yahoo! researchers[/url].
This approach to calculating trust has been refined by the original authors to include "negative seeds" - that is, sites that are known to exist for spamming purposes. The measurements are intended to identify artificially inflated PageRank scores. See this PDF from Stanford: [url=http://dbpubs.stanford.edu:8090/pub/showDoc.Fulltext?lang=en&doc=2005-33&format=pdf&compression=&name=2005-33.pdf]Link Spam Detection[/url]
To what degree Google follows this exact approach for calculating trust is unknown, but it's a good bet that they share the same basic ideas. "
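To make the seed-and-propagation idea in the quoted post a bit more concrete, here's a minimal sketch of TrustRank-style trust propagation in Python. It's just the biased-PageRank formulation from the Stanford/Yahoo paper; the link graph, seed list, damping factor, and iteration count are made up for illustration and are certainly not Google's actual implementation.

```python
# Minimal TrustRank sketch: trust is injected only at hand-picked seed
# pages and propagated through outlinks with a damping factor, i.e. a
# PageRank computation biased toward the seed set.

def trustrank(graph, seeds, damping=0.85, iterations=50):
    """graph: dict mapping each page to the list of pages it links to.
    Every page should appear as a key, even if it has no outlinks."""
    pages = list(graph)
    # Trust enters the system only at the seed pages (split evenly here).
    seed_weight = {p: (1.0 / len(seeds) if p in seeds else 0.0) for p in pages}
    trust = dict(seed_weight)

    for _ in range(iterations):
        # Each round: a fresh injection at the seeds plus trust flowing
        # out of every page, split across its outlinks.
        new_trust = {p: (1 - damping) * seed_weight[p] for p in pages}
        for page, outlinks in graph.items():
            if not outlinks:
                continue
            share = damping * trust[page] / len(outlinks)
            for target in outlinks:
                new_trust[target] += share
        trust = new_trust
    return trust


# Hypothetical link graph: "gov" is a trusted seed, "blog" gets a direct
# link from it, "shop" is two clicks away, and nothing links to "spam".
graph = {
    "gov":  ["blog"],
    "blog": ["shop", "gov"],
    "shop": ["blog"],
    "spam": ["blog"],
}
print(trustrank(graph, seeds={"gov"}))
```

Running this, "blog" picks up a lot of trust from its direct seed link, "shop" gets a smaller amount that arrived two clicks out, and "spam" ends up with zero because no trusted page links to it - the distance-based decay described above.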
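The "negative seeds" refinement can be sketched in a similar way. One simple, hypothetical reading is to propagate distrust from known spam pages backwards along links, so that pages linking into spam neighbourhoods accumulate distrust; the Stanford PDF linked above frames the problem in terms of mass estimation rather than exactly this, so treat the function below purely as an illustration of the general idea.

```python
# Rough sketch of distrust propagation from "negative seeds": distrust
# starts at known spam pages and flows backwards along links, so a page
# that links to spam picks up some of that spam's distrust.

def distrust_rank(graph, spam_seeds, damping=0.85, iterations=50):
    """graph: dict mapping each page to the list of pages it links to."""
    # Build the reversed graph once: for each page, who links to it.
    inlinks = {p: [] for p in graph}
    for page, outlinks in graph.items():
        for target in outlinks:
            inlinks[target].append(page)

    pages = list(graph)
    seed_weight = {p: (1.0 / len(spam_seeds) if p in spam_seeds else 0.0)
                   for p in pages}
    distrust = dict(seed_weight)

    for _ in range(iterations):
        new = {p: (1 - damping) * seed_weight[p] for p in pages}
        for page, sources in inlinks.items():
            if not sources:
                continue
            # A page's distrust flows back to the pages that link to it.
            share = damping * distrust[page] / len(sources)
            for source in sources:
                new[source] += share
        distrust = new
    return distrust


# Hypothetical graph: "casino" is a known spam seed, "directory" links to
# it, and "blog" links to nothing spammy.
spam_graph = {
    "directory": ["casino", "blog"],
    "casino":    ["directory"],
    "blog":      [],
}
print(distrust_rank(spam_graph, spam_seeds={"casino"}))
```

Here "directory" accumulates distrust because it links to a known spam page, while "blog" stays clean. A ranking system could then combine the two signals, for example by discounting a page's link-based authority when its distrust outweighs its trust.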
"While studying Google's recently granted [url=http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&u=%2Fnetahtml%2FPTO%2Fsearch-adv.htm&r=1&p=1&f=G&l=50&d=PTXT&S1=7,346,839.PN.&OS=pn/7,346,839&RS=PN/7,346,839]Historical Data patent[/url], I noticed that the language helps to separate two concepts that we tend to use casually at times: trust and authority.
...links may be weighted based on how much the documents containing the links are trusted (e.g., government documents can be given high trust). Links may also, or alternatively, be weighted based on how authoritative the documents containing the links are (e.g., authoritative documents may be determined in a manner similar to that described in [url=http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&u=%2Fnetahtml%2FPTO%2Fsearch-adv.htm&r=1&p=1&f=G&l=50&d=PTXT&S1=6,285,999.PN.&OS=pn/6,285,999&RS=PN/6,285,999]U.S. Pat. No. 6,285,999)[/url].
Clearly, Google has two different metrics going on. As you can see from the reference to Larry Page's original patent, authority in Google's terminology comes from backlinks. When lots of other websites link to your website, you become more and more of an authority.
But that isn't to say you've got trust. So what exactly is trust? Here's an interesting section from the same patent:
...search engine 125 may monitor one or a combination of the following factors: (1) the extent to and rate at which advertisements are presented or updated by a given document over time; (2) the quality of the advertisers (e.g., a document whose advertisements refer/link to documents known to search engine 125 over time to have relatively high traffic and trust, such as amazon.com, may be given relatively more weight than those documents whose advertisements refer to low traffic/untrustworthy documents, such as a pornographic site);
So we've got two references here, government documents and high traffic! From other reading, I'm pretty sure that trust calculations work like this - at least in part. Google starts with a hand picked "seed list" of trusted domains. Then trust calculations can be made that flow from those domains through their links.
If a website has a direct link from a trust-seed document, that's the next best situation to being chosen as a seed document. Lots of trust flows from that link.
If a document is two clicks away from a seed document, that's pretty good and a decent amount of trust flows through - and so on. This is the essence of "trustrank" - a concept described in [url=http://dbpubs.stanford.edu:8090/pub/2004-17]this paper by Stanford University and three Yahoo researchers[/url].
This approach to calculating trust has been refined by the original authors to include "negative seeds" - that is, sites that are known to exist for spamming purposes. The measurements are intended to identify artifically inflated PageRank scores. See this pdf document from Stanford: [url=http://dbpubs.stanford.edu:8090/pub/showDoc.Fulltext?lang=en&doc=2005-33&format=pdf&compression=&name=2005-33.pdf]Link Spam Detection[/url]
To what degree Google follows this exact approach for calculating trust is unknown, but it's a good bet that they share the same basic ideas. "
Labels: Google, Search Engine Marketing