Difference Between Organic Search Results and Paid Listings

Sunday, January 30, 2011

Most search engines these days return two types of results whenever you click Search: paid results and organic (or natural) results.

Natural/Organic: The 'Real' Search Results (Often Called 'SERPs' for Search Engine Results Pages)
Organic search results are listings on search engine results pages that appear because of their relevance to the search terms. This relevance is calculated by criteria such as extent of keyword match and number of sites linking to that website. You can only get a high ranking if your content is seen as relevant and important by the search engines.

Paid: Pure Advertising
This is how the search engines make their money. Advertisers pay the search engines to display their ad whenever someone searches for a word that is related to their product or service. The most widely used form of paid listing is Pay Per Click (PPC), where you pay each time someone clicks on the link in your advertisement. The Google, Yahoo!, and Bing search engines combine advertising and search results on their search results pages. In each case, the ads are designed to look like the search results, except for minor visual distinctions such as their background colour and/or placement on the page.

Figure: Natural/organic search results vs. paid ads
When people use search engines, they normally pay a lot more attention to the natural results than the paid results because they know these results are more relevant (and they know the "Sponsored Links" are simply ads).

Which is more effective?
For most industries, the natural results are significantly more effective:
  • Organic search results are free. Unlike paid campaigns, there is no direct cost involved to get your website listed in organic search results.
  • Organic results are clicked roughly 8.5 times as often as paid listings (excluding searches that return no paid ads at all).

A Helpful Guide - How Search Engines Work

You know how important it is to score high in the SERPs (Search Engine Results Pages). But your site isn't featured in the first three pages, and you don't understand why. It could be that you're confusing the web crawlers trying to index it. How can you find out?

In order to achieve a useful level of website optimization, it is essential to understand how search engines operate and how they arrive at their results.

Search engines work in a number of different ways that directly relate to search engine optimization:

1. Crawling the Web

Search engines run automated programs, called "bots", "spiders", or "crawlers", that use the hyperlink structure of the web to "crawl" the pages and documents that make up the World Wide Web. Spiders can only follow links from one page to another and from one site to another. That is the primary reason why links to your site (inbound links) are so important. Links to your website from other websites give the search engine spiders more "food" to chew on.

Spiders find Web pages by following links from other Web pages, but you can also submit your Web pages directly to a search engine or directory and request a visit by their spider. It can be useful to submit your URL straight to the various search engines; but spider-based engines will usually pick up your site regardless of whether or not you've submitted it to a search engine.
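As a rough sketch of how link-following works, here is a toy Python crawler over a hypothetical in-memory link graph (the page names and links are invented, not taken from any real site). Notice that the "orphan" page, which no other page links to, is never discovered:

```python
from collections import deque

# Hypothetical in-memory link graph standing in for the web.
links = {
    "home": ["about", "products"],
    "about": ["home"],
    "products": ["widget", "home"],
    "widget": [],
    "orphan": [],  # no inbound links, so the crawl never finds it
}

def crawl(start):
    """Breadth-first crawl: pages are reached only by following links."""
    seen, queue, order = {start}, deque([start]), []
    while queue:
        page = queue.popleft()
        order.append(page)
        for target in links[page]:
            if target not in seen:
                seen.add(target)
                queue.append(target)
    return order

visited = crawl("home")
```

Starting from "home", the crawl reaches every linked page but never sees "orphan", which is exactly why inbound links matter so much.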

2. Indexing Documents

Once pages and Web addresses have been crawled, they are sent to the search engine's indexing software. The indexing software extracts information from the documents and stores it in a database. The kind of information indexed depends on the particular search engine. Some index every word in a document; others index the document title only.

3. Processing Queries

When you perform a search by entering keywords, search engines match the query against the index they have built and assemble a web page that lists the results as hypertext links. The index consists of the words in each document, plus pointers to their locations within the documents. This is called an inverted file.
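The inverted file described above can be sketched in a few lines of Python; the document names and text here are invented for illustration:

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Map each word to (document id, word position) pointers."""
    index = defaultdict(list)
    for doc_id, text in docs.items():
        for position, word in enumerate(text.lower().split()):
            index[word].append((doc_id, position))
    return index

def lookup(index, word):
    """Return the ids of documents containing `word`."""
    return sorted({doc_id for doc_id, _ in index.get(word.lower(), [])})

docs = {
    "page1": "organic search results are free",
    "page2": "paid search results are ads",
}
index = build_inverted_index(docs)
```

A query like `lookup(index, "search")` then finds every document containing that word without rescanning the documents themselves.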

4. Ranking Results

Once the search engine has determined which results match the query, the engine's ranking algorithm runs calculations on each of the results to determine which is most relevant to the given query. The results are then sorted on the results pages from most relevant to least relevant so that users can choose which to select.
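As a deliberately simplified sketch of this ranking step (real engines use far more signals), the toy scorer below counts query-term occurrences and sorts results from most to least relevant; the documents are hypothetical:

```python
def rank(query, docs):
    """Score each document by query-term frequency, highest first."""
    terms = query.lower().split()
    scores = {
        doc_id: sum(text.lower().split().count(t) for t in terms)
        for doc_id, text in docs.items()
    }
    # Keep only matching documents, sorted from most to least relevant.
    return sorted((d for d in scores if scores[d] > 0),
                  key=lambda d: scores[d], reverse=True)

docs = {
    "page1": "organic search results are free search listings",
    "page2": "paid ads appear beside organic results",
    "page3": "about our company",
}
results = rank("organic search", docs)
```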

Search engine crawlers may look at a number of different factors when crawling a site. Not every page is indexed by the search engines. Certain types of navigation may hinder or entirely prevent search engines from reaching your website's content:

Speed Bumps & Walls

Complex links and deep site structures with little unique content may serve as "bumps". Data that cannot be accessed by spiderable links qualify as "walls".

Possible "Speed Bumps" for SE Spiders:

*  URLs with 2+ dynamic parameters, e.g. http://www.url.com/page.php?id=4&CK=34rr&User=%Tom% (spiders may be reluctant to crawl complex URLs like this because they often result in errors for non-human visitors).

*  Pages with more than 100 unique links to other pages on the site (spiders may not follow each one).

*  Pages buried more than 3 clicks/links from the home page of a website (unless there are many other external links pointing to the site, spiders will often ignore deep pages).

*  Pages requiring a "Session ID" or Cookie to enable navigation (spiders may not be able to retain these elements as a browser user can).

*  Pages that are split into "frames" can hinder crawling and cause confusion about which pages to rank in the results.
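As a quick sketch, the number of dynamic parameters in a URL (the first "speed bump" above) can be counted with Python's standard library; the URL is the example from the list:

```python
from urllib.parse import urlparse, parse_qs

def dynamic_parameter_count(url):
    """Count the query-string parameters in a URL."""
    return len(parse_qs(urlparse(url).query))

url = "http://www.url.com/page.php?id=4&CK=34rr&User=%Tom%"
count = dynamic_parameter_count(url)  # parameters: id, CK, and User
```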

Possible "Walls" for SE Spiders:

*  Pages accessible only via a select form and submit button.

*  Pages requiring a drop-down menu (an HTML select element) to access them.

*  Documents accessible only via a search box.

*  Documents blocked purposefully (via a robots meta tag or a robots.txt file).

*  Pages requiring a login.

*  Pages that redirect before showing content (search engines call this cloaking or bait-and-switch and may actually ban sites that use this tactic).

The key to ensuring that a site's contents are fully crawlable is to provide direct HTML links to each page you want the search engine spiders to index. Remember that if a page cannot be accessed from the home page (where most spiders are likely to start their crawl), it is likely that it will not be indexed by the search engines. A sitemap can be of tremendous help for this purpose.
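As a side note on the robots.txt file mentioned above, Python's standard library can show how a well-behaved spider decides what it may fetch; the rules and URLs below are hypothetical:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: every crawler is blocked from /private/.
robots_txt = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

allowed = parser.can_fetch("Googlebot", "http://www.example.com/index.html")
blocked = parser.can_fetch("Googlebot", "http://www.example.com/private/data.html")
```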

What Is Google PageRank?

PageRank is a link analysis algorithm, named after Larry Page, used by the Google Internet search engine that assigns a numerical weighting to each element of a hyperlinked set of documents, such as the World Wide Web, with the purpose of "measuring" its relative importance within the set. The algorithm may be applied to any collection of entities with reciprocal quotations and references.

Google describes PageRank:

"PageRank relies on the uniquely democratic nature of the web by using its vast link structure as an indicator of an individual page's value. In essence, Google interprets a link from page A to page B as a vote, by page A, for page B. But, Google looks at more than the sheer volume of votes, or links a page receives; it also analyzes the page that casts the vote. Votes cast by pages that are themselves "important" weigh more heavily and help to make other pages "important"."

In other words, a PageRank results from a "ballot" among all the other pages on the World Wide Web about how important a page is. A hyperlink to a page counts as a vote of support. The PageRank of a page is defined recursively and depends on the number and PageRank metric of all pages that link to it ("incoming links"). A page that is linked to by many pages with high PageRank receives a high rank itself. If there are no links to a web page there is no support for that page.

Google assigns a numeric weighting from 0-10 for each webpage on the Internet; this PageRank denotes a site's importance in the eyes of Google. The PageRank is derived from a theoretical probability value on a logarithmic scale, like the Richter scale. The PageRank of a particular page is roughly based upon the quantity of inbound links as well as the PageRank of the pages providing those links. Other factors, such as the relevance of search words on the page, influence where a page ranks in the results, though they are separate from the PageRank value itself.

PageRank Algorithm:

PageRank is a probability distribution used to represent the likelihood that a person randomly clicking on links will arrive at any particular page. PageRank can be calculated for collections of documents of any size. It is assumed in several research papers that the distribution is evenly divided between all documents in the collection at the beginning of the computational process. The PageRank computations require several passes, called "iterations", through the collection to adjust approximate PageRank values to more closely reflect the theoretical true value.

A probability is expressed as a numeric value between 0 and 1. A 0.5 probability is commonly expressed as a "50% chance" of something happening. Hence, a PageRank of 0.5 means there is a 50% chance that a person clicking on a random link will be directed to the document with the 0.5 PageRank.


Figure: Mathematical PageRanks (out of 100) for a simple network (PageRanks reported by Google are rescaled logarithmically). Page C has a higher PageRank than Page E, even though it has fewer links to it; the link it has is of much higher value. A web surfer who chooses a random link on every page (but with 15% likelihood jumps to a random page on the whole web) will be on Page E 8.1% of the time. (The 15% likelihood of jumping to an arbitrary page corresponds to a damping factor of 85%.) Without damping, all web surfers would eventually end up on Pages A, B, or C, and all other pages would have PageRank zero. Page A is assumed to link to all pages in the web, because it has no outgoing links.
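The iterative computation described above can be sketched in Python. This is a minimal power-iteration version using the usual 0.85 damping factor; the four-page graph is invented for illustration and is not the network in the figure:

```python
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with an even distribution
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            targets = outlinks or pages  # a dangling page "links" everywhere
            share = damping * rank[page] / len(targets)
            for target in targets:
                new_rank[target] += share
        rank = new_rank
    return rank

web = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
ranks = pagerank(web)
```

In this toy web, page C ends up with the highest PageRank because three of the four pages link to it, including pages that themselves accumulate rank.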


Search Engine Optimization (SEO): SEO Basics for Beginners


SEO basics are the simplest and most common principles of Search Engine Optimization. In this article you'll find out how to do basic SEO, how to code search-engine-friendly pages, and how to do basic promotion of your site.

What is SEO & why is it important?

Search engine optimization (SEO) is the art and science of publishing and marketing information that ranks well for valuable keywords in search engines like Google, Yahoo! Search, and Microsoft Live Search.

SEO, or search engine optimization, is important for any website. Essentially, SEO helps to make sure that your website will be found on the Internet. The most important reason for doing SEO is to reach your target audience: having a beautiful, glossy website is of no use if nobody is visiting it. SEO increases the number of visitors to your website, which in turn brings profit to your business.

Search engine optimization consists of the following main steps:

Choosing the Right Keywords

Choosing the right keywords and phrases for your website is an important part of search engine optimization, making your site more visible to search engines and searchers alike. You may think you know what your ideal customer is typing into Google to find your company, but how do you know for sure? Even the slightest variation, such as making a word plural, can cause dramatic differences in search results. The best way to select the right keyword is to imagine what exact phrase a searcher would type to find the information your page offers.

We need to be more specific, which means:

   1. Targeting a more suitable market that is actually looking for what you offer
   2. Competing with fewer websites targeting the same keywords
   3. Optimizing for keywords that people actually use when performing searches

Targeting a suitable market will depend on your website, as well as the products and services you offer. Try to be specific with your keywords, and remember that people no longer use single keyword search phrases - the average search phrase contains 3-5 related words.

The Google AdWords Keyword Tool and the Overture search suggestion tool are great tools that help you choose relevant and popular terms related to your selected key term.

Your URL and Title Tag

The most important factors in search engine ranking are your domain name and title tag. For example, a domain name such as www.web-development-india.com will generally rank higher than www.companyname.com, assuming the two sites have identical keywords and page content.

Your title tag is equally as important as your domain name. Using keywords in your title tag can improve your Google ranking significantly. Trying to achieve a balance of professionalism with keyword density in the title tag however is sometimes a little more difficult.

Search engines display your title tag as the page title in their results, so it is one of the most important on-page elements. Keep it short (5-6 words at most) and place your main keyword at the beginning.

Heading Tags and Keyword Density

Header tags are used in millions of websites across the internet and have become a standard for good design. They are a method of highlighting key pieces of information throughout a webpage, and they come in a range of levels from H1 to H6, with the rendered text size reflecting the level.

These tags should contain your specific keywords along with any other descriptive text relevant to the content of the page. Search engines regard heading tags as valuable pointers to what a web page is all about: text inside a heading tag is treated as more important than the content on the rest of the page. So try to include your most important keyword phrases in heading tags on your page if you can.
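To illustrate how a crawler might pull heading text out of a page, here is a small sketch using Python's built-in HTML parser (the sample markup is invented):

```python
from html.parser import HTMLParser

class HeadingExtractor(HTMLParser):
    """Collects the text inside h1-h6 tags, as a spider might."""
    def __init__(self):
        super().__init__()
        self.headings = []
        self._current = None
        self._buffer = []

    def handle_starttag(self, tag, attrs):
        if tag in {"h1", "h2", "h3", "h4", "h5", "h6"}:
            self._current, self._buffer = tag, []

    def handle_data(self, data):
        if self._current:
            self._buffer.append(data)

    def handle_endtag(self, tag):
        if self._current == tag:
            self.headings.append((tag, "".join(self._buffer).strip()))
            self._current = None

parser = HeadingExtractor()
parser.feed("<h1>SEO Basics</h1><p>intro text</p><h2>Choosing Keywords</h2>")
```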

Keyword density is the percentage of times a keyword or phrase appears on a web page compared to the total number of words on the page. Sprinkling keywords throughout your page content can improve your site's keyword density, which should be between 7% and 10% for each page on your site.
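Keyword density as defined above (occurrences divided by total words) can be computed with a short sketch; the sample text is invented:

```python
import re

def keyword_density(text, keyword):
    """Percentage of words in `text` that equal `keyword` (one word)."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    if not words:
        return 0.0
    return 100.0 * words.count(keyword.lower()) / len(words)

page = "SEO basics: learn SEO and apply SEO techniques on every page"
density = keyword_density(page, "SEO")
```

Here "SEO" appears 3 times in 11 words, a density of about 27%, far above any recommended range.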

Unique Content

Writing search engine friendly content helps your potential customers find you through a search engine. Most search engines use software “spiders” to comb the Web for the most relevant sites, and one way they determine a site’s value is by its written content. The spiders are searching the content for keywords or the words that people type in to the search engines that relate to a particular topic. The good news is that spiders, like your site visitors, are attracted to short, easy-to-understand text that is relevant to what you are selling.

Website Links

Backlinks are links that are directed towards your website, also known as inbound links (IBLs). The number of backlinks is an indication of the popularity or importance of a website. Backlinks are important for SEO because some search engines, especially Google, give more credit to websites that have a good number of quality backlinks, and consider those websites more relevant than others in their results pages for a search query.

Having links from similar sites is very, very useful. It indicates that the competition is voting for you and you are popular within your topical community.


Free SEO Add-ons for Mozilla Firefox

Mozilla Firefox is one of the most widely used internet browsers in the world, and there are many free SEO add-ons available for it. In this post, I'll share some of the best Mozilla Firefox add-ons:

1.) SearchStatus: It displays the Google PageRank, Alexa rank, Compete ranking and SEOmoz Linkscape mozRank anywhere in your browser, along with fast keyword density analyser, keyword/nofollow highlighting, backward/related links, Alexa info and more.


2.) SeoQuake SEO extension - SeoQuake is a Firefox SEO extension aimed primarily at helping webmasters who deal with search engine optimization (SEO) and internet promotion of websites. SeoQuake allows you to obtain and investigate many important SEO parameters of a site.


3.) Flagfox: Displays a country flag depicting the location of the current website's server and provides a multitude of tools such as site safety checks, whois, translation, similar sites, validation, URL shortening, and more...


4.) SEO Doctor: SEO Doctor allows easy SEO diagnosis and problem solving for webmasters.



5.) SEO Status PageRank/Alexa Toolbar: See Google PageRank (PR) and Alexa rank for each website you visit. Simple and customizable. You can see and use everything the toolbar offers, or strip it down to the bare essentials and place it in the status bar. Simple, quick, and very useful.



6.) SEO Workers Web Page SEO Analysis Tool: The SEO Workers Analysis Tool extension allows you to perform a basic analysis of the page in your browser with a single click. The results from the SEO Workers Analysis Tool are structured into the following useful groups:



General status, meta tags listing, meta tags analysis, the page as displayed within search engine results, keywords found in anchor tags, keywords found in image "alt" attribute text, keywords found on the page, URLs found in the page, and headers returned from the server.

7.) Google Global: Google Global is an unobtrusive Firefox extension that allows you to see what the Google search results that you are viewing look like from different geographical locations.



8.) Shareaholic: Shareaholic is the easiest way to share interesting links using Facebook, Twitter, Google Mail, Reader, Bookmarks, Evernote, Bitly, StumbleUpon, and more... from one simple ALL-IN-ONE add-on. This is the ultimate tool for the link sharing addict!


9.) ShowIP: Show the IP address(es) of the current page in the status bar. It also allows querying custom information services by IP (right mouse button) and hostname (left mouse button), like whois and netcraft. Additionally you can copy the IP address to the clipboard.



10.) Echofon for Twitter: Echofon (formerly TwitterFox) notifies you of your friends' tweets on Twitter. Syncs with iPhone.


11.) Foxy SEO Tool: Foxy SEO Tool offers tools for search engine optimization (SEO), web traffic and page analysis for webmasters and web professionals.

Google's new search index Caffeine goes live

Saturday, January 29, 2011
Figure: Google's new Caffeine search index will crawl the Web more frequently and in smaller bites, in hopes of amassing Web content more quickly and comprehensively than before. (Credit: Google)
Google recently updated its indexing system and named it Caffeine. Why the name Caffeine? Here is my answer, so read carefully: the old Google index was not caffeinated, but the new one is. In other words, the old approach was slow and had no kick, while Caffeine is fast, favours freshly written content, and has a real kick to it. So Caffeine determines how Google finds content and how it delivers that content to end users, and the new algorithm really matters to webmasters.
Caffeine promises fast, fresh content delivery: Google claims its results are now around 50% fresher, and instead of a "phased" indexing process, indexing is now closer to real time. So it surely beats phased indexing.
A very important point is that Google has reduced its index size for most categories, so it is expected that most spam sites and blogs will be de-indexed. For example, a search that previously returned 169,000,000 results now returns 98,700,000, so users can now browse a more spam-free set of results.
In other areas Google has increased its index size: some sites, such as hubpages.com and ezinearticles.com, are getting more search results through the Caffeine update.
Google Caffeine also gives more importance to on-page SEO, so it is essential to learn more about on-page SEO.
How can Caffeine affect your site?
In my view, small sites do not have a problem with ranking, but some large sites, such as amazon.com, have been affected by this algorithm change.
Please visit my SEO Tips and SEO Tricks blog again for recent SEO news; I will post a continuation of this article soon.

What is SEO

Wednesday, January 19, 2011

Search engine optimization (SEO) is the process of improving the traffic to a website or webpage from search engines via organic or natural results. All major search engines such as Google, Yahoo, and Bing have such results, where web pages and other content such as videos or local listings are shown and ranked based on what the search engine considers most relevant to users. Payment isn't involved, as it is with paid search ads.
Another SEO definition:
SEO is the process and practice of optimizing your website so that it ranks well on search engine results pages (or SERPs). When someone types a word or phrase into search engines (like Google) looking for your product, you want to appear on the first or second page of the search results.

The main types of SEO
1) On-page Optimization
2) Off-page Optimization
A third type, local SEO, is sometimes listed separately, but it is really just the two types above targeted at local rather than national/international searches.


The Off-Page Optimization process includes:


  • Directory Submission
  • Social Bookmarking Submission
  • Blog Posting
  • Blog Hosting
  • Article Submission
  • Press Release Submission
  • Forum Posting
  • Footer Links
  • Affiliate Marketing
  • Blog Commenting
  • Review Posting
  • Rss Feed Submission
  • Css Submissions
  • Video Submissions
  • Profile Creation
  • Document Submissions
  • Link Wheel
  • Social Media Optimization
  • Link Building (Reciprocal, Three-Way & Authority Links)

Glossary of SEO


Algorithm is the set of programmed rules that determines how a search engine indexes and ranks content.

Alt Attribute is the alternative text description for images.

Anchor Text is the text part of the link. This is used by search engines as an important ranking factor.

Back links are inlinks, inbound links, incoming links (whatever you want to call it) pointing to a web page.

Banned is what happens when a search engine blocks your site from appearing in its SERPs.

Black Hat SEO refers to optimization techniques that are against search engine guidelines.

Blog is a website also known as "weblog". It is an online diary with regular entries of commentary, description, events, or other materials. Entries are commonly displayed in reverse-chronological order.

Bot short for robot.

Cache refers to the copies of web pages stored on an internet user's hard drive or in a search engine's database. This is why web pages load quickly when users hit the back button in their web browsers.

Cloaking is a black hat SEO technique in which the content presented to the search engine spider is different than the content presented to visitors.

Cookie is information stored on a searcher's computer by a web server while accessing a particular webpage.

Directory is where websites are grouped into categories, with descriptions, by a group of human editors.

Forum or message board is a place where discussions on related topics take place. Forums are also used by search engine optimizers and webmasters for information exchange.

Google is the world's number one search engine, founded by Stanford University students Sergey Brin and Larry Page in 1998.

Googlebot is the search bot used by Google to collect documents from the web to build a searchable index for the Google search engine.

Gray Hat SEO is search engine optimization using both Black Hat and White Hat techniques.

Heading tag is an HTML tag often used to denote a section heading of a web page. SE pays attention to text marked with a heading tag.

Home Page is the main page of a website, also known as the front page or web server directory index. The home page usually accumulates the highest PageRank score, since other sites usually link to it the most.

HTML stands for Hyper Text Markup Language. HTML is a computer language used to create pages and content on the web.

Hyperlink is text or a graphic that, when clicked, takes the user to another web page or location.

Inbound links are links that point to your site from other web pages; they are an important asset that can improve your site's PR (PageRank).

Inlinks another synonym for back links.

JavaScript is a programming language most commonly used to add interactive features to webpages.

Key phrase is a search phrase composed of keywords.

Keyword is a word that search engine users enter to find relevant web pages. It is also the term that captures the essence of the topic of a document.

Keyword density is the percentage of times a keyword or key phrase appears on a web page relative to the total number of words on the page.

Keyword popularity is the number of occurrences of searches done by internet users of a given keyword during a period of time.

Keyword research is used in search engine optimization to find and research actual search terms people enter into the search engines.

Keyword stuffing is the act of placing excessive amounts of keywords into a page for the purpose of boosting the page's ranking in the SERPs. Its use is strongly discouraged.

Keyword rich is when a given page is full of good keywords rather than a bunch of meaningless words.

Link bait or Link baiting is one of the great ways to get backlinks. This is something on your site that naturally attracts backlinks from other web pages.

Link farm is a group of websites that all hyperlink to every other site in the group for the purpose of inflating link popularity or Page Rank. Participating in a link farm could get your site banned by search engines.

Link Popularity is the number and the quality of the inbound links pointing to a given web page.

Links are text or graphics an internet user clicks to get to another web page or location.

Meta description is a hidden HTML code that describes the content of a web page in search results. It should be relatively short; around 12 to 20 words is suggested.

Meta keywords is a hidden HTML code pertaining to the web page content. It is now ignored by most of the search engines because search engine spammers have abused this tag.

Nav bar stands for navigation bar. It is a website's set of navigation links, usually arranged down the left-hand side or along the top. It plays a crucial role in getting a site's visitors to go deeper into the site and in directing spiders to the site's most important content.

Outbound links are links from your web pages to web pages on a different domain.

Page Rank (PR) is an algorithm used by the Google search engine to measure the authority or relative importance of a web page.

Reciprocal links is the practice of trading links between web sites.

Search Engine is a website that lets its visitors search for information on the World Wide Web.

Search Engine Marketing (SEM) comprises strategies and tactics that seek to promote websites by increasing their visibility in search engines.

Search Engine Optimization (SEO) is the process of getting high search placement in the organic or natural listings of search engines.

Search Engine Results Page (SERP) is a page where the results of a search query are listed.

Spam is a manipulation technique that violates search engine guidelines.

Spider is a robot sent out by search engines to catalog websites on the internet. It is also known as a bot or crawler.

Tagging is assigning a non-hierarchical keyword to a piece of information, such as a bookmark, that helps describe the item and allows it to be found by searching or browsing.

Title tag is one of the most important bits of text on a web page as far as search engines are concerned. Its contents appear at the very top of the browser window, above "back", "forward", and "refresh".

Traffic is the number of times a website is viewed by a unique visitor within a specified time.

Unethical SEO refers to search engine optimization tactics that violate a search engine's TOS (terms of service) and can put a site at risk of being penalized or banned.

Unique visitors is a term used to describe the total number of visitors to a site over a certain period of time.

URL stands for Uniform Resource Locator. It is the address of a web page in the World Wide Web.

White Hat SEO refers to the ethical SEO techniques approved by the search engines.

XML stands for Extensible Markup Language. XML is a simple, very flexible text format used to syndicate or format information using technologies such as RSS.

Yahoo is one of the top 3 search engines. It is also one of the oldest and most established directories on the web.

If you know other SEO terminologies that are not included in the list, you may leave them in the comment section and I will be glad to add them.