I am very happy to introduce myself as a web promotion expert. I am confident that I can promote any website: after my promotion techniques are applied, your website will be on the first page of Google within 7 days... that's for sure.
I look forward to promoting your website.

Sunday, June 27, 2010

Hacked Gmail Account: Save Your Gmail Account... Rajesh Goutam

Know if your Gmail account has been hacked

If someone opens your Gmail account and sends e-mail without your knowledge, you can now find out: a Gmail feature will tell you where and when your account was opened.

This feature has actually existed in Gmail for some time, but users rarely noticed it. That is why Google recently decided to draw users' attention to the information with a banner.

Security options as well: if you maintain a Gmail account, you will now see this information as a banner at the top of your inbox. The feature also tells you which browser and which IP address were used to open the account.

In addition, Google's other security features are becoming increasingly active. One purpose of these features is to fight abuse on increasingly popular social networks; the other is to help users avoid fraud.

Know where the account was opened: IT experts believe this feature will definitely increase the number of Google users, and that it will benefit both the user and the company. Experts say that if you access the account from the same country every day, then whenever the account is opened at another time, from another place, or by someone else, your Gmail window will show a warning message. The user can thus easily find out whether the account is being hacked from somewhere. By clicking the Details option, the user can also see from which IP address the account was opened.

Keep in mind also: along with this feature, Google reminds users to always log out after checking e-mail and to change their password from time to time. In addition, choose a password that cannot be easily guessed. By taking such precautions, users can keep their accounts safe.

Windows vs. Linux Hosting Server: Rajesh Goutam

The Basics of Linux & Windows:
Linux (and its close relation Unix) and Windows 2000 (and its close cousin Windows NT) are types of software (known as operating systems) that web servers use to do the kind of things that web servers do. You do not need to know either in any real detail to decide which you need, but here are a few guidelines.
Just because you use a Windows desktop PC doesn't mean you have to opt for Windows web hosting (and the reverse is true as well). The operating system you use on your desktop has little to do with your choice of web hosts. As long as you understand how to use your FTP or web publishing software, you can use either operating system.
But what is important is that you know what you want your website to do and what you want to offer on it. This is what will ultimately help determine the type of web hosting that will work best for you. As mentioned earlier, interactive websites usually rely on languages such as ASP, PHP, or Perl.
Linux Web Hosting or Windows 2000 Web Hosting? Make Your Choice!
When it comes to Web hosting, Linux has, for some time, been widely considered the best OS for Web servers. It's typically found to be the most reliable, stable and efficient system and, as such, it's commonly used for the demanding environment of Web and mail servers. Indeed, aalphanet.com runs on the Linux OS precisely because of this traditional stability.
The million-dollar question is which applications you are looking to use for your hosting. Consider the tools and scripting languages you plan to use: if you use PHP, Perl or MySQL, Linux is the way forward. If your apps are Microsoft-specific, then Windows is what you need.
If your site, like most web sites, is what might be termed "brochure-ware" then Linux servers are ideal. By brochure-ware I mean a site that offers the kind of information that in the past might have been provided on paper in the form of brochures, newsletters or data sheets. Brochure-ware sites will offer some interaction through enquiry forms and can certainly incorporate online purchasing and other routine e-commerce functions.
If however your site incorporates an online searchable database or interactive chat facilities then Windows 2000 or NT will be a better bet in most cases. It will cost a bit more but you'll get that back in reduced development time and simply better functionality.
The following are the advantages of using a Linux-based web server compared to a Windows-based web server:
Stable: Linux/Unix operating systems have traditionally been considered very stable and robust. A web site housed on a Linux operating system will have very high uptime (of the order of 99.9%). Of course, other factors such as power supply, network admin skills, and network load also matter when it comes to maintaining system uptime.
Low cost of ownership: The Linux OS comes free of cost (or at a very insignificant cost, usually the cost of distribution). It also comes with full-fledged server and desktop applications free along with the OS. These server applications (such as FTP, web server, DNS server, and file server), besides being free, are also very stable.
Ease of use: When it comes to web hosting, it is easy to host on Linux web servers. The process of uploading and hosting is almost the same for both Linux and Windows web servers. If you want to use a Windows-based tool such as FrontPage for uploading a web site to a Linux-based web server, make sure that the FrontPage extensions are enabled. This is only required if you are uploading using the HTTP feature (http://www.yourwebsite.com) of FrontPage. FrontPage also makes it possible to upload a web site using FTP; you need to select ftp://www.yourwebsite.com when uploading using the FrontPage FTP option. Note that if you select "FrontPage Extensions" during web site design, you must enable FrontPage extensions on the Linux web server as well. These days, most Linux web servers come with installable FrontPage extensions, so this should pose no problem for hosting on a Linux platform.
You can use almost all types of file extensions (or scripts) when using a Linux web server. Commonly, the following extensions are supported:
.cgi, .html, .htm, .pl, .php, .shtml, .xml, and others.
Basically, it means that you can host web sites that use different types of server-side scripts, including .cgi, .pl, .php, and .asp (with a plug-in).
Easy to move between hosts: A web site designed to be hosted on a Linux-based web server can be hosted on a Windows web server easily, whereas the reverse is not always true.
Most widely used: Linux/Unix-based web hosting is more widely used than Windows-based web hosting.
Scalability: A web site is dynamic. Usually, a web site starts with a few pages of HTML and grows over a period of time to suit the customer's requirements. It is preferable to design a web site keeping these requirements in mind. A web site designed for compatibility with a Linux/Unix-based web server meets the scalability requirement easily, without site-wide design changes.
On the downside, a Linux-based web server is not fully compatible with Microsoft technologies. If you are using any specialized Microsoft applications or VB for the development of your web site, it is preferable to host on a Windows-based web server.

Webmaster guidelines

Following these guidelines will help Google find, index, and rank your site. Even if you choose not to implement any of these suggestions, we strongly encourage you to pay very close attention to the "Quality Guidelines," which outline some of the illicit practices that may lead to a site being removed entirely from the Google index or otherwise penalized. If a site has been penalized, it may no longer show up in results on Google.com or on any of Google's partner sites.

Design and content guidelines

  • Make a site with a clear hierarchy and text links. Every page should be reachable from at least one static text link (see the sketch after this list).
  • Offer a site map to your users with links that point to the important parts of your site. If the site map is larger than 100 or so links, you may want to break the site map into separate pages.
  • Create a useful, information-rich site, and write pages that clearly and accurately describe your content.
  • Think about the words users would type to find your pages, and make sure that your site actually includes those words within it.
  • Try to use text instead of images to display important names, content, or links. The Google crawler doesn't recognize text contained in images. If you must use images for textual content, consider using the "ALT" attribute to include a few words of descriptive text.
  • Make sure that your <title> elements and ALT attributes are descriptive and accurate.
  • Check for broken links and correct HTML.
  • If you decide to use dynamic pages (i.e., the URL contains a "?" character), be aware that not every search engine spider crawls dynamic pages as well as static pages. It helps to keep the parameters short and the number of them few.
  • Keep the links on a given page to a reasonable number (fewer than 100).
  • Review our image guidelines for best practices on publishing images.
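
As a rough sketch of the first point (all filenames and link targets here are hypothetical), a page with a clear heading hierarchy, static text links, and descriptive alt text might look like this:

    <h1>Travel photo gallery</h1>
    <h2>Destinations</h2>
    <ul>
      <li><a href="hawaii.html">Hawaii photos</a></li>
      <li><a href="ireland.html">Ireland photos</a></li>
    </ul>
    <img src="waikiki-beach.jpg" alt="Sunset over Waikiki Beach" width="400" height="300"/>

Every page in the gallery is reachable through the plain text links in the list, so a crawler that ignores scripts and images can still find them.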

With image search, just as with web search, Google's goal is to provide the best and most relevant search results to our users. Following the best practices listed below (as well as our usual webmaster guidelines) will increase the likelihood that your images will be returned in those search results.

Don't embed text inside images

Search engines can't read text embedded in images. If you want search engines to understand your content, keep it in regular HTML.

Tell us as much as you can about the image

Give your images detailed, informative filenames

The filename can give Google clues about the subject matter of the image. Try to make your filename a good description of the subject matter of the image. For example, my-new-black-kitten.jpg is a lot more informative than IMG00023.JPG. Descriptive filenames can also be useful to users: If we're unable to find suitable text in the page on which we found the image, we'll use the filename as the image's snippet in our search results.

The alt attribute is used to describe the contents of an image file. It's important for several reasons:

  • It provides Google with useful information about the subject matter of the image. We use this information to help determine the best image to return for a user's query.
  • Many people—for example, users with visual impairments, or people using screen readers or who have low-bandwidth connections—may not be able to see images on web pages. Descriptive alt text provides these users with important information.

Not so good:

    <img src="puppy.jpg" alt=""/>

Better:

    <img src="puppy.jpg" alt="puppy"/>

Best:

    <img src="puppy.jpg" alt="Dalmatian puppy playing fetch"/>

To be avoided:

    <img src="puppy.jpg" alt="dog pup pups puppies doggies pups litter puppies dog retriever
    labrador wolfhound setter pointer puppy jack russell terrier
    puppies dog food cheap dogfood puppy food"/>

Filling alt attributes with keywords ("keyword stuffing") results in a negative user experience, and may cause your site to be perceived as spam. Instead, focus on creating useful, information-rich content that uses keywords appropriately and in context. We recommend testing your content by using a text-only browser such as Lynx.

Anchor text

External anchor text (the text pages use to link to your site) reflects how other people view your pages. While typically webmasters can't control how other sites link to theirs, you can make sure that anchor text you use within your own site is useful, descriptive, and relevant. This improves the user experience and helps the user understand the link's destination. For example, you might link to a page of vacation photos like this: Photos of our June 2008 trip to Ireland.
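
Rendered as markup, that example link might look like the following (the URL is a hypothetical placeholder):

    <a href="http://www.example.com/photos/ireland-june-2008.html">Photos of our June 2008 trip to Ireland</a>

A user (and a search engine) can tell from the anchor text alone what the destination page is about.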

Provide good context for your image

The page the image is on, and the content around the image (including any captions or image titles), provide search engines with important information about the subject matter of your image. For example, if you have a picture of a polar bear on a page about home-grown tomatoes, you'll be sending a confusing message to the search engines about the subject matter of polarbear.jpg.

Wherever possible, it's a good idea to make sure that images are placed near the relevant text. In addition, we recommend providing good, descriptive titles and captions for your images.
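
As an illustrative sketch (the filename and copy are invented for this example), an image supported by nearby relevant text and a caption might look like this:

    <h2>Growing tomatoes at home</h2>
    <p>Our beefsteak tomatoes ripened in late July this year.</p>
    <img src="homegrown-beefsteak-tomatoes.jpg"
         alt="Ripe home-grown beefsteak tomatoes on the vine"
         width="400" height="300"/>
    <p>Home-grown beefsteak tomatoes, photographed in July.</p>

Here the heading, surrounding text, filename, alt text, and caption all reinforce the same subject matter.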

Think about the best ways to protect your images

Because images are often copied by users, Google often finds multiple copies of the same image online. We use many different signals to identify the original source of the image, and you can help by providing us with as much information as you can. In addition, the information you give about an image tells us about its content and subject matter.

Webmasters are often concerned about the unauthorized use of their images. If you prevent users from using your images on their site, or linking to your images, you'll prevent people from using your bandwidth, but you are also limiting the potential audience for your images and reducing their discoverability by search engines.

One solution is to allow other people to use your images, but require attribution and a link back to your own site. There are several ways you can do this. For example, you can:

  • Make your images available under a license that requires attribution, such as a Creative Commons Attribution license.
  • Provide an HTML snippet that other people can use to embed your image on their page while providing attribution. This snippet can include both the link to the image and a link to the source page on your site, as in the sketch below.
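
One possible shape for such an embed snippet (all URLs here are hypothetical placeholders):

    <a href="http://www.example.com/photos/dalmatian-puppy.html">
      <img src="http://www.example.com/images/dalmatian-puppy.jpg"
           alt="Dalmatian puppy playing fetch"/>
    </a>
    <p>Photo courtesy of example.com</p>

Anyone pasting this onto their page displays your image, credits you, and links back to the page the image came from.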

Similarly, some people add copyright text, watermarks, or other information to their images. This won't impact your image's performance in search results, but may not provide the kind of user experience you are looking for.

If you don't want search engines to crawl your images, we recommend using a robots.txt file to block access to your images.
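
For example, assuming your images live in an /images/ directory, a robots.txt entry like the following blocks Google's image crawler while leaving the rest of the site crawlable:

    User-agent: Googlebot-Image
    Disallow: /images/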

Create a great user experience

Great image content is an excellent way to build traffic to your site. We recommend that when publishing images, you think carefully about creating the best user experience you can.

  • Good-quality photos appeal to users more than blurry, unclear images. In addition, other webmasters are much more likely to link to a good-quality image, which can increase visits to your site. Crisp, sharp images will also appear better in the thumbnail versions we display in our search results, and may therefore be more likely to be clicked on by users.
  • Even if your image appears on several pages on your site, consider creating a standalone landing page for each image, where you can gather all its related information. If you do this, be sure to provide unique information—such as descriptive titles and captions—on each page. You could also enable comments, discussions, or ratings for each picture.
  • Not all users scroll to the bottom of a page, so consider putting your images high up on the page where they can be seen immediately.
  • Consider structuring your directories so that similar images are saved together. For example, you might have one directory for thumbnails and another for full-size images; or you could create separate directories for each category of images (for example, you could create separate directories for Hawaii, Ghana, and Ireland under your Travel directory). If your site contains adult images, we recommend storing these in one or more directories separate from the rest of the images on your site.
  • Specify a width and height for all images. A web browser can begin to render a page even before images are downloaded, provided that it knows the dimensions to wrap non-replaceable elements around. Specifying these dimensions can speed up page loading and improve the user experience.
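
As a minimal sketch of the last point (the dimensions are illustrative), width and height attributes let the browser reserve space for the image before it has downloaded:

    <img src="dalmatian-puppy.jpg" alt="Dalmatian puppy playing fetch" width="400" height="300"/>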

With image search, just as with web search, our goal is to provide the best and most relevant search results to our users. Following the best practices listed above will increase the likelihood that your images will be returned in those search results.

Technical guidelines

  • Use a text browser such as Lynx to examine your site, because most search engine spiders see your site much as Lynx would. If fancy features such as JavaScript, cookies, session IDs, frames, DHTML, or Flash keep you from seeing all of your site in a text browser, then search engine spiders may have trouble crawling your site.
  • Allow search bots to crawl your sites without session IDs or arguments that track their path through the site. These techniques are useful for tracking individual user behavior, but the access pattern of bots is entirely different. Using these techniques may result in incomplete indexing of your site, as bots may not be able to eliminate URLs that look different but actually point to the same page.
  • Make sure your web server supports the If-Modified-Since HTTP header. This feature allows your web server to tell Google whether your content has changed since we last crawled your site. Supporting this feature saves you bandwidth and overhead (see the sketch after this list).
  • Make use of the robots.txt file on your web server. This file tells crawlers which directories can or cannot be crawled. Make sure it's current for your site so that you don't accidentally block the Googlebot crawler. Visit http://www.robotstxt.org/faq.html to learn how to instruct robots when they visit your site. You can test your robots.txt file to make sure you're using it correctly with the robots.txt analysis tool available in Google Webmaster Tools.
  • If your company buys a content management system, make sure that the system creates pages and links that search engines can crawl.
  • Use robots.txt to prevent crawling of search results pages or other auto-generated pages that don't add much value for users coming from search engines.
  • Test your site to make sure that it appears correctly in different browsers.
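
To illustrate the If-Modified-Since point above (the host and dates are hypothetical), a crawler sends a conditional request, and a server that supports the header can answer 304 Not Modified with no body instead of re-sending the whole page:

    GET /index.html HTTP/1.1
    Host: www.example.com
    If-Modified-Since: Sat, 26 Jun 2010 08:00:00 GMT

    HTTP/1.1 304 Not Modified
    Date: Sun, 27 Jun 2010 09:30:00 GMT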

Quality guidelines

These quality guidelines cover the most common forms of deceptive or manipulative behavior, but Google may respond negatively to other misleading practices not listed here (e.g. tricking users by registering misspellings of well-known websites). It's not safe to assume that just because a specific deceptive technique isn't included on this page, Google approves of it. Webmasters who spend their energies upholding the spirit of the basic principles will provide a much better user experience and subsequently enjoy better ranking than those who spend their time looking for loopholes they can exploit.

If you believe that another site is abusing Google's quality guidelines, please report that site at https://www.google.com/webmasters/tools/spamreport. Google prefers developing scalable and automated solutions to problems, so we attempt to minimize hand-to-hand spam fighting. The spam reports we receive are used to create scalable algorithms that recognize and block future spam attempts.

Quality guidelines - basic principles

  • Make pages primarily for users, not for search engines. Don't deceive your users or present different content to search engines than you display to users, which is commonly referred to as "cloaking."
  • Avoid tricks intended to improve search engine rankings. A good rule of thumb is whether you'd feel comfortable explaining what you've done to a website that competes with you. Another useful test is to ask, "Does this help my users? Would I do this if search engines didn't exist?"
  • Don't participate in link schemes designed to increase your site's ranking or PageRank. In particular, avoid links to web spammers or "bad neighborhoods" on the web, as your own ranking may be affected adversely by those links.
  • Don't use unauthorized computer programs to submit pages, check rankings, etc. Such programs consume computing resources and violate our Terms of Service. Google does not recommend the use of products such as WebPosition Gold™ that send automatic or programmatic queries to Google.

Quality guidelines - specific guidelines

  • Avoid hidden text or hidden links.
  • Don't use cloaking or sneaky redirects.
Cloaking, sneaky JavaScript redirects, and doorway pages

Cloaking

Cloaking refers to the practice of presenting different content or URLs to users and search engines. Serving up different results based on user agent may cause your site to be perceived as deceptive and removed from the Google index.

Some examples of cloaking include:

  • Serving a page of HTML text to search engines, while showing a page of images or Flash to users.
  • Serving different content to search engines than to users.

If your site contains elements that aren't crawlable by search engines (such as rich media files other than Flash, JavaScript, or images), you shouldn't provide cloaked content to search engines. Rather, you should consider visitors to your site who are unable to view these elements. For instance:

  • Provide alt text that describes images for visitors with screen readers or images turned off in their browsers.
  • Provide the textual contents of JavaScript in a noscript tag.

Ensure that you provide the same content in both elements (for instance, provide the same text in the JavaScript as in the noscript tag); including substantially different content in the alternate element may cause Google to take action on the site.
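
A minimal sketch of the last point (the message text is illustrative): the noscript tag carries exactly the same content that the JavaScript writes, so users and search engines see the same thing.

    <script type="text/javascript">
      document.write("Our store is open Monday to Friday, 9am to 6pm.");
    </script>
    <noscript>
      Our store is open Monday to Friday, 9am to 6pm.
    </noscript>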

Sneaky JavaScript redirects

When Googlebot indexes a page containing JavaScript, it will index that page, but it cannot follow or index any links hidden in the JavaScript itself. Use of JavaScript is an entirely legitimate web practice; use of JavaScript with the intent to deceive search engines is not. For instance, placing different text in JavaScript than in a noscript tag violates our webmaster guidelines because it displays different content for users (who see the JavaScript-based text) than for search engines (which see the noscript-based text). Along the same lines, it violates the webmaster guidelines to embed a link in JavaScript that redirects the user to a different page with the intent to show the user a different page than the search engine sees. When a redirect link is embedded in JavaScript, the search engine indexes the original page rather than following the link, whereas users are taken to the redirect target. Like cloaking, this practice is deceptive because it displays different content to users and to Googlebot, and can take a visitor somewhere other than where they intended to go.

Note that the placement of links within JavaScript is not by itself deceptive. When examining JavaScript on your site to ensure it adheres to our guidelines, consider the intent.

Keep in mind that since search engines generally can't access the contents of JavaScript, legitimate links within JavaScript will likely be inaccessible to them (as well as to visitors without JavaScript-enabled browsers). You might instead keep links outside of JavaScript, or replicate them in a noscript tag.

Doorway pages (pages created just for search engines)

Doorway pages are typically large sets of poor-quality pages where each page is optimized for a specific keyword or phrase. In many cases, doorway pages are written to rank for a particular phrase and then funnel users to a single destination.

Whether deployed across many domains or established within one domain, doorway pages tend to frustrate users and are in violation of our webmaster guidelines.

Google's aim is to give our users the most valuable and relevant search results. Therefore, we frown on practices that are designed to manipulate search engines and deceive users by directing them to sites other than the ones they selected, and that provide content solely for the benefit of search engines. Google may take action on doorway sites and other sites making use of these deceptive practices, including removing them from the Google index.

If your site has been removed from our search results, review our webmaster guidelines for more information. Once you've made your changes and are confident that your site no longer violates our guidelines, submit your site for reconsideration.
  • Don't send automated queries to Google.
  • Don't load pages with irrelevant keywords.
  • Don't create multiple pages, subdomains, or domains with substantially duplicate content.
  • Don't create pages with malicious behavior, such as phishing or installing viruses, trojans, or other badware.
  • Avoid "doorway" pages created just for search engines, or other "cookie cutter" approaches such as affiliate programs with little or no original content.
  • If your site participates in an affiliate program, make sure that your site adds value. Provide unique and relevant content that gives users a reason to visit your site first.

If you determine that your site doesn't meet these guidelines, you can modify your site so that it does and then submit your site for reconsideration.

Google Basics

When you sit down at your computer and do a Google search, you're almost instantly presented with a list of results from all over the web. How does Google find web pages matching your query, and determine the order of search results?

In the simplest terms, you could think of searching the web as looking in a very large book with an impressive index telling you exactly where everything is located. When you perform a Google search, our programs check our index to determine the most relevant search results to be returned ("served") to you.

The three key processes in delivering search results to you are:

Crawling: Does Google know about your site? Can we find it?

Indexing: Can Google index your site?

Serving: Does the site have good and useful content that is relevant to the user's search?

Crawling

Crawling is the process by which Googlebot discovers new and updated pages to be added to the Google index.

We use a huge set of computers to fetch (or "crawl") billions of pages on the web. The program that does the fetching is called Googlebot (also known as a robot, bot, or spider). Googlebot uses an algorithmic process: computer programs determine which sites to crawl, how often, and how many pages to fetch from each site.

Google's crawl process begins with a list of web page URLs, generated from previous crawl processes, and augmented with Sitemap data provided by webmasters. As Googlebot visits each of these websites it detects links on each page and adds them to its list of pages to crawl. New sites, changes to existing sites, and dead links are noted and used to update the Google index.

Google doesn't accept payment to crawl a site more frequently, and we keep the search side of our business separate from our revenue-generating AdWords service.

Indexing

Googlebot processes each of the pages it crawls in order to compile a massive index of all the words it sees and their location on each page. In addition, we process information included in key content tags and attributes, such as Title tags and ALT attributes. Googlebot can process many, but not all, content types. For example, we cannot process the content of some rich media files or dynamic pages.

Serving results

When a user enters a query, our machines search the index for matching pages and return the results we believe are the most relevant to the user. Relevancy is determined by over 200 factors, one of which is the PageRank for a given page. PageRank is the measure of the importance of a page based on the incoming links from other pages. In simple terms, each link to a page on your site from another site adds to your site's PageRank. Not all links are equal: Google works hard to improve the user experience by identifying spam links and other practices that negatively impact search results. The best types of links are those that are given based on the quality of your content.
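
For illustration, the classic PageRank formula from Brin and Page's original paper (Google's live ranking combines it with many other signals) expresses a page's score in terms of the pages that link to it:

    PR(A) = (1 - d) + d \left( \frac{PR(T_1)}{C(T_1)} + \cdots + \frac{PR(T_n)}{C(T_n)} \right)

where T_1, ..., T_n are the pages linking to page A, C(T_i) is the number of outbound links on page T_i, and d is a damping factor, typically set to 0.85. Intuitively, a link from a page passes along a share of that page's own importance.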

In order for your site to rank well in search results pages, it's important to make sure that Google can crawl and index your site correctly. Our Webmaster Guidelines outline some best practices that can help you avoid common pitfalls and improve your site's ranking.

Google's Related Searches, Spelling Suggestions, and Google Suggest features are designed to help users save time by displaying related terms, common misspellings, and popular queries. Like our google.com search results, the keywords used by these features are automatically generated by our web crawlers and search algorithms. We display these suggestions only when we think they might save the user time. If a site ranks well for a keyword, it's because we've algorithmically determined that its content is more relevant to the user's query.

Verify that your site ranks for your domain name: do a Google search for www.[yourdomain].com.

Check that your site is in the Google index: do a Google search for site:[yourdomain].com.


Rajesh Goutam

www.rajeshgoutam.com
