Use subdomains to bypass Google's sandbox

Obviously, there are ways to get a website into the regular results index more quickly. This involves using an existing, established, related domain name (whether you already own it or buy it) and using it to help you get a new site out of the "sandbox."

This technique falls in the gray-to-black-hat range. I want to mention it up front, because if your site has been sandboxed for a period of time, this may be another option for you. It also requires some coding, and it assumes your site is built with PHP, although I imagine similar ASP code exists.

First, let's see how this works: you have an established, related domain and a new domain, and the new domain is "sandboxed." By creating a subdomain on the established site and mirroring the content of the new domain there, you get that content indexed faster, because the subdomain inherits some of the trust of the main domain.

Once the subdomain has established itself, you use some form of redirection (perhaps a 301) to send the crawler to the new domain. The new domain then inherits the link popularity and trust that the subdomain received from the established, trusted domain.

It sounds simple, but there are a few things you need to do.

The first, obviously, is to find an established domain. If you can buy an expired but still relevant website (and it is in your budget), the author recommends doing so. The author also recommends not changing the registration information (changing it would push this into the dark-gray range).

You don't want to change the registrar information because Google may notice the change in ownership, and any trust the domain earned before you purchased it may be lost.

Suppose you just bought a relevant domain name that has existed for several years and has a PageRank of 5. By keeping the website intact and not changing the registration information, you essentially ensure that the website maintains its current standing in the engine.

Then create a subdomain on the site. Here you will place mirrored copies of all the content, navigation, and so on. Since the new website has not yet been added to the index, there will be no duplicate-content penalty.
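One simple way to mirror, assuming PHP with `allow_url_fopen` enabled, is to have the subdomain fetch and serve each requested page from the new domain; copying the files over statically works just as well. The domain name below is a placeholder:

```php
<?php
// Hypothetical catch-all script for the mirror subdomain: serve the same
// content the new domain serves for this path. Requires allow_url_fopen.
// 'www.newdomain.example' stands in for your actual new domain.
$path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);
readfile('http://www.newdomain.example' . $path);
?>
```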

You will also use some PHP code to change the page header information, tricking the web server into reporting that the pages were created earlier than they actually were (the suggested PHP code can be found in the forum post linked above). By telling the web server that these pages are old, you are telling the crawler that these pages are old too.

This is because the crawler requests this information from the web server when indexing.
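The forum's code isn't reproduced here, but a minimal sketch of the idea in PHP might look like this (the backdated timestamp is hypothetical):

```php
<?php
// Hypothetical: report a Last-Modified date years in the past, before any
// content is sent. Crawlers read this header when they index the page.
$fakeTimestamp = mktime(0, 0, 0, 6, 15, 2003); // pretend the page dates from June 2003
header('Last-Modified: ' . gmdate('D, d M Y H:i:s', $fakeTimestamp) . ' GMT');

// ...the rest of the mirrored page output follows...
?>
```

Note that `header()` must be called before any page output, or PHP will refuse to send it.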

Because you have created a brand new section in an established domain, the new section will be indexed faster than the new domain.

It will inherit link popularity and trust from the parent domain, allowing it to establish itself faster than a new site.

Once this subdomain is fully indexed by Google, you need to redirect it to the new domain.
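A 301 in PHP is a couple of `header()` calls. This sketch (the new domain is again a placeholder) would sit at the top of every mirrored page once you are ready to redirect:

```php
<?php
// Permanent (301) redirect from the subdomain page to the same path on the
// new domain. 'www.newdomain.example' is a placeholder for the real domain.
header('HTTP/1.1 301 Moved Permanently');
header('Location: http://www.newdomain.example' . $_SERVER['REQUEST_URI']);
exit;
?>
```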

By doing this, you let Google find the content, and Google will assume these pages have aged properly, because the web server tells it the pages are old (even though you actually created them recently).

By redirecting the subdomain, the link popularity and trust the main domain gave the subdomain are passed on to the new site.

The reason for this is that the established website has already been trusted by Google. Therefore, votes from trusted sites help show Google that the new site is also trusted.

However, this strategy needs to consider the following points:

Now that it has been widely publicized, I don't think it will take long for Google to recognize this loophole and close it.

Moreover, the entire trustbox patent is based partly on authority and partly on age. So although a page may look very old (because you changed the page headers), Google may choose to count the page's age from the time it first found the page.

In other words, even if the page claims to be one year old, if Googlebot only discovered it yesterday, then to Google it is only one day old. Although the patent does say that "documents are scored based on their corresponding start dates, at least to a certain extent," it also says that Google may treat the relevant date not as the date reported by the page, but as the date it found the page.

Remember, as with any form of blatant manipulation, you may be penalized by Google. And don't forget that Google's engineers visit these forums too, so when new strategies are shared, they are keenly aware that those strategies are designed to circumvent the current algorithms.

Therefore, it is their job to close these loopholes, and they are likely to find ways to penalize the websites that exploit them. Although no one can prove or disprove this theory, I have heard of enough websites being removed from the index for doing things they shouldn't. So while this sounds like a good way to get yourself out of the box as soon as possible, consider the alternative: what if you escaped the sandbox early, but Google caught on in 3 months, 6 months, or longer? Do you think they might decide to roll back any gains your website made if they determine you were involved in such a strategy? Then not only are you back where you started; your situation may be worse than when you began.
