  1. Google recently made a lot of updates to its search algorithm, but only a few of them were disclosed to the public; most were never announced. My own websites ( I own a few ) and my friends' websites all had good traffic for years, and once we moved to HTTPS, it was all gone. I asked for help on the Google forum and from other SEO webmasters, but nothing helped: no HTML errors, no HTTPS redirect errors, nothing. Everything was perfect. It took 4 weeks to move all our HTTP links to HTTPS. Google's announcement that it gives a rank boost to HTTPS websites is, in my experience, just a big, monstrous lie. Who actually benefits from moving to HTTPS? 1. News websites. 2. E-commerce websites. 3. Websites that collect bank or card details. Note: moving to HTTPS is a risk, and I would not try it right now. Google is still testing things and could roll back if it hits a lot of issues with HTTPS. Drawbacks: 1. You have to update all your backlinks to HTTPS. 2. You never get any authority flow from your HTTP pages to your HTTPS pages. 3. Moving to HTTPS is like buying a new website and trying to get traffic to it from scratch.
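One mechanical chunk of a migration like the 4-week one described above can be scripted: rewriting internal link lists (or sitemap entries) from http:// to https://. A minimal sketch in Python; the example.com URLs and the link list are made-up placeholders, not from the original post:

```python
from urllib.parse import urlsplit, urlunsplit

def to_https(url: str) -> str:
    """Rewrite an http:// URL to https://, leaving other schemes untouched."""
    parts = urlsplit(url)
    if parts.scheme == "http":
        parts = parts._replace(scheme="https")
    return urlunsplit(parts)

# Hypothetical list of internal links to update before regenerating a sitemap
links = ["http://example.com/page", "https://example.com/done", "ftp://example.com/file"]
print([to_https(u) for u in links])
```

Note this only handles your own links; backlinks on other people's sites, as the answer points out, still have to be chased down by hand or caught with server-side redirects.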
  2. Will the DiggBar create duplicate content issues? For example, when the DiggBar address is added in front of my site's address, it shows a page from Digg with exactly the same content as mine. The answer is no. When the DiggBar was originally launched, it could have created duplicate content issues; it could even have resulted in pages from Digg being removed from Google. But Digg has since made improvements, such as adding a meta noindex tag. That means anything on Digg that is one of the framed, shortened URLs carries a noindex tag, which tells Google not to index that page, so Google shows no reference to it in search results. In effect, Google sees two copies, one of which has the meta noindex tag and so will not be shown, and Google should correctly assign all the PageRank and other characteristics to the original content. When the DiggBar was first implemented it was a little rough, but Digg adjusted and iterated quickly and made better changes, so one should not have duplicate content issues. Many people think the DiggBar is rude, but with a search-engine-policy hat on, there is no violation of the search engine quality guidelines, and it should not create duplicate content issues for your site.
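The noindex mechanism described above is just a meta tag in the framed page's head. As an illustration, here is a small sketch, using only Python's standard-library HTML parser, of how such a tag can be detected; the sample page string is invented for the example:

```python
from html.parser import HTMLParser

class NoindexDetector(HTMLParser):
    """Flags pages carrying <meta name="robots" content="...noindex...">."""
    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            d = dict(attrs)
            if d.get("name", "").lower() == "robots" and "noindex" in d.get("content", "").lower():
                self.noindex = True

# Hypothetical framed-copy page, like the shortened-URL frames discussed above
page = '<html><head><meta name="robots" content="noindex, nofollow"></head><body>framed copy</body></html>'
detector = NoindexDetector()
detector.feed(page)
print(detector.noindex)  # True
```

A crawler that honors this tag will drop the framed copy from its index, which is exactly why the duplicate-content concern goes away.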
  3. If one is looking to hire an SEO agency, which one do you recommend? It would not really be appropriate to recommend an SEO agency personally, because agencies may change their policies and we do not know what different agencies might be doing. So the general answer is: search for Google's SEO guidelines and you will find a whole Webmaster Help Center page that explains what to look for when choosing an SEO agency. References should tell you what that particular agency is going to do. If they act as though they will wave some magic smoke and not tell you what they are doing, be a little worried. Even if you find the details boring, they should let you know everything they are about to do. Google has revised its SEO guidelines from being a little controversial to focusing more on how to find a good SEO agency. There are lots of great SEOs out there, so if an SEO does not satisfy you, don't settle for less when you can have more.
  4. Will the new canonical tag help where a site has accidentally been indexed by IP address rather than by hostname? One would have to double-check, but this is the kind of thing the tag is meant to let you do: you would like the hostname to show rather than the IP address. Google does have to consider the concern that content on an IP address is distinct from the same content on the hostname. It would not hurt to go ahead and add the tag. This is exactly the situation where one does not want the IP address showing up and wants the host, the domain name, instead. It would be a nice thing to do, but it is not certain that Google supports the canonical tag for IP addresses yet.
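For concreteness, the canonical tag in question is a single link element in the page head. A minimal sketch of emitting one; the hostname URL and helper name here are illustrative, not from the original answer:

```python
def canonical_link(hostname_url: str) -> str:
    """Build a rel=canonical link element pointing at the preferred hostname URL."""
    return f'<link rel="canonical" href="{hostname_url}">'

# A page served from a raw IP address (hypothetical) could declare
# its hostname version as canonical like this:
print(canonical_link("https://www.example.com/page"))
```

If search engines honor it for the IP-address case, indexing signals should consolidate on the hostname URL rather than the IP-based duplicate.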
  5. Is a website designed with a CSS-based layout more SEO-friendly than one with a table-based layout? One need not worry about this. Google can handle both table-based and CSS-based layouts, and it scores them the same regardless of which layout mechanism is used. It is recommended to use whatever works best for you. Nowadays people tend to like CSS because it makes it easy to change the site and the layout, and it is modular as well, whereas tables have a Web 1.0 kind of connotation to them. But what matters is having the best site; Google will try to find it and rank it without considering whether the layout is table-based or CSS-based.
  6. Is it true that domains registered before 2004 have a totally different way of getting PageRank, i.e. that 'pre-2004' domains are highly desirable because they earn PageRank under old, easier criteria? No, that is completely false. There is no difference between 2004 domains, 2005 domains, and 2006 domains; all domains earn reputation in the same way. There is literally no extra value in buying a pre-2004 domain, a pre-Google-IPO domain, or whatever you want to call it. There is absolutely no difference. Just make sure to get a domain that works well for you; there is no need to think about whether it was created before 2004.
  7. Does the first link on a page matter a lot? Should I ensure that the first link is the one I care about most? If so, should we modify CSS or JavaScript to show that link first? Generally, it is better not to worry about this. If a page has a thousand links, I wouldn't make the important one the thousand-and-first, but there is no special advantage in having it be the first link. Google parses a page and tries to extract hundreds of links to find the relevant ones. As long as the link is somewhere the user can see and click through, and Googlebot can follow it, one should be in good shape. So it is not recommended to bend over backwards with CSS or JavaScript to ensure the most important link appears first; Google tries to find all of those links.
  8. 'Query deserves freshness.' Fact or fiction? It is definitely a fact, not fiction. Amit Singhal talked about it in The New York Times, saying he believed there are some queries that deserve freshness. So 'query deserves freshness' (QDF) really is fact, not fiction.
  10. Will Google find text in images someday? It is easy to say in words, but a big undertaking in reality, though it would be fun at the same time. It would be great if Google crawled the web, found all the images, and ran OCR (optical character recognition) on every image on the web. But to be honest, that is a very tall order, as it involves an enormous amount of work. The notable point is that one should not expect this from Google in the short term.
  11. Will one be penalized for listing every file in an XML Sitemap with the same priority? Definitely not. If one gives the same importance to all files in the XML Sitemap, Google will simply try to work out which ones are really important from its own perspective. So one need not worry about listing a priority for every single URL; Google is not going to apply any sort of scoring penalty. It is completely fine, and optional as well. If one has a reason to assign priorities, that is great, but to be honest it is not a must-have, and leaving it out really will not get one into trouble.
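Since the priority element is optional per URL, a sitemap generator can simply omit it where there is nothing meaningful to say. A minimal sketch with Python's standard library; the URLs are placeholders:

```python
import xml.etree.ElementTree as ET

def build_sitemap(urls):
    """Build a minimal XML sitemap; priority is optional per URL (None = omit)."""
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=ns)
    for loc, priority in urls:
        url = ET.SubElement(urlset, "url")
        ET.SubElement(url, "loc").text = loc
        if priority is not None:  # perfectly valid to leave this element out
            ET.SubElement(url, "priority").text = f"{priority:.1f}"
    return ET.tostring(urlset, encoding="unicode")

# One URL with an explicit priority, one without; both are fine
print(build_sitemap([("https://example.com/", 1.0), ("https://example.com/about", None)]))
```

Either form is accepted; per the answer above, giving every URL the same priority (or none) carries no penalty.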
  12. How does Google calculate the site load times it exposes in Google's webmaster statistics? Is the calculation simply the average time to request and receive the HTML content for a page? Essentially, yes. Googlebot sends out the request and, starting from there, Google measures the time it takes to see the response come back. So it is roughly the end-to-end time to deliver the page, or deliver the data, from the server. Google sees this from Googlebot's point of view; it has no way to measure how long a page takes to deliver for any given user, so only Googlebot's perspective is used.
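The measurement described, from sending the request until the full response is back, averaged over fetches, can be sketched in a few lines. This is an illustrative stand-in, not Google's actual implementation; the fetch callable and sample durations are invented:

```python
import time

def timed_fetch(fetch):
    """Time one fetch end-to-end, crawler-style: from issuing the
    request until the full response body is in hand."""
    start = time.perf_counter()
    body = fetch()                       # stand-in for a real HTTP request
    return body, time.perf_counter() - start

def average_load_time(samples):
    """Aggregate per-request durations (seconds) into a simple average."""
    return sum(samples) / len(samples)

_, elapsed = timed_fetch(lambda: "<html>...</html>")  # hypothetical fetch
print(round(average_load_time([0.12, 0.30, 0.18]), 3))
```

Note this captures server plus network time as seen from the crawler's vantage point, which is why, as the answer says, it cannot speak to any particular user's experience.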
  13. Do you have any specific tips for news sites, which have unique concerns compared with commercial sites? For instance, if it is a developing news story, it is recommended to have one page where all the PageRank can gather. By contrast, some sites run many stories over several days without linking those stories together; linking them keeps readers on track, so fewer people fall through the cracks. Then there is the Wikipedia approach: one page that gets richer and more developed over time. If a news story is over, you can think about moving to a new page, but for a given ongoing story it is better to add updates and more information at the same URL. Beyond that, take a look at the Google News documentation: there are some meta tags and other tags available there that are not available to other people, or that Google News treats specially. You can also give some thought to authorship, which helps in understanding who exactly wrote a particular story or page. If you run a news site, it is worth a little research along those lines.
  14. Can you explain the proposed autocomplete type attribute? Should we add it to web forms? Many websites have forms that ask for the name (first and last), the street address, the postal code, and so on. Visitors usually feel irritated, or too lazy, to fill in these forms. If you are a business owner or a publisher, it is much better to make the forms easy to complete, so that visitors actually make purchases, sign up for the newsletter, or do whatever you are interested in. An easy way to do this is to take the existing web form and use the standard proposed for Google Chrome, called autocomplete type, which lets the browser complete fields without the user needing to type them in full. It does not alter the form elements, i.e. the variable names stay the same; it only adds annotations. By annotating the form fields with the kind of data you expect people to fill in, the browser's autocomplete, Chrome in this case, knows how to fill out the form. So when a Chrome user visits the page, wants to buy something, and starts typing, they see an option to autocomplete: type in the first box and the rest fills in automatically. It makes the form semantically understandable in some sense, and as a result users fly right through it and can sign up, purchase, or whatever. It is highly recommended; it may take some hours of your time, but it is really worth it.
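The early Chrome proposal described above used an experimental attribute; the same idea lives on in the standardized HTML autocomplete tokens. A minimal sketch of an annotated form, where the action URL and the name attributes are made-up examples, and only the autocomplete annotations are added to the otherwise unchanged fields:

```html
<form action="/checkout" method="post">
  <!-- Token values (given-name, family-name, street-address, postal-code)
       are standard HTML autocomplete tokens; the name attributes,
       i.e. the form's variables, stay exactly as they were. -->
  <input type="text" name="fname"  autocomplete="given-name">
  <input type="text" name="lname"  autocomplete="family-name">
  <input type="text" name="street" autocomplete="street-address">
  <input type="text" name="zip"    autocomplete="postal-code">
  <button type="submit">Buy</button>
</form>
```

With annotations like these, a browser that has the user's details stored can offer to fill the whole form from the first keystroke.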
  15. Is it good to keep key content pages close to the root, or to have them deep within a topical funnel structure? The most notable point is that this is not SEO advice but behavioral advice. If the content is a small number of clicks from the root page, visitors will tend to find it. But if somebody has to click ten times to reach the registration page, compared with registering right from the root page, far fewer people will find it when it is that many clicks away. So for Google it is not a big deal where the page sits in the path, whether at the root level or eight levels deep; maybe it matters for other search engines, but not for Google. The only thing that matters is thinking from the visitor's perspective. To be honest, this is not search-engine-ranking advice, just an opinion on how to improve ROI (return on investment).