Search the Community

Showing results for tags 'google'.

Found 31 results

  1. How exactly does a Google search work? When you search on Google, you are actually searching Google's index of the web. That index is built by software programs called spiders, which start by fetching a few web pages, follow the links on those pages to fetch the pages they point to, follow the links on those pages in turn, and so on, until they have built an index covering many billions of pages stored across thousands of machines. For instance, say you want to know about the Apple iPhone 6s and type those words into the search box. Google looks through its index for every page containing those terms, which can yield thousands of candidates, and sorts the most relevant ones to the top using ranking methods that grew out of the approach invented by its founders, Larry Page and Sergey Brin. The factors include: how many times the page contains the keyword(s); whether the keyword(s) appear in the title, in the URL, or directly adjacent to one another; whether the page contains synonyms of the keywords; the quality of the page; and the page's PageRank, a measure of importance computed by looking at how many outside links point to the page and how important those linking pages are. Google combines all of these factors into an overall score for every page and sends the results back in less than a second after you hit Enter. Google takes producing useful and impartial results seriously: it does not accept payment to add a site to its index, crawl it more often, or improve its ranking. Each search result shows a title, a URL, and a snippet of text to help you decide whether the page is what you are really looking for, along with links to similar pages, Google's most recent cached version of the page, and related searches you can try.
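
The link-based importance score mentioned above has a published form. As a point of reference (Google's production ranking uses many more signals than this), the original PageRank formula from Page and Brin's paper defines the rank of a page A in terms of the pages T1...Tn that link to it and a damping factor d:

```latex
% Original published PageRank formula (Page & Brin, 1998).
% d: damping factor, conventionally around 0.85.
% C(T_i): number of outbound links on linking page T_i.
PR(A) = (1 - d) + d \left( \frac{PR(T_1)}{C(T_1)} + \cdots + \frac{PR(T_n)}{C(T_n)} \right)
```
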
  2. Will the DiggBar create duplicate content issues? For example, my site is example.com, and if you put digg.com in front of my site's address (digg.com/example.com), Digg shows a page with exactly my content. The answer is no. When the DiggBar originally launched, it could have created duplicate content issues, and could even have resulted in pages from Digg being removed from Google. But Digg has since made a number of improvements, including adding a meta noindex tag: anything on Digg that is one of these framed shortened URLs now carries a noindex tag, which tells Google not to index that page, so Google shows no reference to it in search results. In effect, Google sees two copies, one of which carries the noindex tag and therefore will not be shown, and the PageRank and other signals should be correctly assigned to the original content. The DiggBar's original implementation warranted a little caution, but Digg adjusted and iterated quickly, and with these changes it should not cause duplicate content issues. Plenty of people consider the DiggBar rude, but with a search engine policy hat on, it does not violate search engine quality guidelines and should not create duplicate content problems for your site.
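
For reference, the noindex directive described above is ordinary robots meta markup. A minimal sketch of what such a framed, shortened-URL page would carry in its <head>:

```html
<!-- Tells search engines not to add this copy of the page to their index. -->
<meta name="robots" content="noindex">
```
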
  3. If one were looking to hire an SEO agency, which one would you recommend? It would not be appropriate to recommend a specific agency, because agencies can change their policies and there is no way to know what every agency is currently doing. The general answer: search for Google's SEO guidelines and you will find a Webmaster Help Center page explaining what to look for when choosing an SEO agency. An agency's references should tell you what that agency is actually going to do. If they act as though they will wave some magic smoke around without telling you what they are doing, be a little worried. Even if you find the details boring, they should be willing to explain everything they are about to do. Google's SEO guidelines have shifted from being somewhat controversial to focusing on how to find a good SEO agency. There are lots of great SEOs out there, so if one does not satisfy you, don't settle for less when you can have more.
  4. Will the new canonical tag help with cases where you have accidentally been indexed by IP address rather than by hostname? This would have to be double-checked, but it is exactly the kind of thing the tag should be able to do: you would want the hostname to show up rather than the IP address. Google would have to treat the IP address as distinct from the hostname, and it would not be a bad idea to go ahead and add the tag. This is the sort of situation where you do not want the IP address showing up in search results; you want your host, your domain name, there instead. It would be a nice thing to support, but it is not certain that Google supports rel=canonical for IP addresses yet.
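
A minimal sketch of the idea, assuming the same page is reachable at both an IP address and a hostname (both URLs below are placeholders): the copy served at the IP address would declare the hostname URL as canonical:

```html
<!-- In the <head> of the duplicate copy at http://203.0.113.7/page.html
     (placeholder IP). Declares the hostname version as the preferred URL. -->
<link rel="canonical" href="http://www.example.com/page.html">
```
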
  5. Is a website designed with a CSS-based layout more SEO-friendly than one with a table-based layout? There is no need to worry about this. Google can handle both table-based and CSS-based layouts and scores them the same regardless of which layout mechanism is used, so use whatever works best for you. These days people tend to prefer CSS because it makes changing the site and its layout easy and because it is modular, whereas tables carry a bit of a Web 1.0 connotation. What matters is having the best site you can; Google will try to find it and rank it without regard to whether the layout is table-based or CSS-based.
  6. Is it true that domains registered before 2004 get PageRank in a totally different way, i.e. that 'pre-2004' domains are highly desirable because they earn PageRank under older, easier criteria? No, this is completely false. There is no difference between 2004 domains, 2005 domains, and 2006 domains; all domains earn reputation in the same way. There is literally no extra value in buying a pre-2004 domain, a pre-Google-IPO domain, or whatever you want to call it. There is absolutely no difference. Just make sure to get a domain that works well for you; whether it was created before 2004 does not matter.
  7. Does the first link on a page matter a lot? Should I make sure the first link points to what I care about most, and if so, should I modify CSS or JavaScript to output the most important link first? Generally, don't worry about it. If a page had a thousand links, I wouldn't make the important one the thousand-and-first, but there is no special advantage in it being the very first link. Google parses a page and tries to extract hundreds of links and find the relevant ones. As long as the link is somewhere a user can see it and click through, and Googlebot can follow it, you should be in good shape. It is not worth bending over backwards with CSS or JavaScript tricks to make the most important link appear first; Google tries to find all of the links on a page.
  8. 'Query deserves freshness.' Fact or fiction? It is definitely fact, not fiction. Amit Singhal discussed it in The New York Times, saying he believed there are some queries that deserve freshness. So Query Deserves Freshness (QDF) really is fact, not fiction.
  9. Will Google find text in images someday? It is easy to say in words but a big undertaking in practice, and it would be fun at the same time. It would be great if Google crawled the web, found all the images, and ran OCR (optical character recognition) on every image on the web. But honestly, that is a very tall order involving an enormous amount of work, so the notable point is that you should not expect this from Google in the short term.
  10. How does Google calculate the site load times it exposes in its webmaster statistics? Is the calculation simply the average time to request and receive the HTML content for a page? Essentially, yes. Googlebot sends out the request, and Google measures the time from that point until it sees the response come back, so it is roughly the end-to-end time to deliver the page data from the server. Google sees this from Googlebot's point of view; it has no way to measure how long a page takes to deliver to any given user, so the measurement reflects only Googlebot's perspective.
  11. Do you have any specific tips for news sites, which have unique concerns compared with commercial sites? For instance, with a developing news story it is recommended to have one page where all the PageRank can gather. By contrast, you will come across outlets that write many stories on the same subject over several days without linking those stories together; linking them keeps readers on track, so you are less likely to lose people through the cracks. Then there is the Wikipedia model, with one page that keeps getting richer and more developed. If a news story is over, you can think about moving on to a new page, but for a given ongoing story it is better to add updates and more information at the same URL. Also take a look at the Google News documentation: there are meta tags and other tags available there that are either not available to other sites or that Google News treats specially (a hedged sketch follows below). Authorship markup, which helps convey who actually wrote a particular story or page, is also worth a thought. If you run a news site, it is worth doing a little research along those lines.
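
As a hedged illustration of the news-specific markup mentioned above, two tags from that era: the news_keywords meta tag documented for Google News publishers, and the rel=author link from Google's authorship program (since discontinued). All values and URLs are placeholders; check the current Google News documentation before relying on either:

```html
<!-- In the <head> of an article page; all values are placeholders. -->
<meta name="news_keywords" content="city council, budget vote, election">
<!-- rel=author pointed at the writer's profile under Google's old
     authorship program, which has since been retired. -->
<link rel="author" href="https://plus.google.com/118251234567890">
```
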
  12. Can you explain the proposed autocomplete type attribute? Should we add it to our web forms? Many websites have forms that ask for a name (first and last), street address, postal code, and so on, and visitors often find filling them out irritating or tedious. If you are a business owner or publisher, it pays to make those forms easy to complete, so that visitors actually make the purchase, sign up for the newsletter, or do whatever else you are interested in. An easy way to do this is to take your existing web form and use the standard Google Chrome has proposed, called autocomplete type, which lets the browser complete fields without the user having to type everything in full. It does not alter the form elements; the variables stay the same, so you are only adding annotations. By annotating the form fields with the kind of data you expect people to fill in, Chrome knows how to complete the form from the browser's autofill data. When a Chrome user visits the page, wants to buy something, and starts typing, they see an option to autocomplete: fill in the first box and the rest are completed automatically, which makes the form semantically understandable in some sense. As a result, users fly right through the form to sign up, purchase, or whatever. It is highly recommended; it may take a few hours of your time, but it is really worth it.
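
A minimal sketch of such an annotated form. Chrome's original proposal used an experimental x-autocompletetype attribute; the idea was later standardized as the HTML autocomplete attribute with the field tokens shown here (the field names and form action are illustrative):

```html
<!-- The autocomplete tokens tell the browser's autofill which piece of
     profile data belongs in each field; names and action are placeholders. -->
<form method="post" action="/checkout">
  <input type="text"  name="fname"  autocomplete="given-name">
  <input type="text"  name="lname"  autocomplete="family-name">
  <input type="email" name="email"  autocomplete="email">
  <input type="text"  name="street" autocomplete="address-line1">
  <input type="text"  name="zip"    autocomplete="postal-code">
  <button type="submit">Continue</button>
</form>
```
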
  13. Is it better to keep key content pages close to the root, or deep within a topical funnel structure? The most noticeable point: this is not SEO advice but behavioral advice. If content is a small number of clicks from the root page, visitors will tend to find it. If somebody has to click ten times to reach the page where they register for admission, far fewer people will find it than if registration is right on the root page. For Google it is not a big deal where the page sits in the path, whether at root level or eight levels deep; it may matter to other search engines, but not to Google. The only thing that matters is thinking from the visitor's perspective. To be honest, this is not search engine ranking advice, just an opinion on how to improve ROI (return on investment).
  14. Does Google crawl and treat TinyURLs that use a 301 redirect the same as other links? Yes. Whenever Google encounters one of these 301 redirects, it follows it and flows PageRank just as it would with a 301 from any other site. Danny Sullivan did a great piece of work on URL-shortening services: he took the top shorteners, such as tinyurl.com and bit.ly, and checked whether each does a 301 redirect or some other kind of redirect. When a shortening service does a 301, Google should flow all the PageRank exactly as it does with any other sort of 301. As long as the shortening service does it the right way, Google can follow the redirect and find the destination URL without any trouble.
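
A hypothetical wire-level sketch of a shortener "doing it the right way": the service answers the request for the short URL with a 301 status and a Location header naming the destination (the short code and destination URL are placeholders):

```http
GET /abc123 HTTP/1.1
Host: tinyurl.com

HTTP/1.1 301 Moved Permanently
Location: http://www.example.com/my-long-article
```
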
  15. Underscores or dashes in URLs: is there a difference between my-page and mi_pagina? Yes, they are treated differently. If you can choose between underscores and hyphens, I would personally go for hyphens (-). But if you already use underscores (_) and everything is working well, there is no need to worry or to change your architecture. As of now, hyphens (-) are treated as word separators and underscores (_) are not: my-page can match a search for [my page], while mi_pagina is treated as the single token mi_pagina. Someone inside Google has worked on treating underscores as separators too, a small change that could have a sizeable impact, but there is no confirmation of when or whether it will ship. This might change in the future, but it is not the case now.
  16. For some queries, Google shows the date of the post in the description snippet in search results. Is there a reason for this, and is there any way to tell Google not to show it? Google's snippets team is always trying to show the most helpful descriptions, or snippets, in the search results. On a forum there may have been four replies; on a blog there may have been 40 comments on a post; so the snippets team keeps thinking about new ways to make snippets useful, and highlighting the date a blog post appeared is one of them. They do this because when something is recent, that is often helpful information in itself. Google reserves the right to show whichever snippet it thinks is best for users, whether that is part of the page or the date the post went live; the aim is whatever serves users best. At present there is no way to tell Google not to show the date of a post in snippets.
  17. If one externalizes all CSS style definitions and JavaScript and disallows all user agents from accessing those external files via robots.txt, would this cause problems for Googlebot? Does Googlebot need access to these files? The recommendation is not to block them. For instance, the White House recently released a new robots.txt that blocked its images directory, CSS, JavaScript, and so on. It is better not to block these files, because access to them is very helpful when something spammy is going on with JavaScript, so it is genuinely good to let Googlebot go ahead and crawl them (a minimal robots.txt sketch follows below). Notably, these files are not huge, so allowing them does not drain much bandwidth. Just give Googlebot access to all of it. Most of the time Google will not fetch them, but in rare cases, when doing a quality check on someone's behalf or following up on a spam report, Google will fetch those files and make sure the site is clean and free of problems.
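
A minimal robots.txt sketch of the advice above, with placeholder directory names: do not disallow the paths holding your stylesheets and scripts; allowing them explicitly is harmless:

```
# robots.txt (directory names are placeholders)
User-agent: *
# Avoid rules like these, which hide assets from crawlers:
#   Disallow: /css/
#   Disallow: /js/
# Explicitly allowing them is fine:
Allow: /css/
Allow: /js/
```
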
  18. Google has become more proactive in showing results for "corrected" spellings. How will Google employ smart guesses in search results in the future? Google gets a huge number of visitors every day; some users are not savvy, and some do not always spell correctly. Looking at random queries, something like 10% of them may be misspelled, so Google set out to write one of the world's best spellcheckers. The point is that even though the "Did you mean" link gets a huge clickthrough, some people never knew it was there. Because of this, Google recently introduced a change: for some queries it shows a few results for what it believes is the correct spelling first, and then shows the normal results below. For users who do not know how to spell a word correctly, this helps a lot. It also helps against webspam, because spammers target typos and misspellings; regular users now stumble onto the good results instead of the spam. If you are a power user, and a ton of people are, you can always put a plus before a word to say "this is the exact word I meant to search for," or put it in double quotes, even for a single word (see the examples below). There are several ways to tell Google that you meant exactly what you typed. Google also tries to be smart the other way: if someone types something that looks misspelled but actually isn't, Google figures that out over time. The aim is always something reasonable that works for the vast majority of users, and to find improvements for the next generation of algorithms or the next data push for the spell-correction code.
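
The exact-match operators mentioned above, shown with a deliberate misspelling (Google has since retired the + operator; quoting still forces an exact match):

```
+recieve      at the time, the plus sign forced the exact term
"recieve"     double quotes force an exact match, even for a single word
```
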
  19. If one has a lot of blog content for a new section of a site (100+ pages), is it best to release it over a period of time, or is it fine to just unleash all 100 pages? Generally, you can release a hundred pages at once if the content is high quality. But at the scale of ten thousand, a hundred thousand, or a million pages, be a little more careful, not because it triggers any kind of automatic penalty, but because it looks like junk: one day the site has no content, and the next day there are suddenly millions of pages in the index, which makes one wonder whether it is legitimate content or auto-generated junk with no added value. When you create content organically, you usually end up with one page at a time, so the recommendation is to publish each page when you have it rather than waiting until you have accumulated a lot of content to release all together. Releasing a page at a time is a perfectly good practice, and for small-scale, high-quality material there is nothing to worry about.
  20. Is it good to put up a 'coming soon' page for new domains? Yes, it is a pretty smart thing to do. It is good for visitors, because at least they don't land on a black-hole page. If you have content on the way, there is nothing wrong with a 'coming soon' page; as you get more content you can put it out there, and when the full site is ready you can launch it. There is nothing to worry about ranking-wise; it is a good option for users, and it can be a good thing for search engines as well.
  21. Why doesn't Google search treat the @ sign differently, given the rise of Twitter? For example, @google and google give the same results. This was a deliberate choice: Google did not want to index email addresses, at least not in a way that would let somebody scrape Google to harvest a whole lot of them, so not indexing the @ sign was a considered decision. Google may start treating it differently over time, but for now it has not been a request heard often enough for Google to put resources into it.
  22. With canonical tags, can a page point to itself? For instance, if www.google.com/webpage points to www.google.com/webpage, will this cause a loop? The rel=canonical element, often called the "canonical link", is an HTML element that helps webmasters prevent duplicate content issues by specifying the "canonical", or preferred, version of a web page, which is good for a site's SEO. A self-referential canonical tag does not cause a problem in Google: when Google wrote the code supporting the rel=canonical attribute, it built in handling to make sure a canonical that loops back to the page itself causes no trouble, and other search engines commonly handle it the same way (a sketch follows below). Consider the alternative: you would have to check every single URL and test whether you were currently on that URL before deciding whether you could emit the tag. If you want a rel=canonical tag on every single page, that is fine, and even when it points back to the page itself it is no big deal at all; Google is completely fine with it.
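
A minimal sketch of the self-referential case, with a placeholder URL; the same template can be emitted on every page without special-casing whether the canonical happens to be the current URL:

```html
<!-- In the <head> of http://www.example.com/webpage (placeholder):
     the canonical points at the page itself, which Google handles fine. -->
<link rel="canonical" href="http://www.example.com/webpage">
```
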
  23. What is Google's view on guest blogging for links? High-quality guest blogging can be worthwhile, but what happens when it gets extreme, or someone takes it too far? Take an easy case first: a high-quality writer such as Lisa Barone wants to write a guest post on someone else's blog; the host should be happy to have her. Likewise, if people like Vanessa Fox or Danny Sullivan write an article for your blog, you should be glad to have them, because they bring a lot of insight and knowledge. It is a good win for the person hosting the post, and it is a great way for someone who is not yet well known but writes really well to become a little better known. But it can also be taken to extremes: people offering the same blog post to many outlets, or spinning a post and shopping it around to several of them, which turns it into a low-quality article. Sometimes the "author" even outsources the writing to a non-expert and then inserts hyperlinks so that those links end up in the post. Having high-quality bloggers trade posts or collaborate in different ways is a long and time-honored tradition; just be careful, because practices that make a lot of sense with high-quality people stop making sense at scale, and a massive number of links acquired that way is less likely to be counted by Google. The links Google wants to count come from higher-quality articles where someone thought hard about the message, brought their own point of view, and put real effort and originality into the piece. That should give a feel for the space: genuinely good guest posts have real value, while mass-produced ones might not be worth the time.
  24. A search for a physical product ranks Amazon number one, despite Amazon not offering the best user experience. What has Google done to prevent large corporations from dominating search results? To start, one has to accept that Amazon's user experience is actually relatively good, though it is not true that Amazon always ranks number one for every physical product. When you search for a book, Amazon is up there, but if the book has an official homepage, that page ranks well, sometimes at number one. The trouble is that not every book has a homepage, which is still surprising: authors have their own web pages but often no dedicated page for a particular book; it is simply a lack of savvy. A search for Mrs. Byrne's Dictionary of Unusual, Obscure and Preposterous Words turned up no content on the web except on Amazon, Goodreads, and Google eBooks. So the best answer is to make sure an actual page exists for the product. Google tries to work out what the official homepages are, whether for governments, universities, states, or whatever, and tries to return them when possible. Google also pays attention when users search, fail to find the product's actual homepage, and complain; that feedback is taken into account. In general, Google looks at the number of links and the content of the page, and if a specific page earns a lot of links because visitors think it's a great site, it will tend to rank well.