
Saturday, 2 April 2016

How Is the Knowledge Graph a Problem for SEO?




Nowadays a very popular topic is KGO - Knowledge Graph Optimization. How do you do it? Say you search for information about Sachin Tendulkar: Google matches the query against its database and shows a panel on the right-hand side of the SERP with images and important information about him.

The question we always ask is: where does this information come from? What we see is a mixture of information that other users have found useful and data drawn from sources such as Wikipedia. If something is wrong, there is a feedback button on the panel: submit the problem and the information can be corrected on Google and Wikipedia.



What Is the Impact of the Knowledge Graph on SEO?


As we know, the Knowledge Graph is built on this information. Because of it, webmasters can run into trouble with both rankings and traffic.


Take, for example, the search query "Rakhi Sawant": all the key information about Rakhi Sawant appears on the right-hand side, so users read that panel rather than clicking through to Rakhi Sawant's official website. The panel also shows competitors for the search query; in this result you can see Poonam Pandey and Mallika Sherawat.

Wednesday, 3 February 2016

How to Create a Robots.txt File?



Where do you put the file?


The short answer: put it in the top-level directory of your web server.


When a robot looks for the "/robots.txt" file for a URL, it strips the path component from the URL (everything from the first single slash) and puts "/robots.txt" in its place.

For example, for "http://www.example.com/shop/index.html", it removes "/shop/index.html", replaces it with "/robots.txt", and ends up with "http://www.example.com/robots.txt".
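
As a quick illustration, here is a minimal Python sketch of that rewrite using only the standard library; robots_txt_url is just an illustrative helper name and the URL is the example from above:

from urllib.parse import urlsplit, urlunsplit

def robots_txt_url(page_url):
    # Keep only the scheme and host, drop the path/query/fragment,
    # and put "/robots.txt" in their place.
    parts = urlsplit(page_url)
    return urlunsplit((parts.scheme, parts.netloc, "/robots.txt", "", ""))

print(robots_txt_url("http://www.example.com/shop/index.html"))
# -> http://www.example.com/robots.txt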



See also:

What program should I use to create /robots.txt?
How do I use /robots.txt on a virtual host?
How do I use /robots.txt on a shared host?

What to put in it

The "/robots.txt" file is a text file, with one or more records. Usually contains a single record looking like this:
User-agent: *
Disallow: /cgi-bin/
Disallow: /tmp/
Disallow: /~joe/
In this example, three directories are excluded.


Note: when you create this file you need a separate "Disallow" line for every URL prefix you want to exclude -- you cannot write "Disallow: /cgi-bin/ /tmp/" on a single line.


Note also that globbing and regular expressions are not supported in either the User-agent or Disallow lines. The '*' in the User-agent field is a special value meaning "any robot". Specifically, you cannot have lines like "User-agent: *bot*", "Disallow: /tmp/*" or "Disallow: *.gif".

What you want to exclude depends on your server. Everything not explicitly disallowed is considered fair game to retrieve. Here follow some examples:

To exclude all robots from the entire server

User-agent: *
Disallow: /

To allow all robots complete access

User-agent: *
Disallow:
(or just create an empty "/robots.txt" file, or don't use one at all)

To exclude all robots from part of the server

User-agent: *
Disallow: /cgi-bin/
Disallow: /tmp/
Disallow: /junk/

To exclude a single robot

User-agent: BadBot
Disallow: /

To allow a single robot

User-agent: Google
Disallow:

User-agent: *
Disallow: /

To exclude all files except one

This is currently a bit awkward, as there is no "Allow" field. The easy way is to put all files to be disallowed into a separate directory, say "stuff", and leave the one file in the level above this directory:
User-agent: *
Disallow: /~joe/stuff/

Alternatively, you can explicitly disallow each page you want to exclude:
User-agent: *
Disallow: /~joe/junk.html
Disallow: /~joe/foo.html
Disallow: /~joe/bar.html
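
As a quick sanity check, here is a small sketch using Python's standard urllib.robotparser to test which URLs the rules above would block; "ExampleBot" is a hypothetical crawler name and the example.com URLs are placeholders:

import urllib.robotparser

# The rule set from the example above, parsed from a string
# instead of being fetched from a live site.
rules = """
User-agent: *
Disallow: /~joe/junk.html
Disallow: /~joe/foo.html
Disallow: /~joe/bar.html
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(rules.splitlines())

# A disallowed page: the parser reports it may not be fetched.
print(parser.can_fetch("ExampleBot", "http://www.example.com/~joe/junk.html"))   # False
# Any page not explicitly disallowed is still fair game.
print(parser.can_fetch("ExampleBot", "http://www.example.com/~joe/index.html"))  # True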