3. Website availability
Since Google refers users to your website to read the papers, your pages must be available to both users and crawlers at all times. The search robots will visit your pages periodically to pick up updates and to verify that your URLs are still available. If the search robots are unable to fetch your pages, e.g., due to server errors, misconfiguration, or an overly slow response from your site, some or all of your articles could drop out of Google and Google Scholar.
- Use HTTP 5xx codes to indicate temporary errors that should be retried quickly, such as a short-term shortage of backend capacity.
- Use HTTP 4xx codes to indicate permanent errors that shouldn't be retried for a while, such as file not found.
- If you wish to move your articles to new URLs, set up HTTP 301 redirects from the old location of each article to its new location. Don't redirect article URLs to the homepage; users need to see at least the abstract when they click on your URL in Google results.
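The retry semantics of the status-code classes above can be sketched as follows. This is an illustrative model of how a crawler might react to each class, not a description of Google's actual crawl logic:

```python
def crawl_decision(status: int) -> str:
    """Map an HTTP status code to an illustrative crawler reaction."""
    if 200 <= status < 300:
        return "index"                              # content fetched successfully
    if 300 <= status < 400:
        return "follow redirect"                    # e.g., 301 moved permanently
    if 400 <= status < 500:
        return "permanent error: do not retry soon" # e.g., 404 file not found
    if 500 <= status < 600:
        return "temporary error: retry shortly"     # e.g., 503 backend overloaded
    return "unknown"

print(crawl_decision(503))  # temporary error: retry shortly
print(crawl_decision(404))  # permanent error: do not retry soon
```

The key distinction is the one the bullets draw: a 503 invites the crawler back soon, while a 404 tells it to stay away for a while, so serving the wrong class can keep articles out of the index longer than necessary.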
4. Robots exclusion protocol
If your website uses a robots.txt file, e.g., www.example.com/robots.txt, then it must not block Google's search robots from accessing your articles or your browse URLs. Conversely, it should block robots from accessing large dynamically generated spaces that are not useful for the discovery of your articles, such as shopping carts, comment forms, or results of your own keyword search.
E.g., to let Google's robots access all URLs on your website, add the following section to your robots.txt:
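A minimal allow-all section looks like this (an empty Disallow line matches nothing, so every URL remains crawlable):

```
User-agent: *
Disallow:
```

If you also need to keep robots out of dynamically generated areas as described above, add explicit Disallow lines such as `Disallow: /cart/` (the path here is only a placeholder for your own site's layout).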